Selective Outrage Over AI and Copyright


February 20, 2025

As the UK debates whether to reform copyright law for AI training, moral grandstanding is in full swing. Among the leading voices is Baroness Kidron, a member of the House of Lords, who recently denounced the government’s proposal to allow AI systems to train on copyrighted material from the Internet unless a creator opts out. She described the proposal as a “forced marriage on slave terms,” arguing that the government has drunk Silicon Valley’s proverbial Kool-Aid, seduced by promises of economic gain, and embraced an arrangement that lets Big Tech cash in on creative labor while stripping creators of their property rights. According to like‑minded critics, the only safeguard is to cling to “a gold-standard UK copyright law that has existed since 1709.”

However, this portrayal obscures reality and history: The U.S. copyright system was cut from the UK’s cloth, and when America inherited it in 1790, British copyright tradition focused primarily on protecting economic interests, controlling reproduction, and ensuring financial benefits for copyright holders—it placed little to no emphasis on authors’ personal rights over how their works were used (unlike the Continental system).

While the UK has revised its copyright laws over time, including in 1911, 1956, and 1988, these changes have largely preserved the system’s core focus on economic interests over expansive rights for individual creators. Even when the UK formally introduced new creator rights in 1988 (e.g., the right to be correctly credited and the right to prevent certain changes to a work), they remained narrow and, in many cases, waivable. That history matters because it shows that UK copyright law isn’t some purist, inviolable system—it has long chosen to center economic realities while balancing creative interests.

Importantly, the long-standing balance in UK copyright law has never prevented the dynamics critics now decry but rather shaped who is most exposed to them. For decades, many of the non-AI industries Kidron explicitly defends have drawn on creative works under copyright rules that have allowed them to observe, extract patterns from, and build upon creative contributions from online communities without requiring permission.

Consider the entertainment industry. Gaming companies like Epic Games have made millions selling emotes, many of which are virtual recreations of viral dance moves originally created by Black artists and posted online. While the individual videos demonstrating these moves might be copyrightable, the dance moves themselves are not: both UK and U.S. copyright law protect choreography only when it is fixed in a tangible form, and even then, simple dance moves and short routines generally fall outside that protection.

This means that while a fixed, choreographed work such as the 2024 staging of The Nutcracker by The Royal Ballet can be fully copyrighted, many of the improvisational, genre-defining forms that have shaped popular culture, particularly those originating from Black communities, cannot. That legal distinction means that any company can scour the Internet, figure out which trends gain traction online, replicate them into purchasable emotes or commercial products, and monetize styles without ever copying an actual recording.

No industry has been more adept at leveraging online content without permission than advertising, though. In June 2014, Kayla Lewis coined the phrase “on fleek” in a six-second Vine clip. The video itself was likely copyrightable, but the phrase was not, because both UK and U.S. copyright law protect the original expression of ideas, not ideas themselves, and short phrases are too insubstantial to qualify for protection.

Scholars have been writing for decades about how this distinction disproportionately disadvantages creators from marginalized communities. Still, in the months that followed, advertisers leveraged “on fleek” to sell hundreds of products. From restaurant chains using it in social media marketing campaigns to clothing companies splashing the term across any and all apparel, multinational conglomerates sold an untold number of products by commodifying Lewis’ contribution without her permission and without compensation.

The music industry, which has also been among the critics of AI training, has long relied on a system that permits creative borrowing without permission, making its moral argument inconsistent. Paul McCartney has been among the most vocal opponents of AI training, warning that it will allow artists’ work to be “ripped off.”

However, this concern conveniently ignores a long-standing legal distinction in copyright law—one that allowed The Beatles, and the entire music industry, to freely borrow from and profit off musical styles rooted in historically marginalized communities without permission or recourse. Neither U.S. nor UK copyright law protects stylistic expressions, only specific recordings and compositions, which is why McCartney’s imitation of Little Richard’s signature whooping vocals on She Loves You in 1963 is generally not actionable in either country.

Critics may argue that this is an apples-to-oranges comparison, insisting that earlier permissionless use applied only to unprotected works while AI training now involves copyrighted material. But this distinction misses the point entirely.

Opponents of UK reform are not just making a legal argument; they are framing it as a moral one, casting AI training as a violation of fundamental rights akin to forced labor. In doing so, they suggest that allowing AI to learn from creative labor without permission is an abandonment of UK principles, a capitulation to a system they imply is rooted in American-style exploitation. Yet history shows these types of exceptions are, and have long been, a fundamental part of UK copyright law.

Moreover, to call the government’s proposal a “forced marriage on slave terms” is reckless—not only because it is oblivious to the kinds of creative work that copyright has historically not protected, but also because it misrepresents what is actually changing: The proposal is not about altering limits on how much control creators have over their work; those limits have long been in place. Instead, it seeks to impose new restrictions on AI training—restrictions that have never applied to traditional industries.

If a genuinely updated view of fairness is sought—one that changes what can be learned from creative works—that would require departing from long-standing copyright principles, a shift that, if pursued, should be applied uniformly across all industries rather than selectively restricting AI training. Conversely, if policymakers’ goal is to preserve existing copyright principles, they should acknowledge the reality: the law forbids copying but permits learning from creative works, while also limiting how much control creators have over their work. AI should not be treated differently.

Whatever one thinks of the dynamics of creative use and profit, they have existed for decades as part of a system that was rarely questioned, even when it operated in ways that were uneven. To suddenly declare them intolerable only when status quo actors are on the other side is not a principled defense of creators; it simply reflects a discomfort with disrupting a familiar order. Lawmakers should look past the moral theater of different industries accusing one another of exploitation and engage in a clear-eyed debate that does not mistake shifting economic dynamics for fundamental rights.
