
Blame Lawmakers, Not AI, for Failing to Prevent the Fake Explicit Images of Taylor Swift

January 28, 2024

Internet users shared graphic fake images of Taylor Swift millions of times on social media and other online platforms this past week, sparking renewed concerns about nonconsensual pornography and calls for legislation to address the issue. People are right to be outraged because this problem is neither new nor unexpected, and it is one that policymakers could have done more to address with relative ease. But while AI has made it easier to create these types of images, the real questions are not about AI but rather about the willingness of lawmakers to protect victims—primarily women—of this type of online harassment and abuse.

To be clear, AI-enabled services have made it easier than ever for Internet users with minimal technical skills to create sexually explicit images and videos of individuals without their consent. These online services allow users to upload a photo and, with only a few clicks, create a nude image of that person. Anyone can become a target, from celebrities and politicians to coworkers and classmates. Moreover, these services have become progressively more realistic and customizable over time, incorporating the same advances in machine learning that have powered the rise of generative AI tools like ChatGPT and DALL-E, which allow users to create novel media from specific prompts.

Concerns about deepfake pornography have been growing for years. In 2018, a fake pornographic video appearing to depict Gal Gadot, the star of the Wonder Woman movie franchise, circulated on the Internet. As the capability to use AI to generate fake media matured, researchers quickly discovered that many Internet users were primarily using it for seedy purposes. A 2019 study found that the overwhelming majority of deepfakes on the Internet—96 percent—were pornographic, and most depicted women.

While AI is the shiny new object that has attracted a lot of attention, the underlying issue has always been the problem of image-based abuse—when people share, or threaten to share, nude or sexual images of another person without their consent. This problem predates the Internet—Marilyn Monroe, Jackie Kennedy, and Vanessa Williams famously had their nude photographs published in Playboy, Hustler, and Penthouse, respectively, without their permission. In the Internet era, where anyone can create a website and become a global publisher or share content on social media, the problem has become much more pervasive. Many individuals (again, mostly women) have discovered intimate images and videos of themselves shared online without their permission—commonly referred to as “revenge porn”—with sometimes devastating consequences for them both personally and professionally.

For many years, victims of such abuse had little recourse and critics often blamed them for creating the compromising digital media in the first place. However, public opinion has shifted and lawmakers in 48 states have now enacted laws criminalizing the nonconsensual distribution of intimate images, and these laws have successfully withstood multiple challenges arguing they were unconstitutional infringements on free speech. In addition, when Congress reauthorized the Violence Against Women Act in 2022, it added a provision creating a civil right of action allowing victims to sue when someone shares intimate images of them without their consent.

But the impact of these laws has been mixed. While many victims have found justice, others have not. Part of the problem is that the laws vary by state. For example, some treat the offense as a misdemeanor, while others treat it as a felony. Some focus on intent (i.e., whether the perpetrator acted with intent to harm the subject), while others focus on consent (i.e., whether the perpetrator acted without the subject’s consent). These laws also run up against the reality that the anonymity of the Internet means that those who distribute this content may never be identified.

But there is more policymakers can and should do. First, while the Violence Against Women Act Reauthorization created civil liability at the federal level, it did not create criminal liability. The SHIELD Act, first introduced in Congress in 2016, would do exactly that, and it has bipartisan support in the U.S. Senate. Creating a strong federal law that criminalizes nonconsensual distribution of intimate images would make it easier for victims to pursue justice no matter which state they live in. Giving federal law enforcement agencies more resources to pursue these crimes would also help.

Second, most of the laws criminalizing revenge porn do not cover computer-generated images, and only about a dozen states have updated their laws to close this loophole. Here too, Congress has an opportunity to act by creating a federal statute that prohibits such activity. H.R. 3106, the Preventing Deepfakes of Intimate Images Act, introduced in May 2023 by Rep. Joseph Morelle (D-NY), would update the Violence Against Women Act to extend civil and criminal liability to anyone who discloses or threatens to disclose digitally created or altered media containing intimate depictions of individuals with the intent to cause them harm or with reckless disregard for potential harm. As of January 2024, the legislation has nearly two dozen cosponsors from both sides of the aisle.

Unfortunately, given widespread fears about AI and backlash against the tech industry, some critics are quick to point the finger at AI. For example, reacting to a question about the fake Taylor Swift images, White House Press Secretary Karine Jean-Pierre said, “the president is committed, as you know, to ensuring we reduce the risk of generative AI.” However, regulating AI will not address this problem because mainstream AI services already prohibit this type of content. For example, OpenAI prohibits both “impersonating another individual or organization without consent” and “sexually explicit or suggestive content.” And unless policymakers ban generative AI entirely, the underlying technology—which is publicly available to run on a personal computer—will always be around for bad actors to misuse.

Moreover, as history has shown, this problem did not start with AI. Nor has the technology industry ignored the problem. Major platforms like Google and Meta now offer dedicated tools that let users report intimate personal images for swift removal. Reddit, one of the original hubs for distributing hacked celebrity photos in 2014 and later a source of pornographic AI-generated deepfakes, has in more recent years implemented stringent new rules and methods to detect and respond to nonconsensual image sharing.

In this case, the problem is not that technology is moving too fast; it is that lawmakers are moving too slowly. While it may be too much to expect abusers and Internet trolls to stop this type of activity entirely, it is reasonable to expect that those who distribute this content should face significant civil and criminal liability. Given Taylor Swift’s fame and popularity, this most recent incident will hopefully spur Congress to finally move forward with a legislative solution focused on stopping perpetrators of revenge porn, not on curtailing the development of AI.
