Generative AI Is the Next Challenge for Section 230

April 12, 2023

One of the most contentious U.S. laws of the Internet age, Section 230 of the Communications Decency Act, will face yet another challenge as content creation continues to evolve with emerging technology. Artificial intelligence (AI), and specifically generative AI, is poised to transform the way online services deliver content to users. To avoid an influx of lawsuits against online services that would stifle innovation and ultimately harm consumers, Congress should expand Section 230 to cover AI-generated content.

Section 230 protects online services and users from facing legal consequences for hosting or sharing third-party content, and it was partly responsible for creating the modern Internet. The future of Section 230 is already uncertain: the law has drawn criticism from both sides of the aisle, faced multiple reform and repeal attempts in Congress, and been criticized by former Presidents Obama and Trump as well as President Biden. Meanwhile, the Supreme Court may limit the statute in its decision in Gonzalez v. Google later this year.

Generative AI—AI systems that can produce novel content such as text, images, or music in response to user prompts—has proven equally controversial and equally misunderstood. Much of the debate around generative AI has focused on the intellectual property implications of the technology, but as search engines such as Google, Bing, and DuckDuckGo begin to experiment with generative AI, online services will need to navigate questions of how Section 230 applies to AI-generated content.

Section 230 states that "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." This applies to many different forms of third-party content, including search results. When a search engine displays results in response to a user query, the links and snippets provided are third-party content. If, for example, that content includes defamatory statements, the search engine is not liable for defamation because it had no hand in creating the content.

On the other hand, if a search engine uses generative AI to respond to user queries, and the AI-generated content contains defamatory statements, Section 230 may not apply. Since generative AI by definition produces novel content, rather than simply repackaging third-party content, it is difficult to imagine search engines successfully arguing that Section 230 shields them from liability for the content their AI systems produce.

The authors of Section 230, Sen. Ron Wyden (D-OR) and former Rep. Chris Cox, have said that Section 230 should not apply when online services use generative AI to create content. Similarly, Supreme Court Justice Neil Gorsuch suggested during oral arguments in Gonzalez v. Google that he, too, believes AI-generated content falls outside the scope of Section 230.

As more online services adopt generative AI, the lack of Section 230 coverage will likely lead to an influx of expensive lawsuits, causing online services to divert resources away from innovation and toward covering their legal expenses. Even if these lawsuits ultimately fail on First Amendment or other grounds, without Section 230's protection, companies using generative AI would not be able to get them quickly dismissed. Some online services may choose not to use generative AI at all due to the risk of liability and the associated legal costs, cutting users off from new ways of receiving information and creating content. Finally, higher legal costs may cause online services to start charging more, or charging for services that were previously free, to the detriment of users, particularly low-income users.

To avoid this outcome, Congress should amend Section 230 to shield online services and users from liability for content that is automatically generated using "information provided by another information content provider." This approach would recognize that generative AI is not unlike a search engine in the way it produces content: both require user input and generate results based on third-party information. It would also ensure that online services can continue to experiment with emerging technologies and provide innovative, low-cost services for their users.

Finally, this approach would preempt frivolous lawsuits, including those in which users intentionally attempt to get an AI system to produce defamatory content. If an AI system produces content that accurately summarizes a defamatory statement by a third-party speaker, the online service that hosted that content clearly should not be liable. Even if an AI system erroneously produced false or defamatory content about someone that was not based on third-party information, as is likely to happen in the early stages of developing and deploying an emerging technology, it would have done so without actual malice, a key element that public figures must prove in defamation claims, so such suits would likely fail. Instances of AI systems producing unprompted false or defamatory content about private figures are likely to be rare. As with online content produced by humans, some harmful content is inevitable, and balancing these risks against the potential benefits of generative AI is key to allowing innovation to continue.

Given lawmakers' open hostility to tech companies, Congress is unlikely to broaden Section 230 any time soon. Yet U.S. competitiveness in AI is essential, especially vis-à-vis China, so members of Congress should consider the bigger picture and eliminate barriers to the broader adoption of AI.
