WASHINGTON—Section 230 of the Communications Decency Act—the foundational law that governs intermediary liability for online services and Internet users in the United States, thus shaping the modern Internet economy—is the subject of an increasingly contentious, high-stakes debate in Congress. The heart of the issue is the question of who is responsible for illegal online speech. But especially in the aftermath of the January 6 attack on the U.S. Capitol, Section 230 has become a political football, rolled into heated debates about the First Amendment, political speech, and social media—issues that are only tangentially related to intermediary liability.
Against this backdrop, a new series of five reports released today by the Information Technology and Innovation Foundation (ITIF), the leading think tank for science and technology policy, concludes that Section 230 reform should preserve the fundamental principle that liability for online content resides with its creator, while also ensuring that online platforms are held responsible for their own conduct.
In a comprehensive review and analysis of the issue and the most prominent arguments and proposals that have been put forward, ITIF concludes it is important for reforms to preserve online innovation, encourage content moderation, avoid targeting lawful speech, and maintain a consistent national standard for online intermediary liability.
“Section 230 was a visionary law. We shouldn’t do away with it or change it haphazardly, but we shouldn’t treat it as sacrosanct, either,” said ITIF Vice President Daniel Castro, who co-authored the new reports. “Many of the proposals to eliminate or alter Section 230 would undermine online services and pose a major setback for free speech and innovation, but that doesn’t mean there shouldn’t be some targeted reforms. The best path forward would be to narrow the liability shield so it doesn’t protect bad actors, and establish a voluntary safe harbor to minimize nuisance lawsuits and avoid negative spillover effects on innovation.”
ITIF’s comprehensive analysis unfolds in a series of five reports:
- Overview of Section 230: What It Is, Why It Was Created, and What It Has Achieved
- The Exceptions to Section 230: How Have the Courts Interpreted Section 230?
- Fact Checking the Critiques of Section 230: What Are the Real Problems?
- How Other Countries Have Dealt With Intermediary Liability
- Proposals to Reform Section 230
After the first two reports provide an overview of Section 230’s history and review how courts have interpreted the law, the third report parses the most common critiques of the law, finding that some stem from fundamental misconceptions, though most are based on legitimate concerns. The fourth report in the series then reviews other countries’ approaches to the issue of intermediary liability. Finally, the fifth report catalogues the pros and cons of the major proposals that advocates have advanced in the United States.
Based on this comprehensive analysis, ITIF concludes that Congress should take the following steps to modernize Section 230:
- Establish a good faith requirement to prevent bad actors from taking advantage of Section 230(c)(1)’s liability shield.
- Establish a voluntary safe harbor provision to limit financial liability for online services that adhere to standard industry measures for limiting illegal activity.
- Expand federal criminal laws around harmful forms of online activity that are also illegal at the state level.
Notably, as explained in the fifth report, establishing a good faith requirement or a safe harbor provision would each be problematic on its own. But if pursued jointly as part of a targeted Section 230 reform, they would address the weaknesses of implementing either proposal independently.
“If done correctly, reforming Section 230 could address many harms on the Internet,” said Ashley Johnson, who co-authored the reports with Castro. “But it’s important to recognize that getting online intermediary liability right doesn’t resolve the ongoing debate about political speech on social media. That issue is really about the First Amendment and the appropriate set of rules and societal norms to moderate political speech on large social media platforms. It’s a separate debate.” ITIF will address that issue in a forthcoming report.