What We Learned When We Broke the Internet: Insights From the Section 230 Debate

Section 230 of the Communications Decency Act has quickly transformed from an obscure law known mostly to tech policy experts into a regular feature of public debates and policy discussions. Why is this brief, 25-year-old law making headlines, and how could reforming or repealing it impact our daily lives?

Over the last six weeks, we have taken a deep dive into the debate surrounding Section 230, from the history of its interpretation in U.S. courts to its wide-ranging impact on society, both online and off. In our limited podcast series Ellysse and Ashley Break the Internet, we talked to a dozen experts about this important and increasingly contentious law in an effort to draw out the main arguments on both sides of the debate and highlight key considerations for the policymakers, advocates, and experts working on questions of intermediary liability and online content.

Here are some of the takeaways from our interviews, along with some important questions that remain unanswered in the ongoing debate about Section 230, online speech, and the future of the Internet.

Why Section 230 Matters

While the guests on Ellysse and Ashley Break the Internet expressed a range of opinions on whether (and how) to amend Section 230, all agreed on one thing: Policymakers should not fully repeal the law. It plays a pivotal role in our digital world, and removing it entirely would have harmful, widespread consequences.

Many of the experts we spoke to outlined Section 230’s valuable contribution to shaping the Internet as we know it today. At its most fundamental level, Section 230 enables online services that rely on user-generated content to facilitate online speech and expression on an unprecedented scale. In our conversation with Cathy Gellis, a veteran Internet professional-turned-lawyer working at the intersection of technology and civil liberties, she called these platforms “helpers”: services that ensure whatever a user wants to send or put on the Internet is shared with other users. “In order to make sure that those helpers can exist,” she argued, “we don’t make them liable for the help they gave to people to say things.” Because of Section 230 protections, these intermediary “helpers” can strengthen, rather than undermine, users’ free speech rights.

Section 230 emerged in the early years of the Internet as a solution to what is called the “Moderator’s Dilemma.” Patrick Carome, an attorney who is well known for his defense of online services in Section 230 cases, explained to us that this dilemma arose from legal cases involving the online services Prodigy and CompuServe in the 1990s, which effectively set a precedent that “if you engage in self-policing of your site, you actually are more exposed and directly responsible for third-party content.” This, Carome said, “was perceived as sending exactly the wrong message.” So, Congress introduced Section 230 to encourage responsible content moderation. The intermediary liability protections afforded by Section 230 are even more important today. Neil Chilson, a former chief technologist for the Federal Trade Commission who is now a senior research fellow at the Charles Koch Institute, noted, “Twitter deals with a volume of content every second that probably dwarfed what Prodigy had in a day.”

In fact, as many of our guests pointed out, Section 230 has enabled platforms to grow and develop responses to the challenges of moderating content at this scale. This is not to say platforms have found the silver bullet solution for moderating content perfectly—realistically, they probably never will. That’s because there are a lot of variables that go into content moderation, from the size of platforms’ user bases and staff capacity, as regulation expert Daphne Keller pointed out, to the different social, legal, and political contexts in which they operate. David Kaye, a former UN special rapporteur on the right to freedom of opinion and expression, noted that companies must observe national laws wherever they offer their services, “and that’s going to be pretty complicated for companies that operate in up to well over 50 or 150 countries.”

Finally, Section 230 is important not just for platforms themselves, but also for competition and innovation in the digital economy. Several guests argued that intermediary liability protections foster competition by reducing risks and costs for smaller platforms. Reddit’s policy director Jessica Ashooh was clear on this point: The company simply could not exist in its current form without intermediary liability protections.

Misconceptions About Section 230 and Intermediary Liability Protections

While Section 230 may not be a perfect solution, some guests noted that many critiques of the law stem from misunderstandings of how it works, what it covers, and the broader legal environment in which intermediaries must operate. Perhaps the most oft-cited misconception was that Section 230 is only beneficial to “big tech” platforms like Facebook or Google, when in fact it protects platforms of all sizes that provide a wide range of online services. These benefits extend to users: “Every Internet user benefits from Section 230,” Electronic Frontier Foundation staff attorney Aaron Mackey explained, both in terms of the services they are able to use and in liability protections for third-party content they re-share on these platforms.

There also seems to be confusion about the relationship between Section 230 and the First Amendment. “When we’re talking about some of the controversies that have occurred over recent content moderation decisions, they really are probably protected under the platforms’ First Amendment rights,” noted Jennifer Huddleston, director of technology and innovation policy at the American Action Forum. Section 230 expands the First Amendment rights of users and intermediaries alike by enabling online speech. If platforms had to engage in costly litigation to uphold these rights, Gellis explained, those rights would actually be diminished.

Another misconception that our guests highlighted was the belief that Section 230 protections require, or should require, political neutrality. Huddleston pointed out that “both of the authors of Section 230 … have been very clear that they never intended neutrality to be involved in this.” Neutrality provisions are also a First Amendment issue. “Political speech is at the heart of free speech,” Chilson said, so reform proposals that would obligate platforms to moderate political speech in a certain way raise constitutional issues.

Underlying these misunderstandings is the fundamental challenge of the Section 230 debate: Policymakers can’t seem to agree on what should and shouldn’t be allowed online. Several guests identified this dilemma: “Many on the political left … are concerned about a lack of moderation, whereas many on the political right are concerned about a perception of over-moderation,” explained Klon Kitchen, a senior fellow at the American Enterprise Institute. This disagreement about the nature of the problem makes finding a solution particularly challenging.

It is in some ways unsurprising that the disagreement persists, given the massive challenge of moderating content at scale. There is bound to be confusion not only about what content is acceptable, but also about what falls within legal bounds. Some content, such as child sexual abuse material (CSAM), can be easily recognized as illegal. But with other content, such as defamation or violent content, the bounds of what is legal or acceptable are much more subjective. “Any time you’re asking someone to remove subjective content, that creates issues,” said Andrew Bolson, a privacy lawyer who has advocated for Section 230 reforms. The same is true for content that would generally be considered legal speech: As Huddleston noted, not all policymakers agree where the line should be when it comes to harmful content.

Challenges for Online Content and Arguments for Reform

Whether you believe Section 230 should be repealed, amended, or left untouched, it’s clear that the Internet and digital communications have changed significantly in the 25 years since its introduction. Many complex challenges have arisen that few would have predicted in 1996—and to some, these challenges indicate a need for broader reform. For example, David Chavern, CEO of the News Media Alliance, argued that the content itself is not the problem, but rather the way platforms have decided to amplify certain content through algorithms. Chavern and other advocates for reform have called for algorithmic transparency and greater accountability.

There is also much more harmful and illegal content online now. To address this, Kitchen has proposed a “Bad Samaritan” carveout to strip protections from intermediaries that knowingly facilitate illegal content or activities. The distribution of nonconsensual pornography (NCP, also known as “revenge porn”), in which an individual’s intimate photos or videos are shared online without their knowledge or consent, has also become more widespread. Elisa D’Amico, who leads a pro bono legal project representing victims of NCP, told us that it’s often difficult to get this content removed because it is virtually impossible to verify consent without context, and because the content can spread rapidly to all corners of the Internet.

The underlying question is: Has the law been able to keep up with the rapid pace of innovation in the way we interact online, and with the concurrent proliferation of harmful content? Many of our expert guests believed it has, but some argued the law is no longer sufficient.

Consequences of Reform and the Future of Section 230

Over the course of 12 episodes, experts on all sides of the debate gave us a lot to think about when it comes to whether or how to amend intermediary liability protections. It is clear there would be significant consequences if the law were to undergo drastic change. For one thing, there would be a noticeable impact on online speech. Keller explained that without Section 230, some platforms would over-moderate and censor content, while others would take a “hands-off approach” and cease moderating entirely—leaving alone plenty of undesirable-but-legal content that most people wouldn’t want to see.

Many of our guests also argued that even incremental carveouts would have a chilling effect on speech. “We should be equally cautious about this kind of piecemeal approach as calls for broad changes to Section 230,” cautioned Huddleston, because “if you have enough of those little carveouts, then you’ve undercut the principle of the law in a way that is almost even more damaging.” Huddleston and others pointed to FOSTA-SESTA, which imposed more stringent liability for sex trafficking, as an example of how even carveouts that seem straightforward can backfire. Reddit’s Ashooh explained that FOSTA-SESTA also led to over-moderation of related but beneficial content, such as support services and resources for sex workers.

Limits to intermediary liability protections could also have adverse effects on competition and further innovation, both in terms of the services themselves and in their approaches to tackling harmful content. “We have to think really hard about what the impact of the change in how liability works for these types of companies would affect their ability to come up with new services, to come up with things that make consumers better off,” Chilson cautioned. For one thing, greater liability means higher overall costs. It also means greater risk—which makes new platforms less attractive to investors.

Certainly, there are constitutional considerations in play as well. “I think some of the proposals [for reform] … come at it from the perspective of not actually even having ground in basic First Amendment principles,” said Mackey. The free speech aspect of intermediary liability protections is important for users in the United States, but as Kaye explained it also sets a baseline for speech-protecting approaches to content in other countries.

In looking to the future of Section 230, the stakes are undeniably high. Policymakers will have to address concerns about incentivizing over- or under-moderation, protecting users’ and platforms’ constitutional rights, and reconciling contrasting policy objectives.

There are many other voices in this debate beyond the experts we interviewed. For further resources, see the show notes for individual episodes at itif.org/230pod.
