Sunsetting Section 230 Will Make the Internet Worse for Everyone

May 15, 2024

Section 230 of the Communications Decency Act, the controversial and often misunderstood law stating that online services are not liable for the content their users post, is back in the limelight. The law has survived numerous attempts to weaken its liability shield, criticism from both Democratic and Republican lawmakers and three current and former presidents, a Supreme Court case, and years of debate. But draft legislation introduced by Reps. Cathy McMorris Rodgers (R-WA) and Frank Pallone (D-NJ) on May 12, 2024 would sunset Section 230, giving Congress and stakeholders just 18 months to come up with a new legal framework for online intermediary liability.

Policymakers have blamed Section 230 for any number of online problems. While criticism has come from both sides of the aisle, Democrats tend to blame the law for online services—particularly large social media companies—not removing enough potentially harmful or objectionable content, such as hate speech, harassment, and misinformation and disinformation. Meanwhile, Republicans have accused the same companies of removing too much content, censoring conservative opinions, and de-platforming conservative users. They are united in their opposition to Section 230.

Other stakeholders would use Section 230’s repeal to achieve their own ends: as a punishment for large social media companies’ real or alleged mistakes, as a means to break up “Big Tech” companies that benefit from Section 230’s legal shield, as a way to get children off of social media, or as a way to get rich by suing deep-pocketed “Big Tech.” While these stakeholders may end up getting their way if Congress sunsets Section 230, everyday users, businesses, and the digital economy would suffer.

Using Section 230 as a means to punish “Big Tech” is a flawed premise from the start because large tech companies are not the only ones who benefit from Section 230’s liability shield. Any online service that hosts user-generated content relies on Section 230, including not just large (and small) social media platforms, but also knowledge-sharing websites like Wikipedia, review-sharing platforms like Yelp, online marketplaces like Etsy, dating apps like Bumble, and any online service with a comment section, including newspapers. Users also benefit from Section 230, as it also protects individuals from facing liability for other users’ speech, such as when they forward an email or repost something on social media.

Furthermore, the diversity of speech online—including speech that some may find objectionable—is not a result of Section 230, but of the First Amendment. Even without Section 230, individuals have the right to share their opinions and express themselves, as long as they do not violate the law by doing so, while online services have the right to choose what types of speech they want to associate themselves with and to disassociate from speech that does not reflect their values. Online services do this by establishing community guidelines that users must adhere to as a condition of using the service.

The main result of sunsetting Section 230 would be an influx of frivolous lawsuits against online services that host user-generated content. Because of the First Amendment, online services would likely win most of these lawsuits, but not without going through the expensive litigation process. This would take resources away from innovation, hindering companies’ ability to experiment with and offer new services and safety features. Online services with fewer resources could go out of business entirely.

Online services that survive the flood of litigation would have a few options to either recoup or avoid these high costs. First, they could start charging users more or charge for services that were previously free, to the detriment of all users but particularly those from low-income households. Alternatively, online services that host a large amount of user-generated content could implement stricter guidelines that censor controversial forms of speech, restricting the freedom of expression online.

Ostensibly, this would serve Democrats’ purpose of getting online services to remove more objectionable content. In reality, takedowns of controversial speech would have an outsized impact on marginalized populations like the LGBTQ community, as well as social movements like Black Lives Matter and #MeToo. Takedowns would also likely affect conservative content, contributing to Republicans’ complaints of censorship.

Congress has been debating Section 230 for years, always coming up against the same obstacle: different people disagree about what the Internet should look like. Reps. McMorris Rodgers and Pallone argue that this delay is instead the result of “Big Tech’s refusal to engage in a meaningful way” and that sunsetting Section 230 is necessary to put pressure on companies to compromise with Congress. But no amount of discussion with large tech companies will resolve the irreconcilable differences between Republicans and Democrats on how online services should handle controversial online speech. Furthermore, Congress has given “Big Tech” little opportunity to negotiate after years of animosity toward these companies and suspicion over their every action.

In its current form, Section 230 allows for a diverse Internet ecosystem, allowing online services to moderate content in a way that best suits their needs and the needs of their users. It is highly unlikely that, in 18 months, Congress will come up with a better solution to online intermediary liability protection than the one America has now, which enabled the Internet to grow and flourish and the United States to become a global leader in the digital economy. The more likely result is that online services will lose vital liability protections, making the Internet a worse place for the average user and creating obstacles to innovation.
