Section 230 Should Not Be a Political Weapon
January 27, 2026

In a striking turn, Sen. Rand Paul (R-KY), a long-time defender of free speech, reversed his position on Section 230—the law that underpins today’s Internet. Writing in a New York Post op-ed on January 19, 2026, he argued that Big Tech companies like Google “can’t be trusted to do the right thing” when it comes to content moderation. Frustrations with social media platforms’ content moderation practices exist on both sides of the aisle, but wielding threats to revoke Section 230 protections as a political weapon to punish “Big Tech” platforms for individual political grievances will undermine online free speech and make the Internet a worse place for consumers.

Section 230 of the Communications Decency Act protects online services and their users from facing legal liability for third-party content. The law’s protections do not extend to criminal liability. When platforms fail to follow the law regarding illegal content and activity like child sexual abuse material or nonconsensual intimate imagery, they may still face legal consequences.

In Sen. Paul’s case, however, the content in question—YouTube videos accusing him of taking money from former Venezuelan President Nicolás Maduro—is not criminal in nature. Sen. Paul could sue whoever created the content for defamation under civil law, but Section 230 says platforms like YouTube are not liable. In other words, Section 230 establishes that responsibility for content rests with those who created it.

These protections benefit more than just Big Tech. Any online service that hosts user-generated content relies on Section 230: social media platforms both large and small, knowledge-sharing websites like Wikipedia, review platforms like Yelp, online marketplaces like Etsy, dating apps like Bumble, and any online service with a comment section, including newspapers. Users benefit as well, since Section 230 protects individuals from facing liability for other users’ speech, such as when they forward an email or repost something on social media.

Threats to revoke or weaken Section 230 in response to specific grievances turn a generally applicable and widely beneficial legal framework into a tool of retaliation—one that invites future lawmakers to pressure platforms to carry or suppress objectionable speech.

To address Sen. Paul’s concerns over potentially defamatory online content, platforms would need to act as arbiters of truth, deciding which statements or theories hold merit and should remain online and which do not—the very approach Sen. Paul criticizes in the same article, where he faults YouTube for removing his own content warning against using cloth masks to protect against COVID-19 transmission. Requiring platforms to make these judgment calls and holding them liable when they get one wrong would chill online discourse on important social and political topics and incentivize over-removal of controversial but protected speech.

Plenty of content online is either blatantly untrue or not sufficiently evidence-based. Social media, and the Internet more broadly, were not designed as forums for sharing exclusively factually correct information. Users share opinions, theories, and yes, even lies. The same is true of plenty of speech on the radio, on TV, in everyday conversation, and even on the Senate floor. It is unrealistic and, in the long run, undesirable to expect otherwise, especially of social media platforms designed for everyday use by everyday individuals.

None of this invalidates Sen. Paul’s—and other lawmakers’—concerns over social media platforms’ content moderation decisions. Accusations that major platforms display partisan bias in their moderation decisions, censor controversial political speech, or enable the spread of mis- and disinformation are serious allegations that researchers have evaluated and should continue to evaluate. However, there are ways to address these grievances without turning Section 230 into a political weapon.

Change should start with legislation setting transparency requirements for social media platforms’ content moderation decisions. Congress should require these platforms to clearly describe what content and behavior are and are not allowed, how they enforce these rules, and how users can appeal moderation decisions. Additionally, the law should require platforms to enforce their rules consistently and to create an appeals process for content moderation decisions wherever one does not already exist.

To increase transparency surrounding content moderation, platforms should release publicly accessible annual reports, including data on how much rule-breaking content and behavior of each type the company removed from the platform, how many of those decisions users appealed, and how many of those appeals succeeded. Some platforms already engage in some or all of these practices voluntarily.

This approach would give lawmakers and researchers greater insight into the so-called “black box” of content moderation on social media and help solve problems arising from inconsistent or biased moderation while maintaining Section 230’s liability protections and preserving free speech online. Lawmakers have legitimate concerns regarding social media content moderation, but revoking Section 230 protections would exacerbate these problems, not solve them. Section 230 supports a diverse Internet ecosystem by letting online services moderate content in the way that best suits their needs and the needs of their users, and threats to Section 230 are threats to the Internet and its users.
