
Fact-Checking the Critiques of Section 230: What Are the Real Problems?

February 22, 2021

Section 230 of the Communications Decency Act has become a key battleground in the larger debate over free speech and content moderation. There are legitimate and illegitimate critiques of the law—but they don’t negate its many benefits.

KEY TAKEAWAYS

Section 230’s liability protections were intended to be broad. But they are not limitless. Courts continue to identify exceptions to the liability shield.
Contrary to critics’ claims, Section 230 is not a gift for Big Tech. Many different types of organizations—large and small, tech and non-tech, companies and individuals—benefit from Section 230 protections.
The First Amendment, not Section 230, gives online services the right to remove content they find objectionable—and it protects individuals from government censorship, not from removal by online platforms.
Most online services that benefit from Section 230 are legitimate, but some bad actors take advantage of the law, so it makes sense to consider ways to reduce these harms without overburdening online services.
Limiting or removing Section 230 protections would be harmful to innovation, free speech, and competition, so policymakers should carefully consider the consequences of any proposed reforms.

Introduction

1. Section 230’s Liability Protections Are Too Broad

2. Section 230 Needs a “Good Faith” Requirement

3. Section 230 Prevents Victims of Crime From Suing Enabling Platforms

4. Section 230 Is a “Gift” to Big Tech Companies

5. Section 230 Treats the Tech Industry Differently From Other Sectors

6. Section 230 Gives Platforms Unrestricted “Freedom of Reach”

7. Section 230 Hinders State Law Enforcement

8. Section 230 Allows Platforms to Be Politically Biased

9. Section 230 Is Detrimental to Equal Protection

10. Section 230 Undermines the Adversarial Legal System

Conclusion

Endnotes

Introduction

In recent years, there has been much debate over Section 230 of the Communications Decency Act. Supporters maintain that the law shaped the Internet into what it is today: a vast, diverse network for communication, entertainment, and commerce that has revolutionized every industry and created a more connected and productive society. They say that Section 230 strikes the right balance by protecting online services and their users from liability for the illegal speech of others, while also ensuring they are not liable for the steps they take to moderate third-party content. In practice, the law has enabled a wide range of innovative online services and business models that rely on third-party content, including social media sites that have given people new ways to connect, knowledge-sharing websites that have changed the way people access information, and product and business review sites that have changed the way people shop. All of these online services and business models rely on Section 230’s liability protections to continue providing their services, often for free, to the benefit of consumers.

On the other hand, Section 230’s opponents believe this law is the root cause of many of the problems with the Internet, including harassment, hate speech, disinformation, violent content, child sexual abuse material, nonconsensual pornography, and alleged political bias on social media. They believe the law provides online services too much of a shield from accountability for how they may cause harm, either directly or indirectly, to others. Moreover, they believe the law is overly broad, allowing bad actors to hide behind its liability shield and preventing harmed users from holding these platforms accountable. And while they acknowledge that Section 230 has shaped the modern Internet, they do not necessarily believe this is a good thing: With disinformation and hate speech running rampant on social media and allegations of political bias on major social media platforms, critics argue that it is clear fundamental changes to the rules that govern the Internet are in order.

While some critics’ arguments stem from fundamental misconceptions about Section 230, most are grounded in legitimate concerns, and it is perfectly reasonable to consider whether a law Congress passed in the days of dial-up Internet and online bulletin boards can be improved to address new concerns in the age of mobile apps and social media. At the same time, these critiques do not negate the benefits this law has provided. Because any changes to Section 230 would inevitably carry far-reaching implications for the Internet and the many aspects of users’ lives that now take place online, it is crucial to understand these arguments’ origins, strengths, and weaknesses in order to determine the best way forward for online intermediary liability.

There are 10 main critiques of Section 230:

  1. Section 230’s liability protections are too broad. (Mostly illegitimate. Courts continue to identify exceptions to Section 230’s liability shield.)
  2. Section 230 needs a “good faith” requirement. (Somewhat legitimate. However, such a requirement could place an unreasonable burden on online services.)
  3. Section 230 prevents victims of crime from suing enabling platforms. (Mostly illegitimate. If online services commit or facilitate crime, federal law enforcement can go after them.)
  4. Section 230 is a “gift” to Big Tech. (Illegitimate. Many different types of organizations, including non-tech ones, as well as individuals, benefit from Section 230 protections.)
  5. Section 230 treats the tech industry differently from other sectors. (Mostly illegitimate. What differs is not the industry but the volume of third-party content online services must moderate.)
  6. Section 230 gives platforms unrestricted “freedom of reach.” (Illegitimate. Holding platforms accountable for amplifying content would be impractical and bad for users.)
  7. Section 230 hinders state law enforcement. (Legitimate. However, the solution is to strengthen federal criminal law.)
  8. Section 230 allows platforms to be politically biased. (Illegitimate. The First Amendment allows this, not Section 230.)
  9. Section 230 is detrimental to equal protection. (Mostly illegitimate. Limiting or removing Section 230 protections would be harmful to marginalized speech.)
  10. Section 230 undermines the adversarial legal system. (Illegitimate. Eliminating Section 230 would make it more expensive to dismiss frivolous lawsuits and undermine start-ups and competition.)

Each of these critiques is considered in detail below.

1. Section 230’s Liability Protections Are Too Broad

A common argument against Section 230 contends that, though the law itself may have been well intended, the courts’ interpretation of the law is overbroad. Opponents cite various court cases that extended Section 230’s liability shield—especially the protections outlined in Section 230(c)(1), which protects online services from liability for third-party content on their platforms—beyond what they believe to be reasonable. These critiques take several forms.

Some critics argue that the courts erred when they found that Section 230 provides both publisher liability and distributor liability. These opponents trace the problem all the way back to Zeran v. America Online (1997), one of the earliest Section 230 cases, in which the court chose to interpret Section 230 broadly, finding that the law protects online services from liability for distributing third-party content even if they have knowledge that the content is illegal.[1] The court reasoned that “interpreting [Section 230] to leave distributor liability in effect would defeat the two primary purposes of the statute” because online services would face potential liability any time any party alleged that certain online content is illegal.[2] Most courts have followed this precedent in the decades since.[3]

Some critics argue that any modification of third-party content should negate the liability immunity in Section 230. These critics may agree with the Zeran opinion but disagree with the court’s opinion in another early Section 230 case, Batzel v. Smith (2003), which ruled that the law protects websites and online platforms that repost and edit third-party content.[4]

Others believe that Section 230 should be strictly limited in scope to addressing liability for defamatory third-party content. They argue that courts have gone too far in applying Section 230 to situations that have nothing to do with freedom of speech, such as protecting online marketplaces when third-party sellers offer defective products or protecting social media platforms and messaging services when criminals use their services to communicate.

However, these three critiques do not hold up to scrutiny. First, the law’s authors, Sen. Ron Wyden (D-OR) and former Rep. Chris Cox (R-CA), have said as recently as 2019 and 2020, respectively, that the courts’ interpretation of Section 230 does not go against their original intentions, and that Section 230’s liability shield was always meant to apply broadly.[5] It’s true that the two cases that motivated Congress to pass Section 230—Cubby v. CompuServe (1991) and Stratton Oakmont v. Prodigy (1995)—along with Zeran, the case that set the precedent for how future courts would interpret the law, were all defamation cases.[6] But Congress never intended to limit Section 230’s liability protections to defamation.[7] It is also true that, at least in Section 230’s early days, most courts interpreted the law broadly. For the first 12 years after its passage, courts usually took a defense-favorable position.[8] But in more recent years, courts have begun to identify modest limits to the Section 230 liability shield.

This process began with Fair Housing Council of San Fernando Valley v. Roommates.com (2008), in which the court ruled that Section 230(c)(1) does not apply if the defendant induced the illegal content in question.[9] Subsequent cases identified additional exceptions to Section 230(c)(1): It does not apply if the defendant encouraged the development of the illegal content, or if the plaintiff’s claim does not arise from the defendant’s publishing or content moderation decisions.[10] These exceptions give courts some flexibility to limit the scope of Section 230’s liability shield in certain cases, while still preserving its broad application.

2. Section 230 Needs a “Good Faith” Requirement

Critics also argue that because Section 230(c)(1)—the part of the law that provides the liability shield—does not contain a “good faith” requirement, this provision applies to all online services, including bad actors. Courts have extended Section 230’s liability shield to websites even when they profit from illegal content or activity, as long as they are not aiding and abetting an illegal activity. The most prominent example of this is Backpage, a website that hosted classified ads, including ads for escort services. In Doe v. Backpage (2016), the court dismissed a civil lawsuit against Backpage, even though it allegedly designed its website to protect sex traffickers—for example, with standards that allowed users posting classified ads to maintain a high degree of anonymity—because Section 230(c)(1)’s liability protections apply to an online service’s decisions regarding how it structures its websites or treats postings.[11] Notably, Backpage did not require users to post unlawful content or provide information that would lead to illegal activity. In contrast, in Roommates.com, the court denied the website Section 230 immunity because it required individuals to disclose information that would enable housing discrimination.

Another case, Jones v. Dirty World (2014), applied Section 230(c)(1) to a gossip website that allegedly encouraged users to submit defamatory content. The website included a content submission form for users to submit “dirt.” The website’s manager and staff would review these submissions, select certain posts for publication, and add a short comment. But because the staff did not “materially change, create, or modify any part of” the submissions, Section 230(c)(1) shielded them from liability for content within the submissions. Additionally, the court decided that the fact that the website was designed to host user-submitted gossip was not enough to hold the website’s operators liable for inducing defamatory content; unlike Roommates.com, Dirty World did not require users to post illegal content or provide information that would lead to illegal activity.[12]

Section 230 critics point to a range of harms, including nonconsensual pornography, harassment, and terrorist communications, that bad actors allow third parties to distribute on their platforms without facing liability.

Although these intermediaries may not be actively committing a crime, they are “promoting, facilitating, and profiting from it,” says law professor Mary Anne Franks.[13] But instead of holding them responsible to victims in civil court, critics contend, Section 230 gives them immunity. They claim this runs counter to Congress’s intention to incentivize “good faith” content moderation and instead gives online platforms an excuse not to engage in content moderation at all: If no one can hold them liable for harmful or even illegal content hosted on their platforms, what incentive do they have to remove it? Moreover, critics say that harmful behavior is not limited to a few bad actors, and claim that many large, mainstream platforms such as social media sites allow harmful content on their platforms, such as hate speech, misinformation, and harassment. These critics are skeptical that market forces are enough to incentivize these actors to keep their platforms largely free of this content.

While some bad actors may benefit from Section 230 protections, the vast majority of the beneficiaries are legitimate, reputable sites and services that do not take advantage of the law.[14] These companies have powerful economic incentives for keeping harmful or illegal content off their platforms.[15] The first is to protect their brand and reputation, exemplified by the recent “techlash,” or backlash against major tech companies that arose from widespread disinformation on social media surrounding the 2016 U.S. elections.[16] This negative attention chases users away from companies’ platforms and motivates lawmakers to consider policies that would be detrimental to companies’ business models. The second is advertising revenue. Advertisers do not want their products and services promoted next to harmful or illegal content. If platforms gain a reputation for hosting this content, they risk losing advertiser revenue. And a third incentive comes from consumers, most of whom do not want to use online services that are full of harmful or illegal content.

Outside these mainstream platforms, there are some bad actors that design their platforms to amplify and profit from harmful or illegal content—such as revenge porn websites, websites such as Backpage that protected sex traffickers, or websites such as Dirty World that solicit defamatory statements from commenters—for whom market incentives have little effect. Bad actors can and do still end up facing civil and criminal penalties for violating other laws, as was the case when the Federal Trade Commission (FTC) shut down the revenge porn site MyEx and fined its operators $2 million in 2018.[17] Law enforcement can also take action against websites, as Section 230 does not shield online services from federal criminal liability. In the case of Backpage, Section 230 did not prevent the Department of Justice from seizing the website in 2018, before the Allow States and Victims to Fight Online Sex Trafficking Act and the Stop Enabling Sex Traffickers Act (FOSTA-SESTA) became law.[18] Passed in response to Backpage, FOSTA-SESTA carved out an exception for sex trafficking in Section 230 so that online services could be liable in civil as well as criminal court for violating state sex trafficking laws (they could already be held criminally liable under federal sex trafficking laws).

In addition, under the Fair Housing Council of San Fernando Valley v. Roommates.com (2008) and FTC v. Accusearch (2009) court decisions, websites and platforms that induce or encourage the development of illegal content fall outside Section 230’s liability protections.[19] But not all courts have followed this precedent, meaning some bad actors have avoided liability.

The danger of adding a “good faith” requirement to Section 230(c)(1) is that it could subject online services to more expensive litigation than they already face. Currently, companies can use Section 230(c)(1) to dismiss cases that would hold them liable for third-party content. If, each time a plaintiff sued an online service for failing to remove harmful or illegal third-party content, that service had to prove it acted in good faith, the litigation process would be much longer and more expensive. While large companies may be able to absorb these expenses, smaller, emerging platforms—or nonprofits such as the Wikimedia Foundation—could not. To avoid this, a good faith requirement would need to place the burden of proof on the plaintiff rather than the defendant, or establish a set of requirements that defendants could easily prove they had met in order to have cases against them dismissed.

3. Section 230 Prevents Victims of Crime From Suing Enabling Platforms

Law enforcement agencies and victim advocacy organizations have also joined the debate over Section 230. One of their arguments is similar to the “good faith” argument: Section 230 protects websites and online platforms that host illegal user content, either knowingly or negligently, from liability. For example, online services are not liable when terrorists use their platforms, as exemplified by Fields v. Twitter (2016). The case arose when a member of ISIS killed two Americans in Jordan. The victims’ wives and children sued Twitter for enabling ISIS to communicate and recruit new members on the platform. The court dismissed this case, citing Section 230 and the lack of connection between the perpetrator of the attack and Twitter, stating, “There are no facts indicating that [the attack] was in any way impacted, helped by, or the result of ISIS’s presence on the social network.”[20]

Section 230 immunity does not apply to federal criminal law. The government can still prosecute online services that engage in illegal activity. But, as in the Fields case, Section 230 prevents victims of crime and terrorism or their families from suing online services in civil court for damages when a criminal or terrorist uses their website or platform for harm. According to then-Attorney General William Barr, an outspoken critic of Section 230, “Federal criminal prosecution is a powerful, but necessarily limited tool that addresses only the most serious conduct. The threat of civil liability, however, can create industry-wide pressure and incentives to promote safer environments.”[21] Opponents of Section 230 argue that because the perpetrator of illegal activity online is often anonymous, the only legal recourse for victims is to bring action against the websites and online platforms that hosted the activity. Because Section 230 cuts off this avenue, they argue, it leaves victims without legal recourse.

Section 230’s supporters call this line of reasoning into question. Nothing in the law prevents victims of crime and terrorism from taking civil action against their attackers. It only prevents them from taking civil action against the websites and online platforms their attackers used to commit acts of crime or terrorism. If an online service plays any part in the development of illegal content, or induces illegal behavior, it cannot claim Section 230 protection. The online services the section does protect are the websites and platforms that act as passive intermediaries: social media platforms on which criminals and terrorists recruit, online messaging services on which criminals and terrorists communicate, and websites on which criminals and terrorists post illegal content. In other words, Section 230 “holds individuals responsible for their actions online, not the tools that they use.”[22]

In practice, this means that if an online service engages in illegal activity—such as knowingly aiding and abetting terrorists—federal law enforcement can take action against the service. But online services are not legally responsible for criminals misusing their platforms. Some countries’ intermediary liability laws include a provision that holds online services responsible if they were aware or reasonably should have been aware of criminal conduct on their platforms. But once again, such a provision could make it more difficult for online services to dismiss frivolous lawsuits, dramatically raising the costs of operating an online service that relies on third-party content, or motivate online services to engage in less monitoring in order to reduce their potential liability.

There are also significant risks involved in requiring online services to monitor for illegal activity on their platforms. Such a requirement would be particularly burdensome given the sheer amount of content uploaded to online platforms. It also raises serious privacy concerns: Online messaging services, email providers, and video conferencing providers would have to monitor their users’ communications or risk liability if any of their potentially millions of users were engaged in illegal activity on their platform. This would be especially problematic for services that provide end-to-end encryption—a form of encryption that allows only the sender and recipient, and not the provider, to view the message—as there is no way to monitor these communications without compromising privacy and security.

4. Section 230 Is a “Gift” to Big Tech Companies

Critics on both sides of the aisle sometimes argue that Section 230 is a “gift” to Big Tech companies.[23] They argue that the law essentially amounts to “corporate welfare” for such companies as Facebook and Google, and therefore the law should be rescinded because there is no need for the government to protect these successful tech companies, especially if they are engaging in harmful activity (i.e., insufficiently moderating harmful content, according to some on the left, or silencing conservative voices, according to some on the right).

However, Section 230 offers benefits to many different types of organizations, not just tech giants. First, tech companies of all sizes benefit from the law, including businesses such as Airbnb, GitHub, Kickstarter, Medium, Meetup, Reddit, and Yelp, along with nonprofits such as the Wikimedia Foundation, which runs Wikipedia. Second, the law benefits many non-tech businesses, from retailers that allow user reviews to news sites that allow user comments.

Finally, the law benefits users themselves. Indeed, as noted earlier, the law explicitly offers protections that ensure users are not held responsible for content produced by other people. For example, Section 230 ensures individual users are not liable for comments others post on their personal blogs. Similarly, it protects them from liability for forwarded emails or retweeted posts.[24]

Therefore, arguments that Section 230 should be eliminated because it only benefits Big Tech are misguided and ignore the real benefits this law offers to many other businesses, organizations, and consumers.

5. Section 230 Treats the Tech Industry Differently From Other Sectors

Opponents also claim that Section 230 creates an exception for the tech industry, treating online platforms differently from other businesses. Critics maintain that if a physical business facilitated sex trafficking or terrorist communication, or if a physical magazine printed user-submitted defamatory statements, they likely would not escape civil liability. Offline businesses suffer consequences for profiting from the illegal activities of others, but online companies are immune.[25]

Essentially, critics assert, the tech industry is putting out faulty products: online platforms that enable harm and abuse. Any other industry would face legal ramifications for putting out faulty products, but the tech industry can claim Section 230 protection and avoid the negative consequences of its actions and choices.[26] Critics say that companies should have designed their products less negligently to prevent these harms and abuses from happening in the first place, but had no incentive to do so with Section 230 in place.[27]

Critics believe the tech industry receives special treatment—and this treatment is unwarranted. David Chavern, president and CEO of a trade association representing the news industry, makes the case that traditional media has always been liable for the content it publishes, and the news industry has survived for centuries without Section 230-like legal protection.[28] Again, this raises the questions: What makes the tech industry different? Why should it be treated as an exception?

Proponents of Section 230 argue that, unlike traditional media, online platforms that rely on user-generated content are constantly inundated with thousands of posts. It is unreasonable to expect these platforms to take down every potentially objectionable post in a timely manner without making some mistakes. Critics disagree, arguing that there is no data to support the claim that it would be impossible for tech companies to effectively monitor and moderate content on their platforms.[29] Critics say that if tech companies can profit from the content they host, they can moderate it—that is their responsibility, and if moderation proves difficult, that is their problem.[30]

Law professor Eric Goldman pushes back on some of this reasoning. He argues that critics have set an unreasonable standard for tech companies when it comes to removing harmful or illegal content. Nobody should expect online companies to eliminate all harms, because nobody expects that in the offline world.[31] In addressing the fact that the tech industry appears to be an exception when it comes to intermediary liability, policymakers should not go too far in the other direction.

And finally, to answer the questions of what makes the tech industry different and why policymakers should treat it differently, Section 230’s supporters point to the law itself. Section 230(a) outlines five ways the online world is exceptional: (1) It provides users with greater access to educational and informational resources than ever before, (2) it grants users an unprecedented level of control over the information they receive, (3) it creates a diverse forum for politics and culture, (4) it grew and flourished with minimal government regulation, and (5) it has increasingly become a central pillar of modern life.[32] These five points still apply and justify the section’s continued importance. Section 230 lets individual users act as publishers, which has enabled a wide variety of innovative communication services that rely on user-generated content.

It is not the tech industry that differs from traditional print publications; it is the process of moderating vast amounts of third-party content. As an example, Twitter users post more than 500 million tweets per day, approximately 6,000 tweets per second.[33] Twitter cannot review all of these tweets before they are posted, so it relies on a combination of human moderators, algorithms, and community reporting to evaluate and remove content that violates its Terms of Service. This volume of content is impractical, if not impossible, to review substantively. If other types of publishers handled similar volumes of third-party content, they would need similar protection. Indeed, courts have also held that local television stations should not face liability for redistributing third-party content.[34]

6. Section 230 Gives Platforms Unrestricted “Freedom of Reach”

Some argue that the problem with Section 230 is that it gives online platforms immunity even when they have an active role in amplifying the reach of third-party content. These critics maintain that online services would take more action to address harmful third-party content if they were liable for promoting this content. While this critique is not new, it garnered media attention in 2019 when actor and satirical comedian Sacha Baron Cohen lambasted Section 230 in his keynote address at the Anti-Defamation League’s annual Never Is Now Summit on Anti-Semitism and Hate, in which he accused large tech companies such as Facebook, Google, YouTube, and Twitter of creating “the greatest propaganda machine in history.”[35]

Cohen followed up his speech with an op-ed in The Washington Post explaining his views, writing that “freedom of speech is not freedom of reach” and arguing that online platforms do not deserve protection when they amplify harmful views to millions of users.[36] David Chavern, the head of the news industry’s trade association, similarly argues that online platforms are not simply passive intermediaries for user-generated content; rather, their entire business model depends on algorithms that curate and promote content, making decisions about who gets to see what. Chavern believes platforms should take responsibility for these decisions.[37] As Senator Mark Warner (D-VA) puts it, Congress needs to rethink Section 230, not in order to restrict speech, but to restrict the ability to amplify speech. He asks, “If you want to say something crazy, you have the right to say that, but do you really have the right to say something crazy and totally wrong and then have it amplified a billion times without any restrictions?”[38]

Using social media, anyone can promote false or harmful ideas to potentially billions of other users. But simply posting something online is no guarantee that others will see it. And even if people see it, they can choose to ignore content they find objectionable, block those who are posting this content, or even stop using that particular social media platform. Nonetheless, some users do post content that has caused real harm in the form of disinformation campaigns, harassment, hate speech, cyberbullying, and other forms of online abuse.

All major social media sites have policies prohibiting various types of legal, but potentially harmful, content. These sites also take steps to balance free speech on their platforms while minimizing the spread of potentially harmful content, not only by taking it down and banning users who share it, but also by reducing its visibility, making it harder for users to share, or labeling it to provide additional context to users. But the challenge, of course, is that there are many different views on what content social media platforms should restrict and whether any particular post violates these rules.

The First Amendment, not Section 230, gives social media platforms the ability to decide what content to allow on their services and how to display that content. Moreover, online services have an incentive to allow users to report harmful content, to remove this content, and to ban users who repeatedly post it, both to avoid negative media coverage and public backlash and to appeal to advertisers that do not want their products and services displayed alongside objectionable content or on platforms with negative reputations. But making social media platforms liable for any action seen as amplifying content, such as displaying a post in a news feed, is impractical, as noted earlier, because of the volume of content they would be required to review.

Furthermore, the algorithms online services use to display content to users add immense value. For example, a search engine would be virtually useless if it simply displayed an unsorted list of all content related to given search terms, instead of prioritizing content based on what the search engine determines is most relevant to the query. Similarly, news feeds on social media would be less useful if stories were not sorted according to what is most likely to interest each user. And many social media platforms now include a feature that allows users to explore or discover new content, using algorithms that select content based on what the user has liked or interacted with in the past. If online services were held liable for amplifying content with their algorithms, they would have to radically alter their services.

7. Section 230 Hinders State Law Enforcement

When Congress first passed Section 230, the law contained two exceptions: Its immunity shield did not apply to federal criminal or intellectual property law. FOSTA-SESTA added a third exception: Section 230 also does not apply to sex trafficking law. However, outside of sex trafficking, Section 230 does not contain exceptions for state criminal law. States may pass and enforce laws that are “consistent with” Section 230, but the section preempts “inconsistent” state laws.[39] This means states cannot hold Internet intermediaries civilly or criminally liable for actions that are not illegal under federal law.

Doug Peterson, attorney general of the state of Nebraska, believes Congress should change this language and create an exception for state criminal law. State criminal law and federal criminal law complement each other, he claims—just as federalism intended—and this does not work when Section 230 preempts state criminal law.[40] Revenge porn is one area in which a gap in federal legislation prevents federal enforcement.[41] Currently, there is no federal law criminalizing nonconsensual pornography, while 46 states, the District of Columbia, and Guam have such laws.[42] But Section 230 would preempt state laws that could hold online services liable for simply hosting revenge porn, leaving victims without recourse against intermediaries, even those, such as revenge porn websites, that promote and profit from this content.

On the other hand, there is an important reason some federal laws, such as Section 230, preempt state laws. Legal consistency is especially important when it comes to online services because their services are available across state and national borders, connecting people from across the country and around the world. Section 230 prevents companies from having to contend with a patchwork of state legislation, while leaving an important exception for federal criminal law so the government can go after bad actors engaged in illegal activity. In cases such as revenge porn, where there is broad agreement that the conduct is wrong and most states have passed legislation, the answer is not to weaken Section 230 but to pass federal revenge porn legislation, as the Information Technology and Innovation Foundation (ITIF) has proposed.[43]

8. Section 230 Allows Platforms to Be Politically Biased

Some argue the main problem with Section 230 is that it allows online services, particularly social media platforms, to make politically biased content moderation decisions—to not take down “objectionable” content or to take down “unobjectionable” content. When Facebook CEO Mark Zuckerberg testified at a Senate hearing on disinformation in 2018, Sen. Ted Cruz (R-TX) asked, “Does Facebook consider itself a neutral public forum?” Sen. Cruz’s question arose from a popular misconception that Section 230 only applies to “neutral public forums.” He also suggested that online platforms such as Facebook must choose between their First Amendment right to free speech and their Section 230 immunity.[44] Experts were quick to point out that this is not the case; Section 230 applies regardless of a platform’s political bias, and the First Amendment and Section 230 are not mutually exclusive.[45] “I hear constantly about how the law is about neutrality,” Sen. Ron Wyden (D-OR), one of the section’s co-authors, said in an interview. “Nowhere, nowhere, nowhere does the law say anything about that.”[46]

Though Section 230 does not require political neutrality, some argue that it should. Some Republican policymakers claim that major online platforms such as Facebook and Twitter are biased toward the political left, removing conservative content and banning users who post unpopular conservative opinions.[47] At the same time, some liberals complain that these social networks are kowtowing to conservatives by doing too little to remove what they see as politically objectionable content. Since an increasing number of Americans—20 percent of American adults, according to a 2018 survey—get their news from social media, political bias online has the potential to shape public opinion in a major way.[48]

Responding to this concern, Sen. Josh Hawley (R-MO) introduced a bill in 2019 called the Ending Support for Internet Censorship Act, which would require websites and platforms to obtain government certification that they are not politically biased in their content moderation in order to continue to benefit from Section 230’s liability protections.[49] The bill has made little progress in the Senate and has drawn criticism from both sides of the aisle because of First Amendment concerns. Sen. Wyden said that it would “turn the federal government into Speech Police” and demonstrated a belief “that lawyers and bureaucrats should tell private companies how to make clearly private business decisions.”[50] Rep. Justin Amash (R-MI) agreed, stating that the bill “empowers the one entity that should have no say over our speech to regulate and influence what we say online.”[51]

More recently, concerns that Section 230 enables tech companies to make politically biased content moderation decisions have inspired action from both Congress and the administration. President Trump’s Executive Order on Preventing Online Censorship, signed in May 2020, asked the Federal Communications Commission (FCC) to create new regulations clarifying when content moderation decisions violate Section 230’s “good faith” provision, which could apply when platforms remove political content.[52] A few months later, Sens. Roger Wicker (R-MS), Lindsey Graham (R-SC), and Marsha Blackburn (R-TN) introduced the Online Freedom and Viewpoint Diversity Act, and Sen. Graham introduced the very similar Online Content Policy Modernization Act, each of which would change the language in Section 230 to make online services liable if they remove content that is not obscene, violent, harassing, or illegal.[53] In particular, these opponents object to the language in Section 230(c)(2) that shields providers from liability for restricting access to content “that the provider or user considers to be … otherwise objectionable, whether or not such material is constitutionally protected.”[54]

But Section 230’s advocates maintain that repealing Section 230 would lead to less free speech online, not more. If online services were liable for third-party content on their platforms, they would have to further restrict the types of content they allow, enforcing stricter standards to avoid the possibility of users posting or sharing potentially illegal content.[55] This could lead to “collateral censorship,” a form of self-censorship that occurs “when A censors B out of fear that the government will hold A liable for the effects of B’s speech.”[56] In other words, online services would err on the side of removing more content, even if that content is permissible, rather than risk leaving up content that could land them in legal trouble. They could even ban entire categories of speech, such as political speech, or entire categories of users, such as elected officials.

In addition, Section 230(c)(2), the provision in Section 230 that shields online services from liability for removing content “in good faith,” gives online services the freedom to moderate content in a way that best suits their users. This freedom has allowed for the development of many different types of online platforms, each experimenting with moderation policies that work best for their communities. Making online services liable for removing content would have just as many negative consequences as making them liable for failing to remove content. Specifically, online services may raise their standards for content removal and choose not to remove some potentially harmful content.

Finally, even without Section 230(c)(2), as private entities, online services have a First Amendment right to remove content they do not want on their platforms. The First Amendment protects individuals from government censorship of protected speech. It does not prevent private entities, such as social media platforms or user review sites, from limiting speech by their users.[57] If users disagree with a platform’s content moderation policies and decisions, they are free to use alternative platforms. If enough users disagree, this creates an opening for new entrants to the market, as in the wake of the 2020 presidential election when many conservative Twitter and Facebook users began using alternative platforms such as Parler and Gab.[58]

9. Section 230 Is Detrimental to Equal Protection

Section 230’s critics claim the law is detrimental to equal protection. Online abuse, hate speech, and harassment disproportionately affect marginalized populations.[59] And online abuse can be relentless, chasing people off social media and causing them to shut down their websites and blogs in order to escape it. The effects of online abuse can even follow victims into the physical world and drive them to move, change their names, or engage in self-harm or suicide.[60]

One example of the real-world effects of online abuse comes from Herrick v. Grindr (2018). Matthew Herrick sued Grindr when the dating app repeatedly refused to take down fake profiles of him that his ex-boyfriend, Oscar Juan Carlos Gutierrez, created to harass him. Gutierrez impersonated Herrick on the dating app and shared Herrick’s personal details and location. As a result, more than 1,400 men showed up at Herrick’s home and place of work over the course of 10 months believing Herrick had communicated with them and expressed an interest in sex. The local police did not take Herrick seriously, and the only response he received from Grindr after filing 50 separate complaints was an automatically generated email.[61] When he did sue, the court dismissed the case, citing Section 230.[62]

Online abuse has real, terrible, and sometimes tragic effects, especially for marginalized populations. A 2017 poll found that 23 percent of women had experienced online abuse or harassment, and of those, more than three-quarters (76 percent) made changes to the way they used social media as a result.[63] One of the most compelling reasons to modify Section 230 is to ensure platforms take reasonable steps to mitigate harm caused by their platforms.

However, limiting or removing Section 230 would also have a detrimental effect on marginalized populations. If websites and online platforms start censoring potentially objectionable content in order to avoid liability, controversial speech will likely be the first to go. In a world where “controversial” is defined by the majority, the Internet would become yet another forum wherein the majority has the power to censor minority opinions.[64] At the same time, if platforms are not shielded from liability, they will do less to moderate online abuse and hate speech.

10. Section 230 Undermines the Adversarial Legal System

In court cases, defendants raise Section 230 at the motion-to-dismiss stage, arguing that the judge should dismiss the case against them because Section 230 immunizes them from liability. If the judge agrees, the case goes no further. If the judge disagrees, the case continues. Lawyer Carrie Goldberg believes that this undermines the United States’ adversarial legal system. This system was designed for each party—the plaintiff and the defense—to argue its position, present evidence, and bring forth witnesses and expert testimony, after which a judge or jury determines the outcome. When so many Section 230 cases are dismissed before this can happen, Goldberg argues, the system cannot function properly and plaintiffs are denied their rights.[65]

However, there are two sides to this story. The United States’ adversarial system is also very expensive, especially for defendants hit repeatedly with frivolous lawsuits. Raising Section 230 so early in a court case is important for companies, both large and small, because it lowers the cost of litigation.[66] Much of the conversation on issues surrounding Section 230 focuses on big tech companies, such as Facebook and Google, which can likely afford these higher legal costs. But there are many small tech companies that could not afford a long, drawn-out lawsuit every time a user posts something objectionable on their platform. Faced with much higher legal costs, platforms may be forced to shut down, offset their costs by charging for services they previously offered for free, or drastically limit third-party content. Any of these situations would be detrimental for users, who benefit both from the diversity of websites and platforms available to them, and from being able to access many of these online services for free.

This was one of the primary reasons then-Representatives Cox and Wyden drafted Section 230 in the first place: to protect “the little guy, the startup, the inventor, the person who is essential for a competitive marketplace.”[67] This is reflected in Section 230(b), which states, “It is the policy of the United States … to preserve the vibrant and competitive free market that presently exists for the Internet.”[68] If defendants could not use Section 230 to dismiss cases against them, the high cost of frequent litigation would consolidate power into the hands of a few big companies by running smaller companies out of business, which would be devastating for competition.

Conclusion

The controversy surrounding Section 230 is clouded with misconceptions about the law’s history, text, application, and effects, but many critiques of the law expose legitimate concerns about Section 230 and challenges that have arisen in the more than two decades since the law’s passage. Section 230’s supporters argue that the law is still essential and protects online services from unfairly facing liability for third-party content, with benefits for users in the form of widely available and often free online sites and services, such as social media and e-commerce, that billions of people use every day. On the other hand, critics contend that Section 230 gives online services too much freedom, allowing them to make decisions that are not in the public’s best interests, such as leaving up content that is harmful or even illegal, or censoring certain political content. They argue that Section 230’s liability shield is overly broad, protecting online services that have acted in bad faith or negligently.

In order to have an informed debate about Section 230, both sides should have a clear understanding of not only the law and its implications, but also of the other side’s arguments. By analyzing the strengths and weaknesses of the arguments for and against Section 230, policymakers and stakeholders can hopefully avoid making changes to the law that cause more harm than good, and instead focus on solutions to some of the current problems online without eliminating the benefits of Section 230 for online services and their users.

About the Authors

Ashley Johnson (@ashleyjnsn) is a policy analyst at ITIF. She researches and writes about Internet policy issues such as privacy, security, and platform regulation. She was previously at Software.org: the BSA Foundation and holds a master’s degree in security policy from The George Washington University and a bachelor’s degree in sociology from Brigham Young University.

Daniel Castro (@CastroTech) is vice president at ITIF and director of its Center for Data Innovation. He writes and speaks on a variety of issues related to information technology and Internet policy, including privacy, security, intellectual property, Internet governance, e-government, and accessibility for people with disabilities.

About ITIF

The Information Technology and Innovation Foundation (ITIF) is an independent, nonprofit, nonpartisan research and educational institute focusing on the intersection of technological innovation and public policy. Recognized by its peers in the think tank community as the global center of excellence for science and technology policy, ITIF’s mission is to formulate and promote policy solutions that accelerate innovation and boost productivity to spur growth, opportunity, and progress.

For more information, visit us at www.itif.org.

Endnotes

[1]Zeran v. Am. Online, Inc., 129 F.3d 327 (4th Cir. 1997).

[2]Ibid.

[3]Eric Goldman, “The Ten Most Important Section 230 Rulings,” Tulane Journal of Technology and Intellectual Property 20 (Fall 2017), 3, http://journals.tulane.edu/index.php/TIP/article/download/2676/2498.

[4]Batzel v. Smith, 333 F.3d 1018 (9th Cir. 2003).

[5]Jennifer Huddleston, “The FCC Should Not Engage in Section 230 Rulemaking,” Regulatory Transparency Project, October 6, 2020, https://regproject.org/blog/the-fcc-should-not-engage-in-section-230-rulemaking/.

[6]Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135 (S.D.N.Y. 1991); Stratton Oakmont, Inc. v. Prodigy Servs. Co., No. 31063/94, 1995 N.Y. Misc. LEXIS 229 (N.Y. Sup. Ct. May 24, 1995); Zeran v. Am. Online, Inc., 129 F.3d 327 (4th Cir. 1997).

[7]“Section 230 Workshop – Nurturing Innovation or Fostering Unaccountability?” YouTube video, 1:18:35, posted by the U.S. Department of Justice, February 19, 2020, https://www.justice.gov/opa/video/section-230-workshop-nurturing-innovation-or-fostering-unaccountability.

[8]Jeff Kosseff, “The Gradual Erosion of the Law that Shaped the Internet: Section 230’s Evolution Over Two Decades,” The Columbia Science and Technology Law Review 18 (Fall 2016), 15–16, http://www.stlr.org/download/volumes/volume18/Kosseff.pdf.

[9]Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008) (en banc).

[10]FTC v. Accusearch Inc., 570 F.3d 1187 (10th Cir. 2009); Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1107 (9th Cir. 2009); E-Ventures Worldwide v. Google, No. 2:14-cv-646-FtM-29CM, 2016 U.S. Dist. LEXIS 62855 (M.D. Fla. May 12, 2016).

[11]Doe v. Backpage.com, LLC, 817 F.3d 12 (1st Cir. 2016).

[12]Jones v. Dirty World Entm’t Recordings LLC, 755 F.3d 398 (6th Cir. 2014).

[13]“Section 230 Workshop,” YouTube video, 2:40:55.

[14]Ibid., 2:09:00.

[15]Ibid., 2:12:00.

[16]Robert D. Atkinson et al., “A Policymaker’s Guide to the ‘Techlash’—What It Is and Why It’s a Threat to Growth and Progress” (ITIF, October 2019), https://itif.org/sites/default/files/2019-policymakers-guide-techlash.pdf.

[17]Federal Trade Commission, “FTC, Nevada Obtain Order Permanently Shutting down Revenge Porn Site MyEx,” Federal Trade Commission, June 22, 2018, https://www.ftc.gov/news-events/press-releases/2018/06/ftc-nevada-obtain-order-permanently-shutting-down-revenge-porn.

[18]Department of Justice Office of Public Affairs, “Justice Department Leads Effort to Seize Backpage.Com, the Internet’s Leading Forum for Prostitution Ads, and Obtains 93-Count Federal Indictment,” Department of Justice, April 9, 2018, https://www.justice.gov/opa/pr/justice-department-leads-effort-seize-backpagecom-internet-s-leading-forum-prostitution-ads.

[19]Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008) (en banc); FTC v. Accusearch Inc., 570 F.3d 1187 (10th Cir. 2009).

[20]Fields v. Twitter, Inc., 217 F. Supp. 3d 1116 (N.D. Cal. 2016).

[21]“Attorney General William P. Barr Delivers Opening Remarks at the DOJ Workshop on Section 230: Nurturing Innovation or Fostering Unaccountability,” the U.S. Department of Justice, February 19, 2020, https://www.justice.gov/opa/speech/attorney-general-william-p-barr-delivers-opening-remarks-doj-workshop-section-230.

[22]“Section 230 Workshop,” YouTube video, 3:27:25.

[23]See, for example, “‘A real gift to Big Tech’: Both parties object to immunity provision in USMCA,” Roll Call, December 17, 2019, https://www.rollcall.com/2019/12/17/a-real-gift-to-big-tech-both-parties-object-to-immunity-provision-in-usmca/; and Donald Trump, “Section 230, which is a liability shielding gift from the U.S. to “Big Tech” (the only companies in America that have it - corporate welfare!), is a serious threat to our National Security & Election Integrity…” Twitter, December 1, 2020, https://twitter.com/realDonaldTrump/status/1333965375193624578.

[24]Eric Goldman, “Barrett v. Rosenthal–California Issues Terrific Defense-Favorable Interpretation of 47 USC 230,” November 20, 2006, https://blog.ericgoldman.org/archives/2006/11/barrett_v_rosen_1.htm.

[25]Danielle Keats Citron and Benjamin Wittes, “The Internet Will Not Break: Denying Bad Samaritans § 230 Immunity,” Fordham Law Review 86 (2017), 403.

[26]“Section 230 Workshop,” YouTube video, 2:32:20.

[27]Ibid., 2:17:20.

[28]Ibid., 3:14:10.

[29]Ibid., 1:24:15.

[30]Ibid., 3:18:10.

[31]Ibid., 3:25:55.

[32]47 U.S.C. § 230(a) (1996).

[33]Gabriel Stricker, “The 2014 #YearOnTwitter,” Twitter, December 10, 2014, https://blog.twitter.com/en_us/a/2014/the-2014-yearontwitter.html.

[34]Auvil v. CBS 60 Minutes, 800 F. Supp. 928, 931 (E.D. Wash. 1992), https://casetext.com/case/auvil-v-cbs-60-minutes-5.

[35]Sacha Baron Cohen, “Sacha Baron Cohen’s Keynote Address at ADL’s 2019 Never Is Now Summit on Anti-Semitism and Hate,” Anti-Defamation League, November 21, 2019, https://www.adl.org/news/article/sacha-baron-cohens-keynote-address-at-adls-2019-never-is-now-summit-on-anti-semitism.

[36]Sacha Baron Cohen, “The ‘Silicon Six’ Spread Propaganda. It’s Time to Regulate Social Media Sites,” The Washington Post, November 25, 2019, https://www.washingtonpost.com/outlook/2019/11/25/silicon-six-spread-propaganda-its-time-regulate-social-media-sites/.

[37]“Section 230 Workshop,” YouTube video, 3:14:55.

[38]“Defeating Disinformation Series: Social Media Regulation Around the World,” YouTube video, 45:30, posted by the Woodrow Wilson International Center for Scholars, February 5, 2020, https://www.wilsoncenter.org/event/defeating-disinformation-series-social-media-regulation-around-world.

[39]47 U.S.C. § 230(e) (1996).

[40]“Section 230 Workshop,” YouTube video, 2:39:05.

[41]Ibid., 1:53:05.

[42]“46 States + DC + One Territory Now Have Revenge Porn Laws,” Cyber Civil Rights Initiative, accessed February 26, 2020, https://www.cybercivilrights.org/revenge-porn-laws/.

[43]Daniel Castro and Alan McQuinn, “Why and How Congress Should Outlaw Revenge Porn” (ITIF, July 15, 2015), https://itif.org/publications/2015/07/15/why-and-how-congress-should-outlaw-revenge-porn.

[44]Bloomberg Government, “Transcript of Mark Zuckerberg’s Senate Hearing,” The Washington Post, April 10, 2018, https://www.washingtonpost.com/news/the-switch/wp/2018/04/10/transcript-of-mark-zuckerbergs-senate-hearing/.

[45]Catherine Padhi, “Ted Cruz vs. Section 230: Misrepresenting the Communications Decency Act,” Lawfare, April 20, 2018, https://www.lawfareblog.com/ted-cruz-vs-section-230-misrepresenting-communications-decency-act.

[46]Matt Laslo, “The Fight Over Section 230—and the Internet as We Know It,” Wired, August 13, 2019, https://www.wired.com/story/fight-over-section-230-internet-as-we-know-it/.

[47]Harper Neidig, “GOP Steps Up Attack Over Tech Bias Claims,” The Hill, March 19, 2019, https://thehill.com/business-a-lobbying/434837-gop-steps-up-attack-over-tech-bias-claims.

[48]Elisa Shearer, “Social Media Outpaces Print Newspapers in the U.S. as a News Source,” Pew Research Center, December 10, 2018, https://www.pewresearch.org/fact-tank/2018/12/10/social-media-outpaces-print-newspapers-in-the-u-s-as-a-news-source/.

[49]Ending Support for Internet Censorship Act, S. 1914, 116th Cong. (2019).

[50]Ron Wyden, Twitter Post, June 19, 2019, 2:40 PM, https://twitter.com/RonWyden/status/1141415506810998784.

[51]Justin Amash, Twitter Post, June 19, 2019, 8:10 PM, https://twitter.com/justinamash/status/1141513644758437888.

[52]Donald Trump, “Preventing Online Censorship, Executive Order 13925 of May 28, 2020,” Code of Federal Regulations, title 3 (2020): 34079–34083, https://www.federalregister.gov/documents/2020/06/02/2020-12030/preventing-online-censorship.

[53]Ashley Johnson, “New Attempts to Amend Section 230 Would Impede Content Moderation When It Is Needed Most” (ITIF, September 24, 2020), https://itif.org/publications/2020/09/24/new-attempts-amend-section-230-would-impede-content-moderation-when-it.

[54]47 U.S.C. § 230(c)(2) (1996).

[55]“Section 230 as First Amendment Rule,” Harvard Law Review 131, no. 7 (May 2018), 2027, https://harvardlawreview.org/2018/05/section-230-as-first-amendment-rule/.

[56]Jack M. Balkin, “Free Speech and Hostile Environments,” Yale Law School Faculty Scholarship Series 253 (1999), 2, https://digitalcommons.law.yale.edu/cgi/viewcontent.cgi?article=1252&context=fss_papers.

[57]Manhattan Community Access Corp. v. Halleck, 587 U.S. ___ (2019).

[58]Alex Newhouse, “Right-wing users flock to Parler as social media giants rein in misinformation,” PBS News Hour, December 3, 2020, https://www.pbs.org/newshour/nation/right-wing-users-flock-to-parler-as-social-media-giants-rein-in-misinformation.

[59]“Section 230 Workshop,” YouTube video, 2:31:50.

[60]Danielle Citron, “Tech Companies Get a Free Pass on Moderating Content,” Slate, October 16, 2019, https://slate.com/technology/2019/10/section-230-cda-moderation-update.html.

[61]Carrie Goldberg, “Herrick v. Grindr: Why Section 230 of the Communications Decency Act Must be Fixed,” Lawfare, August 14, 2019, https://www.lawfareblog.com/herrick-v-grindr-why-section-230-communications-decency-act-must-be-fixed.

[62]Herrick v. Grindr, LLC, No. 1:2017cv00932 (S.D.N.Y. 2017).

[63]Azmina Dhrodia, “Unsocial Media: The Real Toll of Online Abuse Against Women,” Medium, November 17, 2017, https://medium.com/amnesty-insights/unsocial-media-the-real-toll-of-online-abuse-against-women-37134ddab3f4.

[64]“Section 230 as First Amendment Rule,” Harvard Law Review, 2041.

[65]“Section 230 Workshop,” YouTube video, 58:30.

[66]Ibid., 3:41:00.

[67]Emily Stewart, “Ron Wyden Wrote the Law That Built the Internet. He Still Stands by it – and Everything it’s Brought With It,” Recode, May 16, 2019, https://www.vox.com/recode/2019/5/16/18626779/ron-wyden-section-230-facebook-regulations-neutrality.

[68]47 U.S.C. § 230(b) (1996).
