
Overview of Section 230: What It Is, Why It Was Created, and What It Has Achieved

February 22, 2021

In the ongoing debate over whether and how to amend Section 230 of the Communications Decency Act, it is important to understand the law’s origins, meaning, and impact—because, for better or worse, it has shaped the modern Internet.

KEY TAKEAWAYS

Section 230 includes two main provisions: one that protects online services and users from liability when they do not remove third-party content, and one that protects them from liability when they do remove content.
Section 230 is not a limitless legal shield. Online services are still liable for violating federal criminal law or intellectual property law, or for violating federal or state sex-trafficking law.
Congress enacted Section 230 to remove legal barriers that would disincentivize online services from moderating content and to encourage continued growth and development of the Internet in its early years.
Section 230 has allowed a wide variety of business models to flourish online and is partly responsible for creating the modern Internet.
Without Section 230’s legal protections, online services would face large legal expenses that would be detrimental to competition, innovation, and the U.S. economy.

Introduction

What is Section 230?

Why was Section 230 Created?

What has Section 230 Achieved?

Endnotes

Introduction

Who is responsible for illegal speech? Historically, the laws on intermediary liability—the extent to which a third party is legally responsible for the speech of others—have treated those who publish content differently from those who distribute it. Booksellers and newsstands, for example, are not legally responsible when a book or newspaper they distribute contains illegal content, but the publishers of that illegal content are legally responsible. The rise of online services—from early blogs and forums to search engines, social media, online marketplaces, and more—complicated the issue because they blurred the lines between distributors and publishers. In the early days of the Internet, Congress had not yet clarified the issue of intermediary liability for online services, leaving it to the courts to interpret based on an outdated legal precedent that no longer reflected the realities of an Internet-enabled world.

To address this problem, Congress moved to pass legislation that would provide clarity and create an innovation-friendly environment online. The goal was to allow online services to moderate content on their platforms in good faith, removing harmful or illegal content while still providing a forum for free speech and a diversity of opinions. This law, Section 230 of the Communications Decency Act of 1996, has since become the central piece of legislation governing online intermediary liability in the United States.

In the decades since, as the landscape of the Internet has changed—and particularly with the rise of social media—Section 230 has attracted plenty of attention and debate. On one side, the law’s advocates argue that the modern Internet relies on Section 230’s liability protections. The law allows online services that rely on user-generated content to get meritless lawsuits against them dismissed early, without going through lengthy and expensive litigation. Moreover, because platforms are not held responsible for the speech of their users, online services can choose to moderate their platforms to serve the best interests of their communities, creating a forum for free speech and open discourse. On the other side, opponents argue that Section 230 is overly broad, shielding bad actors as well as good ones. Victims of defamation, harassment, and other online abuses cannot hold online services accountable for the harm their users cause. Opponents claim this liability shield removes incentives for online services to remove harmful or illegal content and to prevent harmful or illegal activity from happening on their platforms. Others worry that Section 230’s liability protections give online platforms too much freedom to selectively silence voices they disagree with.

While this debate rages on, most people are unaware of the law that, for better or worse, shaped the modern Internet. Even some policymakers get the details of Section 230 wrong.[1] As policymakers on both sides of the aisle take aim at the law, members of Congress introduce legislation that would amend it, and lawmakers and regulators from other countries criticize the American approach to intermediary liability, it is more important than ever for everyone involved to understand Section 230, the arguments for and against it, and the potential ways forward.[2] This series of Information Technology and Innovation Foundation (ITIF) reports addresses these questions and more in an effort to inform the debate around Section 230 and raise awareness of the complex issues that surround it.

However, it is important to recognize that Section 230 has become a political football, particularly as policymakers debate how large social media platforms should treat political speech. On one side, critics point to the proliferation of harmful political speech, including hate speech and disinformation, on social media as a sign that these platforms are not doing enough. On the other side, critics accuse social media platforms of unfairly limiting users from sharing unpopular political opinions. Yet most of these objections to harmful political speech on social media relate more to the First Amendment than to Section 230. As such, proposals to address political speech on social media will need to go beyond changes to this law. ITIF will follow up with a separate report that examines the debate over political speech online outside of the narrow lens of intermediary liability.

What is Section 230?

The oft-cited Section 230(c)(1) states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”[3] In practice, this means online services are not liable for defamatory or otherwise unlawful content their users post. Section 230(c)(2) elaborates, stating that online services are not liable for “any action voluntarily taken in good faith to restrict access to or availability of [objectionable content.]”[4] This second provision protects online services from liability for engaging in content moderation and enforcing their community standards.

As one court explained, the distinction between Section 230(c)(1) and Section 230(c)(2) is that the former “blocks civil liability when web hosts and other Internet service providers (ISPs) refrain from filtering or censoring the information on their sites,” while the latter ensures a provider “that does filter out offensive material is not liable to the censored customer.”[5] Another distinction is that Section 230(c)(2)’s protection applies only to actions “taken in good faith.” This “good faith” requirement does not exist in the broader Section 230(c)(1).[6] Courts have explained that the requirement exists to protect online services that remove objectionable content, or that mistakenly remove content that is not objectionable, without protecting those that remove content for anticompetitive or other malicious reasons.[7]

Section 230(e) outlines the exceptions to the section’s liability protections and its relationship to state and local laws. Specifically, Section 230 does not apply to federal criminal and intellectual property law. As of 2018, with the passage of the Allow States and Victims to Fight Online Sex Trafficking Act and the Stop Enabling Sex Traffickers Act (FOSTA-SESTA), which amended Section 230, the section also no longer applies to federal and state sex-trafficking law. Section 230 does not prevent states from enforcing their own “consistent” laws, but it does preempt state and local laws that are “inconsistent” with the section.[8] In Voicenet v. Corbett (2006) and Backpage v. McKenna (2012), the courts reinforced that online services are not liable under inconsistent state criminal laws; and in Perfect 10 v. CCBill (2007), the court established that online services are also not liable under inconsistent state intellectual property laws.[9]

When determining whether Section 230 applies to a specific case, courts use a three-pronged liability test. First, the defendant must be a “provider or user of an interactive computer service.” An interactive computer service, as defined by the law, “provides or enables computer access by multiple users to a computer server.” As an example of how Section 230 can apply to both providers and users, imagine a Twitter user tweeting a defamatory statement. Twitter, the provider, would not be liable for that user’s defamation, nor would other users who retweeted the original tweet; only the original poster would be liable for defamation.

Second, in the context of the case, the defendant must not be an “information content provider,” or a “person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet.” In other words, if an online service played a part in the creation of illegal content, it can be liable for that content. Finally, the plaintiff’s claim must treat the defendant as the “publisher or speaker” of the content in question.[10] If a defendant can show that a case meets all three requirements, Section 230 applies, and courts will not hold the defendant liable for the third-party content. If a defendant cannot show that a case meets all of these requirements, courts will not dismiss the case on Section 230 grounds.
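
To see how the three prongs fit together, consider the following minimal sketch in Python. It is purely illustrative: the class, field, and function names are hypothetical, not drawn from any statute or court opinion, and courts apply these prongs through legal analysis rather than a mechanical checklist. The sketch simply shows that Section 230 immunity requires all three prongs to be satisfied at once.

from dataclasses import dataclass

@dataclass
class Claim:
    # Prong 1: is the defendant a provider or user of an
    # "interactive computer service"?
    defendant_is_provider_or_user: bool
    # Prong 2: was the defendant an "information content provider,"
    # i.e., responsible in whole or in part for creating the content?
    defendant_helped_create_content: bool
    # Prong 3: does the claim treat the defendant as the
    # "publisher or speaker" of the content?
    claim_treats_defendant_as_publisher: bool

def section_230_applies(claim: Claim) -> bool:
    """All three prongs must be satisfied for immunity to attach."""
    return (
        claim.defendant_is_provider_or_user
        and not claim.defendant_helped_create_content
        and claim.claim_treats_defendant_as_publisher
    )

# Hypothetical example: a defamation suit against a platform over a user's post.
suit = Claim(
    defendant_is_provider_or_user=True,        # the platform hosts user content
    defendant_helped_create_content=False,     # the user, not the platform, wrote it
    claim_treats_defendant_as_publisher=True,  # the suit targets it as "publisher"
)
print(section_230_applies(suit))  # True: the claim would fail under Section 230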

Why was Section 230 Created?

The rationale for Section 230 arose from a pair of court cases in the 1990s that illustrated how ill-equipped existing U.S. law was to handle intermediary liability in the Internet era. The first of these cases was Cubby v. CompuServe (1991).[11] The defendant, CompuServe, was an online service that hosted forums dedicated to various topics, including the “Journalism Forum.” One of the publications available to the forum’s subscribers was Don Fitzpatrick’s Rumorville, a daily newsletter. Plaintiff Robert Blanchard, owner of Cubby, Inc., sued Fitzpatrick for posting allegedly defamatory remarks about his competing news source, Skuttlebut. He also sued CompuServe for hosting the content.

The plaintiff argued that CompuServe had acted as Rumorville’s publisher and was therefore liable for Fitzpatrick’s statements. But because CompuServe had no firsthand knowledge of the contents of Rumorville, had no control over its publication, and had no opportunity to review the newsletter’s contents, the U.S. District Court for the Southern District of New York ruled that CompuServe had acted as a distributor, not a publisher, and was therefore not liable for the defamatory statements published in Rumorville.

This distinction between publishers, which are liable for the statements they circulate, and distributors—such as a bookstore or a newsstand—which are not, emerged from the Supreme Court case Smith v. California (1959). That case dealt with a Los Angeles city ordinance that criminalized the possession of obscene books. After bookseller Eleazar Smith was convicted under the ordinance, he appealed on the grounds that it was unconstitutional. The Supreme Court agreed, stating that if ordinances such as Los Angeles’ were permitted, “Every bookseller would be placed under an obligation to make himself aware of the contents of every book in his shop. It would be altogether unreasonable to demand so near an approach to omniscience.”[12]

A few years after Cubby, the New York Supreme Court heard a similar case, Stratton Oakmont v. Prodigy (1995).[13] The defendant was another online service, Prodigy Services Company. An anonymous user had posted defamatory statements on Prodigy’s “Money Talk” bulletin board about Daniel Porush and his brokerage firm, Stratton Oakmont, Inc. This time, the court took the opposite approach. Because Prodigy had a set of content guidelines that outlined rules for user-generated content, a software program that filtered out offensive language, and moderators who enforced the content guidelines—mechanisms the court referred to as forms of “editorial control”—the court held that Prodigy was a publisher, not a distributor, and therefore was liable for the defamatory statements posted on the “Money Talk” bulletin board.

Then-representatives Chris Cox (R-CA) and Ron Wyden (D-OR) felt that the Cubby and Stratton Oakmont decisions ran counter to how they believed the Internet should operate. Online services that exercised no control over what users posted on their platforms and allowed any and all content—including potentially unlawful or abusive content—were protected. On the other hand, online services that made good-faith efforts to moderate content and remove potentially unlawful or abusive material were punished. Cox and Wyden wanted to resolve this discrepancy with legislation that would encourage free speech online and allow online services to engage in content moderation without fear of liability.[14]

To accomplish this, they introduced an amendment to the Communications Decency Act (CDA) named for its authors: the Cox-Wyden Amendment. The CDA was a response to the rise of online pornography and sought to regulate obscenity and indecency online.[15] Congress tacked the CDA onto the much larger Telecommunications Act of 1996, which updated U.S. telecommunications law for the first time in over 60 years in light of technological innovations such as the Internet, and the Cox-Wyden Amendment became Section 230 of the bill. President Clinton signed the Telecommunications Act into law on February 8, 1996.[16]

Shortly after the CDA became law, a group of 20 plaintiffs—including the American Civil Liberties Union (ACLU), the Electronic Frontier Foundation, the Electronic Privacy Information Center, Human Rights Watch, Planned Parenthood, and others—challenged two provisions in the CDA that criminalized sharing “patently offensive” or “indecent” content online where minors could potentially view it. A separate group of 30 organizations, including the American Library Association, and 50,000 individuals also challenged the law, and the two cases went to the Supreme Court together.[17] In Reno v. ACLU (1997), the Supreme Court ruled that the provisions in question were unconstitutional because they violated the First Amendment’s protection of free speech.[18] Although key parts of the CDA were struck down, Section 230 remained.

Section 230(a) and Section 230(b) go into greater detail on Congress’s motivations for including Section 230 in the CDA. The Internet, Section 230(a) explains, offers users a host of benefits: access to educational and informational resources, control over the information they receive, a forum for political discourse, opportunities for cultural development, and avenues for intellectual activity. It has also “flourished, to the benefit of all Americans, with a minimum of government regulation.”[19] In light of this, the purpose of Section 230 is “to promote the continued development of the Internet,” to preserve the free market on the Internet, and to encourage blocking and filtering of objectionable content.[20]

What has Section 230 Achieved?

Proponents refer to Section 230 as “a core pillar of [I]nternet freedom” and “the most important law protecting Internet speech,” responsible for creating “the Internet as we know it.”[21] By protecting online services from liability for third-party content, it has paved the way for a wide variety of business models that rely on user-generated content, and in doing so has shaped the online economy. It is impossible to know how the Internet might have developed differently were it not for Section 230, but it is clear that many common features of websites, such as user reviews and comments, exist because of the liability protection it offers.

Knowledge-sharing websites such as Wikipedia, which rely on user contributions for content and on volunteer editors for “quality control,” are also granted liability protection under Section 230. According to the Wikimedia Foundation, this “community self-governance model” would not be possible without Section 230’s liability shield.[22] Section 230 also protects millions of small blogs, forums, and other websites that publish user content from expensive lawsuits over their users’ speech or their attempts to moderate that speech. A report on the costs of litigating claims based on user speech found that the pre-complaint stage can cost a start-up up to $3,000, and the motion-to-dismiss stage can cost $15,000 to $80,000, a significant cost for a small company. Without Section 230 giving start-ups a way to get such cases dismissed early, their legal expenses would pile up even higher, ranging anywhere from $100,000 to $500,000 or more for each case that reaches the discovery stage.[23]

Section 230 also protects websites that host product and business reviews, which have transformed the way people choose what they buy; a survey of American adults found that 67 percent of respondents check online reviews most of the time or every time they purchase a product, and 85 percent would be less likely to purchase products online that don’t have reviews.[24] Certain industries rely on reviews, with the same survey finding that reviews make respondents feel safer using ridesharing, vacation rental, and maintenance services, and that at least 40 percent wouldn’t use such a service without user reviews.

Perhaps most famously, Section 230 shields social media platforms from liability for their users’ posts and, as its authors intended, allows them to engage in “good faith” content moderation. Given how active Internet users are on social media, it is nearly impossible for these platforms to remove every potentially unlawful or offensive post. For example, every 60 seconds, Facebook’s billions of users post an average of 510,000 comments, 293,000 status updates, and 136,000 photos.[25] Facebook has 15,000 content reviewers working around the world to remove posts that violate the website’s community standards, yet some posts still slip through the cracks.[26] Similarly, as of 2018, more than 500 hours of new video were uploaded to YouTube every minute, and the site has more than 10,000 people reviewing that content.[27]

In a world without Section 230 protections, online services would have to be aware of every post on their websites and make the correct content moderation decision 100 percent of the time to avoid liability for their users’ speech. If, as the Supreme Court ruled in Smith v. California, it’s unreasonable to expect a bookseller to know the contents of a few hundred books, it’s even more unreasonable to expect the operators of a social media platform to know the contents of thousands, millions, or even billions of posts, even when using thousands of human moderators and advanced algorithms.

Section 230 enables and protects a wide variety of businesses and business models, creating an enormous economic impact. One study sponsored by the Internet Association, a trade group of Internet companies, estimated that, without the strong liability protections in Section 230, the United States could lose $440 billion in gross domestic product (GDP) and 4.25 million jobs within a decade.[28]

Section 230 enables online services not only to host user-generated content, but also to remove it without fear of reprisal. Federal Agency of News v. Facebook (2019) provides an example of how important this is. The Federal Agency of News (FAN), a news website based in St. Petersburg, Russia, was allegedly involved in Russia’s campaign to interfere in the 2016 U.S. general election. In its efforts to stop the spread of disinformation, Facebook banned FAN from its platform. FAN then sued Facebook for violating its freedom of speech. A U.S. District Court judge dismissed the case, citing Section 230, among other reasons.[29]

In this way, argues Senator Ron Wyden (D-OR), Section 230 acts as a shield, protecting online services when they overlook potentially objectionable content, and as a sword, protecting them when they remove it.[30] Both functions are important, and both have shaped the modern Internet. The shield protects websites that host user-generated content, allowing anyone with access to a computer or smartphone to share their opinions, connect with their friends, and access information and entertainment. The sword allows those same websites to filter out violent or graphic content, harassment, misinformation, hate speech, and other objectionable content, thereby creating a better user experience.

As the ongoing debate surrounding Section 230 leads to efforts to amend the law, it is important for supporters, critics, and policymakers to understand its context and history in order to hold a productive dialogue. It is also important to understand the many ways Section 230 has benefited online services—from large tech companies to small, independent websites—and their billions of users, and to ensure that any changes do not unintentionally eliminate or reduce these benefits.

About the Authors

Ashley Johnson (@ashleyjnsn) is a policy analyst at ITIF. She researches and writes about Internet policy issues such as privacy, security, and platform regulation. She was previously at Software.org: the BSA Foundation and holds a master’s degree in security policy from The George Washington University and a bachelor’s degree in sociology from Brigham Young University.

Daniel Castro (@CastroTech) is vice president at ITIF and director of its Center for Data Innovation. He writes and speaks on a variety of issues related to information technology and Internet policy, including privacy, security, intellectual property, Internet governance, e-government, and accessibility for people with disabilities.

About ITIF

The Information Technology and Innovation Foundation (ITIF) is an independent, nonprofit, nonpartisan research and educational institute focusing on the intersection of technological innovation and public policy. Recognized by its peers in the think tank community as the global center of excellence for science and technology policy, ITIF’s mission is to formulate and promote policy solutions that accelerate innovation and boost productivity to spur growth, opportunity, and progress.

For more information, visit us at www.itif.org.

Endnotes

[1]Elliot Harmon, “No, Section 230 Does Not Require Platforms to be ‘Neutral,’” Electronic Frontier Foundation, April 12, 2018, https://www.eff.org/deeplinks/2018/04/no-section-230-does-not-require-platforms-be-neutral.

[2]Makena Kelly, “Joe Biden Wants to Revoke Section 230,” The Verge, January 17, 2020, https://www.theverge.com/2020/1/17/21070403/joe-biden-president-election-section-230-communications-decency-act-revoke; Matthew Feeney and Will Duffield, “Holding Section 230 Hostage, Sen. Graham Demands Platforms EARN IT,” CATO Institute, February 5, 2020, https://www.cato.org/blog/holding-230-hostage-sen-graham-demands-platforms-earn-it; Alexandra S. Levine, “British MP Damian Collins Talks Tech at POLITICO,” POLITICO, February 5, 2020, https://www.politico.com/newsletters/morning-tech/2020/02/05/british-mp-damian-collins-talks-tech-at-politico-785070.

[3]47 U.S.C. § 230(c)(1) (1996).

[4]Ibid.

[5]John Doe v. GTE Corporation, 347 F.3d 655 (2003).

[6]E-Ventures Worldwide v. Google, No. 2:14-cv-646-FtM-29CM, 2016 U.S. Dist. LEXIS 62855 (M.D. Fla. May 12, 2016).

[7]e360Insight, LLC v. Comcast Corp., 546 F. Supp. 2d 605 (N.D. Ill. 2008); Zango, Inc. v. Kaspersky Lab, Inc., 568 F.3d 1169 (9th Cir. 2009) (Fisher, J., concurring).

[8]47 U.S.C. § 230(e) (1996).

[9]Voicenet Communications, Inc. v. Corbett, 2006 WL 2506318 (E.D. Pa. 2006); Backpage.com, LLC v. McKenna, 881 F. Supp. 2d 1262 (W.D. Wash. 2012); Perfect 10, Inc. v. CCBill LLC, 488 F. 3d 1102 (9th Cir. 2007).

[10]Kathleen Ann Ruane, “How Broad a Shield? A Brief Overview of Section 230 of the Communications Decency Act” (Congressional Research Service, February 2018), https://fas.org/sgp/crs/misc/LSB10082.pdf, 1–2.

[11]Cubby, Inc. v. CompuServe, Inc., 776 F. Supp. 135 (S.D.N.Y. 1991).

[12]Smith v. California, 361 U.S. 147 (1959).

[13]Stratton Oakmont, Inc. v. Prodigy Servs. Co., No. 31063/94, 1995 N.Y. Misc. LEXIS 229 (N.Y. Sup. Ct. May 24, 1995).

[14]Emily Stewart, “Ron Wyden Wrote the Law That Built the Internet. He Still Stands By It – And Everything It’s Brought With It,” Recode, May 16, 2019, https://www.vox.com/recode/2019/5/16/18626779/ron-wyden-section-230-facebook-regulations-neutrality.

[15]47 U.S.C. § 223 (1996).

[16]“S. 652 – Telecommunications Act of 1996,” Congress.gov, accessed January 31, 2020, https://www.congress.gov/bill/104th-congress/senate-bill/652/text.

[17]“Reno v. ACLU – Challenge to Censorship Provisions in the Communications Decency Act,” American Civil Liberties Union, updated June 20, 2017, https://www.aclu.org/cases/reno-v-aclu-challenge-censorship-provisions-communications-decency-act.

[18]Reno v. American Civil Liberties Union, 521 US 844 (1997).

[19]47 U.S.C. § 230(a) (1996).

[20]Ibid.

[21]Sarah Jeong, “A New Bill to Fight Sex Trafficking Would Destroy a Core Pillar of Internet Freedom,” The Verge, August 1, 2017, https://www.theverge.com/2017/8/1/16072680/cda-230-stop-enabling-sex-traffickers-act-liability-shield-senate-backpage; “Section 230 of the Communications Decency Act,” Electronic Frontier Foundation, accessed February 7, 2020, https://www.eff.org/issues/cda230; Matt Laslo, “The Fight Over Section 230—and the Internet as We Know It,” Wired, August 13, 2019, https://www.wired.com/story/fight-over-section-230-internet-as-we-know-it/.

[22]Leighanna Mixter, “Three Principles in CDA 230 That Make Wikipedia Possible,” Wikimedia Blog, November 9, 2017, https://blog.wikimedia.org/2017/11/09/cda-230-principles-wikipedia/.

[24]Internet Association, “Best of the Internet: Consumers Rely on User-Generated Content” (Internet Association, June 2019), https://internetassociation.org/files/ia_best-of-the-internet-survey_06-26-2019_content-moderation/.

[25]Dan Noyes, “The Top 20 Valuable Facebook Statistics – Updated November 2019,” Zephoria, October 30, 2019, https://zephoria.com/top-15-valuable-facebook-statistics/.

[26]“Facts About Content Review on Facebook,” Facebook, December 28, 2018, https://about.fb.com/news/2018/12/content-review-facts/.

[27]Ammar Frangoul, “With over 1 billion users, here’s how YouTube is keeping pace with change,” CNBC, March 14, 2018, https://www.cnbc.com/2018/03/14/with-over-1-billion-users-heres-how-youtube-is-keeping-pace-with-change.html; Sam Levin, “Google to hire thousands of moderators after outcry over YouTube abuse videos,” The Guardian, December 5, 2017, https://www.theguardian.com/technology/2017/dec/04/google-youtube-hire-moderators-child-abuse-videos.

[28]Christian M. Dippon, “Economic Value of Internet Intermediaries and the Role of Liability Protections” (NERA Economic Consulting, June 2017), https://internetassociation.org/wp-content/uploads/2017/06/Economic-Value-of-Internet-Intermediaries-the-Role-of-Liability-Protections.pdf.

[29]Federal Agency of News LLC v. Facebook, Inc., 395 F. Supp. 3d 1295 (N.D. Cal. 2019).

[30]Stewart, “Ron Wyden Wrote the Law.”
