Proposals to Reform Section 230
In the wide-ranging debate over Section 230, there have been calls to keep the law as it is, repeal it entirely, or reform it. The best approach would be for Congress to pass targeted reforms that address specific harms without unduly burdening online services.
Introduction
Section 230 of the Communications Decency Act of 1996 is a vitally important law that governs intermediary liability for online services and Internet users in the United States. While the First Amendment gives online services the right to allow or deny lawful speech on their platforms, Section 230 says that these online services are not liable for unlawful third-party content, even when these services make decisions to allow or deny third-party content. This liability protection has had a profound impact on the development of many online services Internet users enjoy daily, including social networks, online retailers, online games, news sites, podcasts, blogs, and more.
Recently, the law has received extraordinary attention from policymakers and pundits, with prominent voices on both sides of the political spectrum blaming the law for a variety of both real and perceived harms on the Internet, including harassment, hate speech, disinformation, violent content, child sexual abuse material, and nonconsensual pornography. Many critics have grown vocal in arguing that the law is broken and are calling for Congress to repeal the law entirely, while others argue that the law should be amended to address concerns its authors could not have envisioned. However, many of the law’s proponents say that Section 230 is still appropriate and effective more than two decades after Congress enacted the law, and that attempts to change it, especially repealing it, would come with far-reaching negative consequences.
Especially in the aftermath of the attack on the U.S. Capitol, criticism of political speech on social media has reached a crescendo. President Trump, along with many of his supporters on the right, has argued that social networks are unfairly removing lawful content, alleging political bias in response to social networks banning accounts linked to far-right groups and conspiracy theories and labeling some posts as false or misleading. At the same time, President Biden, along with many on the left, has argued that social media companies are too permissive, allowing or even fostering extremist views on their platforms and failing to take sufficient action to moderate harmful political speech. Since the First Amendment prevents policymakers from regulating online speech directly, many have used the threat of Section 230 reform to try to compel social media platforms to either tighten or loosen their content moderation policies. As a result, Section 230 has become a political football, but Section 230 reform is, at best, orthogonal to addressing political speech on online platforms.
While it is true that many proposals to eliminate or alter Section 230 would undermine online services and pose a major setback to free speech and innovation, that does not mean some targeted reforms are not needed. Indeed, as this report shows, it is possible to narrow the liability shield to avoid protecting “bad actors” that are not acting in good faith, while also establishing a voluntary safe harbor provision to minimize nuisance lawsuits and negative spillover effects on innovation. But while reforming Section 230 could address many harms on the Internet, it would not resolve the ongoing debate about political speech, which is grounded more in a debate about the First Amendment and the right set of rules to moderate political speech on large social media platforms than in online intermediary liability. That issue is the subject of a forthcoming Information Technology and Innovation Foundation (ITIF) report.
This report reviews most of the major proposals for addressing Section 230, including proposals that Congress:
- Preserve Section 230 as it is.
- Repeal Section 230.
- Establish size-based carve-outs.
- Establish carve-outs for certain types of content or activity.
- Require online services to comply with a notice-and-takedown requirement.
- Use liability protection as a bargaining chip.
- Exempt state criminal laws.
- Expand federal criminal laws.
- Exempt federal civil enforcement.
- Eliminate the “or otherwise objectionable” clause.
- Establish a “good faith” requirement.
As the report shows, there are a number of options besides keeping Section 230 as it is and repealing it entirely. Each proposed solution has arguments for and against it, but some are more likely to succeed than others.
The report concludes by offering recommendations for how Congress can move forward to address legitimate concerns about Section 230’s shortcomings while safeguarding the benefits of the law. To that end, Congress should take the following steps:
- Establish a good faith requirement to prevent bad actors from taking advantage of Section 230(c)(1)’s liability shield.
- Establish a voluntary safe harbor provision to limit financial liability for online services that adhere to standard industry measures for limiting illegal activity.
- Expand federal criminal laws around harmful forms of online activity that are also illegal at the state level.
Notably, as explained later in this report, establishing either a good faith requirement or a safe harbor provision on its own would be problematic. However, if pursued jointly as part of Section 230 reform, they would address the weaknesses of implementing either proposal independently.[1]
Preserve Section 230
One potential solution to the issue of online intermediary liability would be to keep the law in the United States as it is. Many, but not all, proponents of Section 230 argue for this approach on the grounds that Section 230 is responsible for creating many of the best parts of the Internet, and that changes to the law would have serious, and potentially unforeseen, consequences for the online world. Although Section 230 may not be a perfect law, its proponents believe that its myriad benefits outweigh its few flaws.
It is impossible to know exactly how the Internet would have developed without Section 230, but the online world would almost certainly look very different than it does today, likely with less freedom of expression and less of the user-generated content that now forms the backbone of some of the Internet’s most visited websites. Indeed, protecting the Internet as it is today is a frequent argument for preserving the liability shield the way it is.[2]
The types of websites and online platforms that benefit from Section 230’s liability shield are as diverse as the Internet itself. Much of the recent controversy surrounding Section 230 focuses on social media giants such as Facebook and Twitter and popular video-sharing platforms such as YouTube, but the influence of Section 230 extends much further. It protects knowledge-sharing websites such as Wikipedia, online marketplaces such as eBay, online classified ads such as Craigslist, countless smaller forums and blogs, and every other website that features product reviews or a comments section, including countless websites of small businesses. It also protects users from liability for forwarding emails or retweeting, thereby facilitating communication between users.
Section 230 protects online services from a wave of lawsuits that could attempt to hold them liable for their users’ actions. By allowing these services to thrive, Section 230 forms the foundation of the Internet economy. It has enabled the creation of entire business models that rely on user-generated content.
Section 230 makes it easier for smaller online services to compete with larger ones. In a world without Section 230, larger tech companies would have the resources to defend themselves against lawsuits and bulk up their content moderation systems, while smaller online services would not.[3] Smaller online services that rely on user-submitted content—or large-but-less-profitable ones such as Wikipedia, which is run by the nonprofit Wikimedia Foundation—would have to make the difficult decision of whether to continue operating and risk litigation they cannot afford, fundamentally change the services they offer to decrease their risk, or shut down entirely. Such change would further consolidate market share in the hands of a few large online services, giving a boost to some of the social media giants that are the target of much of the anti-Section 230 rhetoric.
Finally, many proposed changes to Section 230 would have serious implications for the freedom of speech online. Without Section 230 guaranteeing that they will not face liability for third-party content on their platforms, online services would have strong incentives to take a more restrictive approach to content moderation. Instead of just removing content that clearly violates the law or their terms of service, they would also likely remove any content that falls into a gray area where it may or may not be objectionable, because to not do so would mean risking legal trouble. This is known as “collateral censorship,” a form of self-censorship that occurs “when A censors B out of fear that the government will hold A liable for the effects of B’s speech.”[4] For example, platforms may choose to remove lawful, but controversial, political speech—exactly the type of speech the First Amendment was designed to protect—in order to avoid expensive nuisance lawsuits from those who claim to find that political speech objectionable.
Any changes to Section 230 will have far-reaching consequences, but given the current controversy surrounding the law, doing nothing is increasingly not a politically feasible option. The calls for reform are part of a larger trend of public backlash against Big Tech—or “techlash”—that does not appear to be going away any time soon. If Section 230’s supporters refuse to budge from their stance that Section 230 should remain exactly the way it is, they will effectively hand the reins over to the law’s detractors to craft a new intermediary liability law that may go too far in the other direction. Instead, to address legitimate concerns about stopping bad actors, supporters should offer solutions that still protect freedom of expression and innovation.
Repeal Section 230
Some of Section 230’s critics want to repeal the law altogether and leave the issue of online intermediary liability to the courts. They argue that the law does more harm than good, unfairly protecting bad actors, enabling various forms of illegal or harmful online content, immunizing providers from liability for unfairly removing users and content, and giving online services a free pass that no other type of business enjoys. For example, Rep. Louie Gohmert (R-TX) introduced H.R. 8896, the Abandoning Online Censorship Act, to repeal Section 230.[5]
The first argument, that Section 230 protects websites that host illegal content, is a common one. Critics frequently refer to the so-called bad actors that hide behind Section 230’s liability shield. Before Congress passed the Allow States and Victims to Fight Online Sex Trafficking Act and the Stop Enabling Sex Traffickers Act (FOSTA-SESTA) in 2018, adding an exception to Section 230 so that it would no longer apply to sex trafficking, critics frequently cited Backpage as an example of a bad actor.
But sites such as Backpage are not the only bad actors online. There are untold numbers of online services whose users post child sexual abuse material, nonconsensual pornography, defamatory “gossip,” terrorist communication, and more. While some of this illicit content slips through the cracks of the content moderation systems of legitimate platforms, other platforms do little to stop it. By immunizing platforms against civil liability for third-party content, critics argue, Section 230 prevents the victims of these crimes from seeking justice against the online services that possibly could have prevented others from sharing this content.
In addition to hosting illegal content, some online services are a source of legal but harmful forms of online abuse, including hate speech and harassment. Online abuse can lead victims to delete their social media profiles, shut down their websites and blogs, and in extreme cases when online abuse trickles into the physical world, move and change their name or engage in self-harm.[6] Because online abuse disproportionately affects marginalized populations, this is detrimental to equal protection.[7] Again, because of Section 230, victims cannot sue social media platforms for failing to act against hate speech and harassment posted by their users.
Some conservative policymakers, including former President Trump, have called for repealing Section 230. They believe it is unfair for large social media platforms to benefit from Section 230’s liability shield when, in their view, these sites are biased against conservative viewpoints, blocking or suspending accounts from conservatives and removing posts that express conservative political opinions. There is no evidence of systemic conservative bias, and the First Amendment protects the free speech rights of these platforms to make decisions about what content and which users they allow on their platforms.[8] However, Section 230(c)(2) protects these companies from liability for removing content they believe to be objectionable.[9] Eliminating Section 230 would expose these companies to nuisance lawsuits.
Some liberal policymakers, including President Biden, have called for repealing Section 230, but for the opposite reason. They believe that it is unfair for large social media platforms to benefit from Section 230’s liability shield when users spread hate speech, misinformation, and other objectionable content on their platforms. However, repealing Section 230 would negatively impact the free speech of marginalized populations that these policymakers are often trying to protect. Online services would be disinclined to host content relating to controversial political movements such as #MeToo or Black Lives Matter if individuals and groups who opposed those movements, including the targets of their activism, could sue the online services that hosted their discussions and facilitated their organization.
Finally, some critics argue that Section 230 treats online services differently from other businesses. If a physical business facilitated child exploitation or terrorist communication, or if a traditional publication printed user-submitted nonconsensual pornography or defamatory statements, they likely would not escape civil liability. Why, they ask, is the law different for online services, especially since many websites profit from user-submitted content, including illegal content? Critics argue that if moderating that content proves difficult, online services should solve the problem or design their services in a less negligent way to prevent these problems from occurring in the first place.[10]
But the legal landscape prior to Section 230’s passage reveals how repealing the law would be detrimental. Section 230 arose out of a pair of court cases in the 1990s: Cubby v. CompuServe (1991) and Stratton Oakmont v. Prodigy (1995).[11] Taken together, these cases established a counterintuitive precedent for websites that rely on user-generated content: Websites that exercised no control over what was posted on their platforms and allowed all content would not be liable for user content, while websites that exercised good faith efforts to moderate content would face liability. This is the legal landscape America would return to if Congress repealed Section 230.
Some critics argue for repealing Section 230 and also overturning the Cubby and Stratton Oakmont cases that made online services that moderate content liable for their users’ speech, so online services would still have an incentive to moderate content. But even without that legal precedent, repealing Section 230 would still have negative consequences for innovation, free speech, and competition. Large online services would adapt to a world without Section 230, while smaller ones may not have the resources, which would only further consolidate the market share of large platforms. Moreover, platforms would turn to overly cautious and restrictive content moderation practices, removing any potentially objectionable content, which may include valuable forms of expression such as political speech and marginalized speech.
Establish Size-Based Carve-Outs
One proposal to reform Section 230 would introduce size-based carve-outs for intermediary liability so that only large online services would lose Section 230 protection. In other words, Section 230 would only apply to smaller companies, not large ones. The purpose would be to safeguard competition from smaller online services that would not survive without Section 230 protections. This type of proposal is also a manifestation of the ongoing techlash, as it aims to create stricter rules for tech giants for their perceived content moderation failures.[12]
The problem with Section 230, these critics argue, is that the law says online services that host third-party content “shall not be treated as the publisher or speaker” of that content. But large social media platforms are like publishers in two important ways.[13]
First, large social media platforms actively moderate content, deciding what content appears on their platforms and what is taken down. This is not too different from how some early forums and online bulletin boards operated. The difference, critics claim, is that large social media platforms such as Facebook and Twitter are far more ubiquitous than their 1990s counterparts, and their content moderation decisions impact hundreds of millions or even billions of users.[14]
Second, social media platforms amplify content, running algorithms that determine who sees what, and sometimes these algorithms promote harmful content.[15] Critics argue that when large platforms amplify harmful content, the impact is so significant (because hundreds of millions of users may see it), they should be liable for this content.[16]
The first problem with size-based carve-outs is that, counterintuitively, they would actually be detrimental to competition. A small online company would benefit from Section 230 immunity, which would hopefully enable it to succeed and grow. But as it grew and approached the threshold at which it would lose immunity, it would have to make a difficult decision: pass the threshold and adapt on its own to a difficult new set of rules, or get acquired by a larger company that has already established its ability to succeed without immunity. Acquisition by a large, successful company is already a tempting offer; size-based carve-outs would further incentivize small companies to get acquired instead of continuing to grow on their own.[17]
Additionally, virtually all the “bad actors” critics reference when debating Section 230 are smaller companies. Large, established online services such as Facebook, Twitter, and Google have many incentives to address illegal and harmful content on their platforms, not the least of which is their reliance on advertising revenue. Most advertisers, especially national brands, do not want to be associated with websites known for hosting illegal activities or abuse. But there are smaller online services that profit directly from illegal or abusive third-party content—revenge-porn websites, for example—and under a size-based carve-out, they would continue to benefit from Section 230 immunity while many legitimate larger online services would not.
Finally, even if only large platforms had to do without Section 230, collateral censorship would still pose a problem. Smaller websites would have more freedom in their content moderation practices, but larger websites—the websites billions of people use daily around the world—would be more restrictive about the types of content they allow, thereby limiting free expression online. In addition, to the extent this allows smaller, more niche online services to thrive, it could further drive political polarization as people flock to like-minded online communities.
Establish Carve-Outs for Certain Types of Content or Activity
Similar to the proposal to keep Section 230 as is but create an exception for online services of a certain size, another proposal would keep Section 230 as is but create an exception for certain types of content or activity. These proposals usually target a specific form of illegal content or activity that is particularly harmful, such as sex trafficking, and would prevent online services from taking advantage of Section 230’s liability shield if they fail to remove this content or activity when they become aware of its existence on their platform.
The Allow States and Victims to Fight Online Sex Trafficking Act and the Stop Enabling Sex Traffickers Act (FOSTA-SESTA) is the most prominent example of a carve-out for a certain type of content or activity. Congress passed the amendment in 2018 in response to alleged sex trafficking taking place on classified advertising websites, particularly Backpage.[18] The amendment created an exception to Section 230’s liability shield for sex trafficking. Section 230 has always contained an exception for federal criminal law, so online services could still face federal criminal liability for facilitating sex trafficking, but after FOSTA-SESTA, online services can also face federal and state civil liability.[19]
Sen. Mark Warner (D-VA) introduced the Safeguarding Against Fraud, Exploitation, Threats, Extremism, and Consumer Harms (SAFE TECH) Act, which would add several exceptions to Section 230’s liability shield. Under the SAFE TECH Act, Section 230 would no longer apply to ads or paid content, civil rights law, stalking or harassment laws, wrongful death actions, or human rights violations abroad. Section 230 would also no longer apply when an online service fails to remove content upon receiving a court order.[20]
The risk of carving out certain types of content or activity from Section 230 is that it requires online services to determine what is legal or illegal, which can lead to over-enforcement. In order to avoid liability, online services may remove forms of content that fall within a gray area. In the example of FOSTA-SESTA, many online services shut down their classified advertising or dating services that could be used to facilitate sex trafficking but were not designed to do so. Some messaging and cloud storage services also removed any adult content their users shared or stored, even if the content was legal.[21]
In addition, multiple carve-outs to Section 230’s liability shield would render the shield virtually useless. There are many forms of illegal content and activity online the government, most Internet users, and most online services would all agree are harmful: terrorist content, child sexual abuse material, drug trafficking, nonconsensual pornography, and more. But adding exceptions for all these illegal activities would subject online services to numerous lawsuits that Section 230 was designed to protect them against. This would impact not just bad actors that knowingly profit from illegal content, but also legitimate online services that make good faith efforts to keep illegal content off their platforms.
Require Notice and Takedown
There are various proposals to make Section 230’s liability protections contingent on online services meeting certain requirements. One proposal is a notice-and-takedown requirement, which would require online services to remove illegal content—but not necessarily content that is harmful but still legal—within a certain amount of time or face penalties. This proposal borrows ideas from the United States’ approach to online copyright infringement, as well as some other countries’ approaches to intermediary liability. Under a notice-and-takedown approach, websites would receive liability protection for third-party content if, upon receiving a notice that the content is unlawful, they followed a set of procedures for removing it. If they failed to do so, they could be liable for the content.
Passed in 1998, the Digital Millennium Copyright Act (DMCA) established a notice-and-takedown process for addressing online copyright infringement. Under the DMCA, copyright owners can alert an online service to infringing third-party content on their platform by sending them a notice. In response to a valid notice, the service must remove the infringing content “expeditiously” in order to avoid liability. The individual who posted the content may submit a counter-notice if they believe the notice was mistaken and the content is not infringing. If the individual who filed the original notice does not take any further action within 10 days, the service must then restore access to the content.[22] A notice-and-takedown approach to intermediary liability could follow a similar process, replacing “infringing content” with “unlawful content.” Countries that have a notice-and-takedown approach to intermediary liability include New Zealand (Harmful Digital Communications Act 2015), South Africa (Electronic Communications and Transactions Act, 2002), and the United Kingdom (Defamation Act 2013).
Sen. Brian Schatz (D-HI) introduced S. 4066, the Platform Accountability and Consumer Transparency (PACT) Act, which includes a notice and takedown provision for intermediary liability. If an online service is notified of illegal content or activity on its platform and fails to remove the content or stop the activity within 24 hours, it could be liable for that content or activity.[23]
The notice-and-takedown approach has a number of shortcomings. First, online services would struggle with responding to invalid and incomplete notices. This problem exists under the DMCA, where online services occasionally receive notices from copyright holders targeting content that is lawful under fair use, or notices that do not comply with the requirements for a valid notice.[24] Making these determinations can be difficult with regard to copyright infringement and would be even more difficult for other forms of potentially unlawful speech. In addition, requiring online services to remove unlawful content would do nothing about the forms of content that are harmful but legal, including hate speech, misinformation, and bullying, a key concern for many policymakers, especially on the left.
Another problem is that online platforms struggle to keep off their platforms content they have already removed once. Users may repost the prohibited material from a new account, or they may slightly alter the content, which would require reviewing it again. With regard to copyright, it is possible to implement a “notice-and-stay-down” policy, wherein online services use automated tools to check subsequent uploads against known infringing material. But implementing such a policy for text-based content would be much more difficult, if not impossible, because of the difficulty of building systems that can automatically recognize nuances in language. Notice and takedown effectively creates a “Whac-A-Mole” problem that online services would likely struggle to keep up with.[25]
Sen. Warner (D-VA) has proposed establishing a process whereby victims of deepfakes—realistic-looking images and videos produced with artificial intelligence that portray someone doing or saying something that never actually happened—who obtain a judgment against an individual who created offending content could then give notice of this judgment to online services. Online services would then be liable under state tort law if they failed to take down the content or prevent it from being re-uploaded in the future.[26] However, there are a number of limitations to this proposal. First, this would only deal with deepfakes, and only in cases where state law provided protection for individuals. Second, obtaining a judgment against an individual may prove difficult for victims of defamatory deepfakes, especially if they are unable to identify the creator. Finally, this proposal would help individuals remove this content from some large platforms, but they would likely struggle to identify all the potential sites where someone could upload this content.
Use Liability Protection as a Bargaining Chip
Policymakers have advanced various “bargaining chip” proposals that would extend Section 230 liability protections to online services only if they made certain concessions—ranging from a potential ban on end-to-end encryption to adopting terms of service that prohibit users from posting hateful content to eliminating the use of algorithms to rank content in social media news feeds and targeted advertising based on users’ preferences and behavior. Policymakers have proposed varying requirements, but all are generally meant to establish certain minimum guidelines online services would have to implement to keep illegal and objectionable content off their platforms in order to receive liability protection. Any platforms that do not follow these rules—generally thought to be the bad actors—would not benefit from Section 230’s liability shield.
For example, former Representative Beto O’Rourke, in addition to calling for a notice-and-takedown provision in Section 230, proposed changing the law to require “large internet platforms to adopt terms of service to ban hateful activities” which would include “those that incite or engage in violence, intimidation, harassment, threats, or defamation targeting an individual or group based on their actual or perceived race, color, religion, national origin, ethnicity, immigration status, gender, gender identity, sexual orientation or disability.”[27] The goal of his proposal is to limit hate speech and the violence that results from it—such as that which came in the wake of a white supremacist shooting in El Paso, Texas.
There are three primary problems with this proposal. First, it is unlikely to have a significant impact because all the major social media platforms already include these types of provisions prohibiting hate speech in their terms of service. Second, any attempt by Congress to limit legal speech, which can include some forms of hate speech, would likely encounter First Amendment challenges. And finally, if social media platforms more aggressively enforce content moderation policies against offensive speech, they may face even more political backlash, since policymakers across the political spectrum often disagree on what content should be removed or remain online. With red and blue America engaged in an increasingly hot culture war, it is difficult to imagine there will be consensus any time soon on where the boundaries should be.
As another example of a bargaining chip proposal, Sen. Josh Hawley (R-MO) introduced S. 1914, the Ending Support for Internet Censorship Act, which would require companies with over 30 million active monthly users in the United States, over 300 million active monthly users worldwide, or more than $500 million in global annual revenue to prove to the Federal Trade Commission every two years that their algorithms and content moderation practices are politically neutral in order to receive Section 230 liability protection. This proposal hearkens back to the Federal Communications Commission’s (FCC) “fairness doctrine,” which required broadcasters to present news with a balanced perspective, although the FCC abolished the fairness doctrine in 1987.[28] This proposal is one of the clearest examples of how policymakers are using Section 230 as a way to force social media platforms to adjust their content moderation policies. Notably, the proposal would not change any of the core principles of Section 230, and would extend protections to large social media platforms only if they agree to be politically neutral, which is a characteristic that is difficult to measure.
Sen. Schatz’s PACT Act also includes bargaining chip elements, requiring online services to enact certain transparency measures and provide a complaint system for users to report content that is illegal or violates the platforms’ policies and appeal platform decisions to remove user-submitted content in order to continue benefiting from Section 230’s liability protections.[29]
Other bargaining chip proposals include H.R. 492, the Biased Algorithm Deterrence Act of 2019, introduced by Rep. Gohmert, which would eliminate Section 230 protections for any social media service that did not remove all technical measures that filter or sort user-generated content.[30] This bill would require social media sites to display all content in chronological order to receive liability protection for third-party content. Similarly, H.R. 8515, the Don’t Push My Buttons Act, introduced by Rep. Paul Gosar (R-AZ), would eliminate Section 230 protections for online services that curate the content users see based on personal data without their affirmative consent.[31] This would, however, negatively impact many of the features social media platforms offer, such as news feeds that sort stories according to what is most likely to interest users, and features that allow users to explore or discover new content that is similar to content they have liked or interacted with in the past. These features add immense value to users, whereas simply displaying content in chronological order would force users to scroll through content that does not interest them.
There are also multiple bargaining chip proposals that target online services that rely on advertising as a source of revenue. H.R. 8922, the Break Up Big Tech Act of 2020, introduced by Rep. Tulsi Gabbard (D-HI), would eliminate Section 230 protections for online services that sell advertisements that are displayed to users based on their preferences and behavior. It also contains provisions similar to those in the Biased Algorithm Deterrence Act and the Don’t Push My Buttons Act that would treat online services as publishers if they display content in any order other than chronological.[32] Finally, S. 4337, the Behavioral Advertising Decisions Are Downgrading Services (BAD ADS) Act, introduced by Sen. Hawley, would eliminate Section 230 protections for any online service that engages in behavioral advertising. These proposals targeting behavioral advertising fail to acknowledge the benefits of displaying ads according to users’ preferences. Not only is selling targeted ads an important source of revenue for many online services, enabling them to offer their services to users for free and to continue offering new features and innovations, it also results in users seeing ads for products and services that are more likely to interest them.
Finally, Senator Lindsey Graham (R-SC) introduced a bargaining chip proposal with S. 3398, the Eliminate Abusive and Rampant Neglect of Interactive Technologies (EARN IT) Act.[33] The bill would establish a National Commission on Online Child Sexual Exploitation Prevention that would draw up a series of best practices for services to prevent online child sexual exploitation. If online services failed to follow those best practices, they would lose Section 230 protection from claims related to child sexual exploitation laws.
The original bill, introduced in March 2020, would have given the U.S. attorney general the power to add to the list of best practices services must follow in order to retain Section 230 immunity. Given that the attorney general at the time, William Barr, had a firm stance on end-to-end encryption, many tech companies and privacy and security advocates worried that he would use it as an opportunity to declare that companies that use end-to-end encryption are not following best practices to prevent child exploitation. The fact that Sen. Graham had also spoken out against end-to-end encryption increased suspicion surrounding Barr’s motivations.[34]
The EARN IT Act is a prime example of a problem with bargaining chip proposals: They are an easy way for lawmakers to pursue a secondary agenda under the guise of curtailing online crime and abuse. If Congress decides to amend Section 230 or replace it with another piece of legislation, it will need to focus solely on the issue at hand: intermediary liability. Bargaining chip proposals allow Congress to use intermediary liability legislation as a mechanism to settle unrelated issues, which it should instead resolve with separate legislation that specifically addresses those issues.
Exempt State Criminal Law
Another proposal for reforming Section 230 is to add an exemption for state criminal law. There are already a few exceptions to Section 230’s liability protections; namely, it does not apply to federal criminal and intellectual property law or to sex trafficking law.[35] Some critics, in particular a number of state attorneys general, argue that adding an exception for state criminal law would help curtail forms of online abuse that are only illegal at a state level.[36]
This reform would be a relatively small change, and would only require adding a few words to Section 230(e)(1), where it currently reads that “nothing in this section shall be construed to impair the enforcement of … any other Federal criminal statute,” and would instead state that “nothing in this section shall be construed to impair the enforcement of … any other Federal, State or Territorial criminal statute.”[37]
Proponents of this solution, which include attorneys general from 47 states and territories, argue that in the United States, state and federal laws complement each other.[38] The federal government is best equipped to handle some issues, but other issues are better left to the states, just as federalism intended. But critics argue this system does not work when laws such as Section 230 preempt certain state laws and create a gap in enforcement. A popular example of this gap is nonconsensual pornography, as there is currently no federal law criminalizing “revenge porn”—only state laws.[39] Since Section 230 shields online services from state criminal liability, victims cannot pursue legal recourse against revenge porn websites, only against the individuals who initially shared the content. But states also point to other issues, such as deepfakes, for which there are no federal laws, or problems such as identity theft and black-market opioid sales, wherein states play a significant role in enforcing these laws.
There are some problems with this proposal. First, most crimes are already covered by federal law. Revenge porn is a notable exception, but it is one of the few. To the extent that there are gaps, Congress should pass federal laws to cover these areas. Second, online services would have to keep up with a patchwork of 50 different sets of state criminal laws instead of a single set of federal laws, which would be a more difficult task—although it is one that many large companies already have to contend with. Finally, with 50 different states to contend with, as well as 50 different attorneys general, the chances are much higher that one or more of them will pass a bad law that is overly burdensome on online services or takes unexpected enforcement action against an online service. Allowing states to set their own rules for online intermediary liability would allow any one state to effectively set national policy. For example, a state could make online services criminally liable for any illegal activities by users on their platforms when they have “actual knowledge” of such activity—a liability standard that has been rejected at the federal level because of the negative impact it has on services that may seek to moderate their platforms less rigorously in order to avoid liability, to the detriment of their users.
Expand Federal Criminal Laws
As an alternative to adding an exemption to Section 230 for state criminal law, Congress could expand federal criminal law to cover a wider range of illegal activity. Most online crimes are already covered by federal law, including identity theft, child pornography, cyber extortion, hacking, trafficking passwords, and online solicitation of a minor.[40] However, there are certain activities some states have outlawed but the federal government has not, including deepfakes, cyberbullying, and nonconsensual pornography. The federal government could pass laws not only around deepfakes, cyberbullying, and nonconsensual pornography, but also around foreign interference and propaganda in U.S. elections.
Expanding federal criminal law to include these activities would carry fewer negative consequences than alternative approaches: namely, adding an exception to Section 230’s liability protections for specific types of content or activity or adding an exception for state criminal law. Congress did the former when it passed FOSTA-SESTA in 2018, opening online services up to civil liability and state criminal liability for violating sex trafficking laws. As a result, Craigslist shuttered its Personals section and other websites similarly stopped offering certain services simply because those services could be misused and the websites themselves did not want to face liability for that potential misuse.[41]
As opposed to these proposals that attempt to solve the issue of certain illegal activity by creating additional exemptions to Section 230, expanding federal criminal law would address the issue by taking advantage of the existing exemption in Section 230 for federal criminal law. It would also avoid creating a patchwork of inconsistent state laws and enforcement for online services—which almost always have users in multiple states—to contend with. Finally, expanding federal criminal law would not open online services up to civil lawsuits that would carry high legal expenses.
Expanding federal criminal law would allow the federal government to prosecute online services that engaged in illegal activity—such as soliciting revenge porn—but would not hold online services accountable for the actions of criminals who misused their platforms. The latter would place an unreasonable burden on online services and perhaps even incentivize them to monitor their users’ behavior for criminal activity, a potential privacy violation.
Exempt Federal Civil Enforcement
The Department of Justice (DOJ) released its reform proposal for Section 230 in September 2020. As part of this proposal, DOJ suggested amending Section 230 to make it clear that the law’s liability shield does not apply to federal civil enforcement. This would function similarly to the exemption that already exists in Section 230 for federal criminal prosecution, allowing the U.S. federal government to go after online services that have broken federal law in both criminal and civil court.[42]
Specifically, DOJ’s proposed exemption would apply to civil action by the federal government against an online service “related to a specific instance of material or activity that, if knowingly disseminated or engaged in, would violate federal criminal law,” as long as the service had “actual notice” of the material or activity’s existence and unlawfulness and failed to remove it, report it to law enforcement where required by law, or preserve evidence of it. In such a case, an online service could not use Section 230(c)(1) as a defense against the federal government in civil court, just as it cannot use Section 230(c)(1) as a defense against the federal government in criminal court.[43]
DOJ’s argument is that federal civil enforcement complements federal criminal prosecution. In addition, its proposal to exempt federal civil enforcement is a compromise between the current law, which preempts all civil cases against online services related to third-party content (other than those already exempted by FOSTA-SESTA) and proposals that would allow private citizens to sue online services for failing to remove harmful or illegal third-party content. The latter would subject online services to countless nuisance lawsuits, while the DOJ’s proposal would only subject services to civil action from the federal government.
However, it is unclear exactly when such federal civil enforcement would be necessary. If an online service is contributing to illegal activity, as was alleged with Backpage, then DOJ can bring criminal action against it. And if enforcement agencies such as the FTC want to bring cases against online intermediaries but lack the statutory authority to do so, that question should be considered separately from Section 230 reform.
Eliminate the “Or Otherwise Objectionable” Clause
Another reform proposal focuses on narrowing the scope of Section 230(c)(2), which states that online providers shall not be held liable for actions taken in good faith to remove harmful third-party content. Specifically, this section affirms that providers and users are not liable for limiting access to “material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”[44] In particular, critics point to the “or otherwise objectionable” phrase as being too open-ended. Rep. Gosar introduced H.R. 4027, the Stop the Censorship Act, which would strike this entire lengthy clause and replace it with the phrase “unlawful material.”[45]
Two Senate bills, S. 4534, the Online Freedom and Viewpoint Diversity Act, and S. 4632, the Online Content Policy Modernization Act, introduced by Sen. Roger Wicker (R-MS) and Sen. Graham, respectively, would similarly change the language of Section 230(c)(2). Both bills contain a provision that would replace the “otherwise objectionable” phrase with the more-specific “promoting self-harm, promoting terrorism, or unlawful.” The bills would also raise the standard for Section 230(c)(2)’s liability shield, which currently protects online services from liability for removing content they consider to meet that criteria, by instead only protecting them from liability for removing content they “have an objectively reasonable belief” meets the criteria.[46]
These proposals arose from allegations that major online platforms discriminate against and censor conservative speech. These claims have gained even more support in some circles after Facebook, Twitter, Instagram, and other platforms banned or suspended President Trump after rioters broke into the U.S. Capitol on January 6, 2021.[47] But restricting the types of content online services can remove without potentially facing liability would come with serious adverse side effects.
One of Congress’s primary intentions in passing Section 230 was to encourage good faith content moderation. To achieve this, Section 230 gives online services the freedom to moderate content in a way that best suits their users. This freedom has allowed for the development of many different types of online platforms, each experimenting with moderation policies that work best for their communities. Indeed, there is no one-size-fits-all set of content moderation policies that is appropriate for every platform, and platforms regularly update their content moderation policies based on user feedback. If enough users are dissatisfied with an online service’s content moderation, they can create demand for a new, competing service. Tightening the standard for Section 230(c)(2) would incentivize less content moderation, especially of content such as misinformation and bullying that falls into the gray area of being harmful but not illegal.[48]
Establish a Good Faith Requirement
Another proposed reform to Section 230 would be to add a good faith requirement to Section 230(c)(1). This would address the problem of bad actors—websites that knowingly host and profit from illegal or harmful material—taking advantage of Section 230 immunity. It would borrow from Section 230(c)(2), which already contains a “good faith” requirement.
Section 230(c)(2) states that online service providers are not liable for “any action voluntarily taken in good faith to restrict access to or availability of” objectionable content.[49] It applies to content that providers remove, and gives providers leeway in their content moderation decisions, as long as they act “in good faith.” The limits of this protection were tested in E-Ventures Worldwide v. Google (2016), in which a court allowed claims to proceed on allegations that Google had acted anticompetitively, rather than in good faith, when it removed E-Ventures’ listings from its search engine.[50]
Section 230(c)(1), however, applies to all content that providers do not remove, and does not contain a good faith requirement. Adding such a requirement would allow legitimate websites to continue benefiting from Section 230 protection, without shielding bad or negligent actors. Ideally, this language would be as simple as the language that exists in Section 230(c)(2), leaving the interpretation of what constitutes acting in good faith to the courts. However, this protection should not extend to online services that act negligently, allowing illegal or harmful content to proliferate; purposefully profit from illegal or harmful content; or design their services in ways that encourage illegal or harmful content.
A similar proposal comes from Danielle Citron and Benjamin Wittes, who proposed modifying Section 230(c)(1) to state that it only applies to a provider that “takes reasonable steps to prevent or address unlawful uses of its services.”[51] The idea of this proposal is also to eliminate immunity for bad actors. As with adding a good faith provision, this would allow providers to maintain broad liability protections provided they can prove to a court that their response is reasonable.
Sen. Hawley introduced S. 3983, the Limiting Section 230 Immunity to Good Samaritans Act, which would require online services to add a good faith standard to their terms of service, with a fine of at least $5,000 for violating that standard. The bill has a size-based carve-out and would apply only to online services with over 30 million U.S. users, 300 million global users, and $1.5 billion in global revenue over a 12-month period. The bill also allows users to sue online services for violating the good faith standard in their terms of service. The bill’s definition of an online service not acting in good faith includes selectively enforcing the terms of service, failing to honor a public or private promise, or any other action taken with “fraudulent intent.”[52]
There are two main risks of a good faith requirement. First, courts may not take sufficient action against bad actors. Ideally, however, the Congressional Record would make clear what types of bad actors Congress had in mind when discussing online services that do not act in good faith. Second, a good faith requirement would make it significantly more difficult for online services to defend themselves against nuisance lawsuits: not only would they have to prove that they are immune from liability under Section 230 for third-party content, but they would also have to satisfy the greater burden of proving that they acted in good faith, which would likely open up much more costly litigation.
Table 1: Impact of various Section 230 proposals