Algorithms Are Not the Enemy

December 8, 2022

Regulating social media has been near the top of policymakers’ to-do lists, and a theme has emerged among many of the bills facing Congress: punishing or disincentivizing the use of algorithms. While well-intentioned, these bills, designed to make the Internet a better, safer place, could just as easily end up doing the opposite.

This approach is the result of critics who argue that social media platforms’ algorithms may be detrimental to users’ mental health and lead to increased political polarization or even radicalization. Central to many of these arguments is the claim that social media platforms amplify controversial or harmful content, either as part of a deliberate scheme to capture users’ attention and keep them on the platform or as a side-effect of algorithms automatically amplifying content that users interact with more, including content that incites negative reactions.

There are examples of harm taking place on social media platforms, where algorithms sometimes do amplify controversial or harmful content. But banning algorithms, or disincentivizing their use, would strip away all the benefits algorithms offer social media users. A mandated return to the early days of the Internet, when information was shown exclusively in chronological or another non-optimized order, may appeal to some nostalgic users, but it would inconvenience many others and harm both individuals who rely on social media to make a living and businesses that use social media to reach consumers.

Additionally, the focus on social media algorithms ignores the bigger picture, a common theme in certain Internet policy debates. Many online services outside the realm of social media use algorithms to rank and order content in a way that is convenient for their users. Streaming services use algorithms to recommend media to their users based on factors like what they have enjoyed in the past or what is popular among other users. Online marketplaces use algorithms to recommend products in a similar way. Many different types of online services use search algorithms that sort search results by relevance.

Some critics have even called for banning certain forms of algorithmic recommendation, such as recommending content to children or recommending news content. Congress’ attempts to regulate algorithms do not go so far as to ban them. However, a few bills would heavily disincentivize the use of algorithms.

For example, the Protecting Americans from Dangerous Algorithms Act would amend Section 230 of the Communications Decency Act to make online services liable for third-party content if they use algorithms to “rank, order, promote, recommend, amplify, or similarly alter the delivery or display of information.” To avoid costly lawsuits, online services would either have to stop using algorithms entirely or change the way their algorithms work to avoid promoting any content that may be harmful or controversial. The former would cut users off from the benefits of algorithms for easily finding relevant, interesting content. The latter could end up suppressing relatively harmless content, including political discourse.

Several privacy bills also include provisions related to algorithms. Most notably, the Kids Online Safety Act requires online services used, or “reasonably likely” to be used, by minors—a very broad definition that covers a wide swath of the Internet—to explain how they use their algorithmic recommendation systems and to provide minors and their parents the ability to modify the results or opt out of these recommendations. Despite the common refrain that “every child is unique,” legislators want online services to treat everyone the same. For some online services, these proposed requirements may be infeasible or very difficult to meet. Overall, the bill demonstrates a lack of understanding of just how ubiquitous algorithms are online and how much value they provide to consumers.

Any law that fails to account for the benefits of algorithms and their many uses outside social media runs the risk of causing more problems than it solves, especially laws that would punish or disincentivize the use of algorithms. Congress should take a more targeted approach to solving the problems of online privacy and content moderation that addresses specific harms instead of taking aim at one of the pillars of the modern Internet.

Unfortunately, it may not be Congress that deals a fatal blow to the modern Internet: the Supreme Court will rule on an upcoming case, Gonzalez v. Google, that will determine whether algorithmic recommendation is covered by Section 230’s liability protections. Ideally, the Supreme Court will adhere to decades of court precedent on Section 230, and online services can continue to experiment with how to use algorithms to benefit their users and provide better experiences while minimizing potential harm.
