Will AI Regulation “Avoid Past Mistakes” or Just Make Different Ones?

January 30, 2025

The online world has changed rapidly in the two decades since social media entered the mainstream. From MySpace reaching 1 million monthly active users in 2004, to Facebook reaching 1 billion monthly active users in 2012, to over half the world’s population using social media as of 2024, social media has changed the way people communicate, do business, form communities, learn, find entertainment, organize social and political movements, and keep up with current events. However, as part of the ongoing backlash against “Big Tech,” policymakers from both sides of the aisle and various stakeholders have raised a host of concerns about social media, ranging from potential harm to children to the alleged death of democracy.

When artificial intelligence (AI) entered the cultural zeitgeist with the proliferation of generative AI, and particularly with the introduction of ChatGPT in 2022, many policymakers and tech critics began to argue that the United States should learn from its experience with social media. The argument goes: The federal government did not do “enough” to regulate social media when the technology was in its infancy, leading to various real or perceived harms. To avoid the potential harms that could come from AI, the federal government needs to regulate now instead of waiting to see what effect AI actually has on society.

There are serious flaws in this argument. First, it relies on the assumption that regulating social media early in its development would have led to a better online world. Unfortunately, different groups disagree on what a “better world” should look like. Would it be better if social media platforms were required or incentivized to remove more forms of controversial content in the name of protecting users from potential harm, or if social media platforms were required or incentivized to do the opposite in the name of free speech? Would it be better if social media platforms could not collect user data for targeted advertising for the sake of privacy even if it meant users might need to pay a subscription fee? Would it be better if children were banned from social media to protect them from potential harm even if they lose out on important connections, communities, and educational opportunities?

Even if everyone did agree on certain goals, they often disagree on how best to achieve them. Data privacy is a prime example of this problem. Congress has still not passed federal data privacy legislation, and not for lack of trying. Debates over key issues such as private rights of action, preemption of state privacy laws, and opt-in versus opt-out consent requirements have kept Congress gridlocked to this day. Congress should have passed a national privacy law years ago, but this is easier said than done.

Finally, assuming everyone agrees on certain goals and reaches a compromise on how to achieve them, regulation does not always lead to the “better world” policymakers intended. Often, regulations have unintended consequences. Tariffs designed to boost domestic industries can instead undermine U.S. competitiveness. Regulation designed to decrease online sex trafficking can instead put sex workers in danger. Initiatives to save local journalism can decrease consumers’ access to quality news.

Many of policymakers’ proposed social media regulations would carry unintended consequences as well. Increased liability for social media platforms would likely lead to increased censorship. Mandating age verification for social media carries privacy and free speech implications. Policymakers still cannot agree on the best way to regulate social media. How could they have agreed 20 years ago, without the knowledge we have today?

Social media companies have made mistakes, as all companies do. Some of these mistakes have harmed users or caused controversy. Companies could have learned faster and avoided some of these mistakes, but it is much easier to play Monday morning quarterback than to make the right decision in the moment every time, without fail. Learning from those mistakes, companies have continued to adapt and improve in many areas, from content moderation to children’s safety.

While technology has advanced since the advent of social media, there is still no way to see into the future. It is just as impossible to accurately predict what an AI-enabled world will look like 20 years from now as it would have been to accurately predict today’s social media-enabled world 20 years ago. The potential economic cost of regulating AI prematurely is massive, a loss the United States cannot afford as it increasingly cedes ground to China in an industrial competition spanning a range of sectors. This does not even touch on the loss of social benefits from AI, including increased accessibility for people with disabilities, more effective and personalized education, and more effective and personalized healthcare, to name a few.

Regulation for regulation’s sake is never the right approach. Regulation should address specific harms that the free market is ill-equipped to address on its own. There may be specific AI-related harms that regulation could address, in which case the federal government should start by identifying those harms and the best ways to address them that balance the potential risks against AI’s potential benefits. A false equivalency between social media and AI regulation that amounts to little more than “would’ve, could’ve, should’ve” has no place in those discussions.
