EU Policymakers Should Ignore AI Concern Trolls
The Future of Life Institute, a non-profit organization focused on what it sees as existential threats facing humanity, released an open letter, signed by various scientists and researchers, warning EU policymakers to beware of calls for “weakening regulation and downplaying potential risks related to AI.” This is a popular tactic among AI alarmists: rather than join the ranks of those opposing the technology outright, they paint themselves as advocates of AI who are merely expressing legitimate concerns. But the rationale for their opposition is as flawed as that of those who proudly wear the mantle of the neo-Luddite, and if policymakers want the EU to succeed in the digital economy, they should not heed the advice of these AI concern trolls.
One of their primary concerns is that AI will become “more capable, more flexible, more general, more continually learning—in short, more intelligent.” But the odds of today’s vastly limited AI technologies becoming anything near “intelligent” in the foreseeable future are extremely low. As AI scientist Gary Marcus writes, “A growing body of evidence shows that state-of-the-art models learn to exploit spurious statistical patterns in datasets…instead of learning meaning in the flexible and generalizable way that humans do.” Likewise, AI scientist Pedro Domingos writes, “Computers can do many narrow tasks very well, but they still have no common sense, and no one really knows how to teach it to them.” In other words, there is no “intelligence” in these systems, only statistical correlation. “The Terminator” is not right around the corner.
Even if the AI alarmists were right that AI advancements are proceeding rapidly, this is exactly what policymakers should welcome; indeed, the goal of the EU’s AI policy is to improve the state of the art. Those suggesting that advancements in AI are a threat undermine public support for the technology. If the signers of the Future of Life letter really believe AI is such a threat, the responsible thing to do would be to call for governments to halt all funding of AI research at universities.
Another of their unfounded fears is that AI will be a “substitute for humans” and will destroy jobs. When discussing economics, it is usually a good idea to listen to economists, not computer scientists. Economic progress, especially since the late 1700s, has been based on the development of technologies that substitute for humans and destroy jobs. Technological innovation produces higher-productivity, higher-paying jobs: This is why Europeans are ten times as prosperous as they were a century ago. As an OECD study notes, historically, “technological progress has been accompanied not only by higher output and productivity, but also by higher overall employment.” Moreover, these fears, or what techno-optimists would call hopes, are vastly overstated: AI’s capabilities remain relatively limited compared to those of humans. And this sort of narrative is, at its heart, highly elitist: None of the signers of the letter work in the physically hard, demanding, low-wage jobs, like picking crops, that AI might one day relieve humanity from doing. They all likely make significantly more than the EU median wage and do not have to worry about finding ways to boost productivity and living standards.
AI detractors also worry that existing structures are not sufficient for AI governance and regulation. But this premise overlooks the EU’s extensive body of existing product safety and liability legislation. These frameworks, to which all businesses are already subject, are relevant to most AI systems, both products and services, currently in the marketplace. Moreover, laws still determine the legality of different practices, and businesses cannot circumvent rules on issues like discrimination or privacy simply by using AI. The invention of mainframe computers did not require the creation of special mainframe computer laws. Rather, where needed, policymakers adjusted existing laws and regulations. AI is no different.
AI detractors have little faith in the future. They believe the potential risks of AI outweigh the benefits and thus call for the EU to adhere to the precautionary principle: the idea that new technologies should be heavily regulated and that governments should proceed cautiously because it is better to be safe than sorry. But this ignores the overwhelming evidence of AI’s positive impact in many areas of the economy and society. If EU policymakers want to foster rapid development and adoption of AI, they should instead follow the innovation principle, which says that when technological innovations benefit society and pose modest, reversible risks, government’s role should be to pave the way for widespread innovation while building guardrails, where necessary, to limit harms.
The signers advance the “if you don’t regulate now, it’s too late” theory, writing that the world is in “the early days of AI, and the choices we make over the next decade will crucially shape its place in and relation to society.” But this is simply wrong. Little about the evolution of this technology means that current developments will be set in stone, immune to later modification by policymakers. The signers use the analogy of the Internet to press for heavy-handed, innovation-limiting regulations now. But the Internet was rightly a largely regulation-free zone for at least two decades until its uses became clearer. Governments had plenty of time to act. And the idea that policymakers in 1995 could have known how to regulate Internet applications like social media, before social media even existed, assumes government has supernatural powers of prediction.
Giving credence to unwarranted AI fears limits public support for the technology, disincentivizes investment in AI research, development, and adoption, and slows the pace of innovation, a disservice to the European Commission’s core objective of making Europe a leader in AI. Policymakers will continue to receive feedback from many different stakeholders, but they should recognize that those who claim to be champions of AI while opposing its very progress, and sometimes even its mere existence, are not the best source of advice on how to craft reasonable laws and regulations.
Finally, policymakers should be wary of organizations or individuals who present their views as noble and disinterested while painting their opponents’ positions as self-interested rent seeking, as the signers do when they write that the “Commission will undoubtedly receive detailed feedback from many corporations, industry groups, and think tanks representing their own and others’ interests.” Perhaps these scientist-advocates simply cannot conceive that an organization with different views on AI policy holds them because it has different goals (accelerating innovation and growth in living standards) and a different analysis of the facts. Nor should policymakers be taken in by the claim that, “as experts,” these scientists speak for all in the AI community. In most fields of science and engineering, especially the closer one gets to policy questions, the idea of a consensus among “scientists” disappears. But as policy advocates, the signers know that presenting their ideologically driven opinions as objective facts held by all AI scientists, and casting themselves as an antidote to corrupt corporations and their equally corrupt handmaiden think tanks, is a morally persuasive argument. The Commission should not fall for this rhetorical tactic; it should review the evidence based on logic, not emotion.