
AI Is Much More Evolutionary Than Revolutionary
We hear it all the time: Artificial intelligence is a technology like none before, presenting unique challenges to humanity’s future. If this is true, then the fact that the great technological advances of the last two hundred years have not led to widespread job losses, societal instability, or diminished human worth is much less reassuring. Indeed, the belief that AI signals an entirely new phase of human history is central to the widespread claim that the world is entering dangerous and uncharted waters. This assertion of fundamental uniqueness is the motivation behind much of the drive for stringent and global AI regulation.
But the evidence does not support the assertion, much less the further conclusion that stringent regulation is warranted. Whether we are considering the life-changing innovations of the past, the ongoing expansion of the digital technology industry, the long history of the artificial intelligence field, or even today’s debate about “narrow” AI vs. artificial general intelligence (AGI), the pattern is the same. Evolution, not revolution.
Technology historians often stress the importance of general-purpose technologies: innovations that are used broadly across many industries and serve as platforms for important societal change. In modern times there have been a half dozen or so major ones, including electricity, fossil fuel energy, motorized machinery and transportation, telecommunications, and now digital computing. Each of these technologies improved people’s lives profoundly. And while they came with serious challenges such as safety, pollution, traffic, and climate change, the balance has been overwhelmingly positive for the way the modern world lives. Looking ahead, will AI really match the societal impact of electricity alone, which gave us lighting, refrigerators, washers, dryers, air conditioners, TV, radio, telecom, computers, and much more? I very much doubt it.
Digital technology also has a long history of successful evolution. The idea that a powerful general-purpose computer could be based upon binary logic (ones and zeroes) goes back to the 1930s. Over the next 90 years, that core insight spawned the development of large mainframe computers, software, data storage, smaller and cheaper minicomputers, microprocessors, personal computers, networking standards, the Internet, mobility, social media, cloud computing, Big Data, and now AI, which couldn’t exist without those earlier innovations. It’s been an impressive but steady evolution.
Almost from its inception, digital technology also came with fears and challenges—unchecked automation, loss of privacy, Big Brother-style surveillance, fraud, screen addiction, dependency on machine intelligence, misinformation, super-rich entrepreneurs, and more. But once again the benefits have vastly outweighed the downsides. This seems likely to continue, especially since, for most consumers, the jump from the offline world to today’s mobile, social Internet has been much more life-changing than the current jump from Google search to AI prompts. Just matching the societal impact of the Internet over the 1995–2020 period will be a very high bar to clear, let alone exceed.

Similarly, AI’s rapid progress in the last few years doesn’t change the fact that its underlying ideas have been evolving for nearly a century. Alan Turing stressed the potential of algorithms in the 1930s. The mathematical logic of neural networks goes back to the 1940s. Machine learning was coined as a term and demonstrated in the 1950s. Useful expert systems were built in the 1980s. The seminal shift from symbolic and rules-based AI systems to ones based upon statistics and probability goes back to the 1990s. Google’s DeepMind unit demonstrated the power of deep learning in 2016 when its AlphaGo program defeated Lee Sedol in Go. It’s been a long and evolutionary process. While there have been important recent improvements, the core ideas behind AI have been with us for many decades.
The reason that amazing and highly useful AI systems have only recently emerged isn’t any fundamental technological breakthrough; it’s the availability of zettabytes of Internet training data, much greater processing power, and inexpensive cloud computing and data storage. Inevitably, today’s impressive AI capabilities have triggered 2001: A Space Odyssey-type fears. But once again they’re not convincing. Human oversight is still needed in just about every complex field that matters—be it medicine, law, defense, software, academia, or scientific research. AI alone cannot grow our food, build our roads, or fix our plumbing.
Those who sincerely fear an AI future might concede many of these points. They could counter, however, that the dangers will become more obvious once AI moves from its narrow phase to a generalized one. But that transition, too, is likely to be an evolutionary, not revolutionary, story. While there are various tests and concepts, there is no sharply defined boundary between narrow AI and AGI, and the distinction between the two terms will surely fade over time as AI systems become more flexible, with much improved memory, accuracy, and self-awareness. It’s unlikely that there will be any single event, development, or moment when the world suddenly agrees that AGI has arrived.
As with the technology fears of the past, AI’s risks—the unintended consequences of autonomous systems, deepfakes, control by rogue actors, and the like—will be real, but for the foreseeable future they will be manageable in much the same way that every important technology has been managed in the past: through evolving rules, practices, and system refinements. While it’s easy to imagine a dystopia where superintelligent and highly dexterous legions of robots dominate and revolutionize life on Earth, that’s still the realm of science fiction, where technology fears have always found a home.
In short, evolutionary technology requires evolutionary, not revolutionary responses.