Slow Progress Is Taking the Fear Out of Artificial Intelligence
In March 2016, the AlphaGo system developed by Google’s DeepMind unit defeated the highly ranked Go player Lee Sedol four games to one. Although IBM’s Deep Blue system had defeated the chess grandmaster Garry Kasparov two games to one (with three draws) nearly two decades earlier, Google’s victory was seen as much more significant. This wasn’t because Go is considered an even more complex game than chess; it was because the Google system used previous Go games and deep-learning technology to essentially train itself. In contrast, IBM took a traditional “expert system” approach, relying on functions and processes hard-coded by humans.
After decades of disappointment, the artificial intelligence community had found a formula for broad-based innovation. There were three main components: 1) the Internet provided the vast amounts of data needed for self-training, whether for playing Go, recognizing images, translating languages, or countless other tasks; 2) the emergence of cloud computing meant that the required processing power was readily and cheaply available on demand; and 3) the World Wide Web enabled new services to be deployed quickly and globally to businesses and consumers alike. In contrast, previous AI efforts lacked all three: data was scarce, computing was expensive, and applications were narrowly deployed.
This new AI innovation model triggered a great deal of excitement, with many forecasting imminent breakthroughs in chemistry, biology, and other scientific fields; self-driving cars; automated customer service; software agents; expert systems in health care, law, and financial services; speech, voice, and image recognition; personalized education; improved security; and all manner of smart products, predictive analytics, and complex problem solving. Today, so-called “generative AI” is even pushing technology into writing, design, the arts, and other frontiers of human creativity, while Elon Musk remains bullish on mass-produced human-shaped robots.
Yet despite significant progress and many useful applications such as language translation, facial and speech recognition, and personalized recommendation engines, truly disruptive advances have yet to occur. The two most prominent AI winners thus far have been Nvidia and, arguably, TikTok. The former provides specialized chips to the AI industry; the latter uses AI extensively to enhance its social media services, although TikTok’s main consumer appeal is its easy-to-use set of video creation tools. Neither company is the societal or business game changer the AI world has envisioned. Although Google is often considered the world’s AI leader, the company’s core products—Search, Gmail, Android, Maps, YouTube, Chrome, and Docs—all began well before the deep-learning era. Similarly, although AI-based services such as Amazon’s Alexa and Apple’s Siri are impressive, their overall market impact pales compared with that of relatively low-tech products such as Microsoft Office or Web browsers.
These observations are not meant as a criticism of AI or a dismissal of its potential; they are merely an acknowledgement that the pace of change has proved much more evolutionary than revolutionary. From a historical perspective, whether game-changing new AI capabilities take 10 or 30 years to fully emerge isn’t a big deal, but for today’s business leaders, policymakers, and the public at large, evolutionary change requires a corresponding mindset.
Revolutionary Fears
The potential downsides of AI have followed a similar arc. Ever since the breakthroughs of 2016, scary predictions have been at least as widespread as the optimistic ones. This wasn’t surprising. From Dr. Frankenstein’s monster to HAL in the movie 2001, there have always been warnings that human-like inventions would spin dangerously out of control. At least a dozen fears have been particularly prominent:
1. AI-based automation will eliminate millions of white- and blue-collar jobs.
2. AI systems are inherently biased and discriminatory.
3. AI systems and algorithms are unaccountable and unexplainable.
4. AI will destroy privacy and lead to a surveillance state.
5. AI will lead to further increases in societal inequality.
6. AI-based deep fakes will confuse and disrupt politics and society.
7. Split-second autonomous AI systems and weaponry will destabilize international relations.
8. Hostile powers will seek to dominate the world through AI.
9. AI lacks human values and ethics.
10. Artificial general intelligence will soon surpass that of humans.
11. AI will diminish human worth.
12. AI systems will go rogue, take control of society, and make humans expendable.
Evolutionary Solutions
As with its benefits, AI’s dangers have also been greatly exaggerated thus far. But the 12 fears above can help us see how issues become much less scary once their impact is spread out over a sufficient number of years. Consider the way that the luxury of time makes coping with AI more manageable:
▪ We can wait and see how jobs change, and whether there will be a surplus or a shortage of workers.
▪ AI biases can be corrected over time by improving the underlying data sets.
▪ Organizations that develop and deploy AI systems will surely be held accountable for them.
▪ It’s America’s choice whether to become more like China’s surveillance state or not.
▪ Only time will tell whether AI is a major driver of income inequality.
▪ Media technologists can develop ways to identify and label deep fakes, and people can become more skeptical viewers.
▪ Just as the major nuclear powers have hot lines to manage crisis situations, they will hopefully develop ways to control autonomous weaponry.
▪ Values and ethics can be built into many AI systems and applications if so desired.
▪ Given the way research is shared globally, it may be impossible for any country to maintain a decisive AI edge.
▪ Generalized machine intelligence that surpasses humans won’t happen for a very long time, if ever.
▪ Even though computers are now the superior players, humans still greatly value winning at chess and Go.
▪ The idea that AI systems will turn against humans may well remain the stuff of science fiction.
COVID-19 has demonstrated the risks of having to make major decisions rapidly and under great pressure, which is one reason why policymakers should welcome AI’s much more evolutionary pace. Five or 10 years from now, we’ll have a much better sense of which of these areas are real problems and which are not. Clearly, it will be much easier to focus on a few actual and observable challenges than to try to anticipate what will happen across a wide range of complex AI domains. The recent collapse of tech stock prices, the widespread layoffs at Facebook, Amazon, and elsewhere, the confusion at Twitter, and the scandal of FTX have also tended to make curbing AI seem less urgent.
From AI to MI
The term “artificial intelligence” emerged in the mid-1950s and is generally credited to Stanford’s John McCarthy, who defined AI as “the science and engineering of making intelligent machines.”[1] While McCarthy’s contributions were deep and many, his use of the term AI was unfortunate. There is nothing artificial about the idea of building intelligent machines, just as there is nothing artificial about the strength of a tractor—and despite nearly 200 years of industrial machine development, human strength and dexterity have not lost their importance. This isn’t just quibbling. By using the term artificial intelligence, we inevitably set up unhelpful comparisons and competitions with the human brain, and all that this implies.
Looking back, the term “machine intelligence” (MI) would have been much better, as it reflects the fact that human and machine capabilities are fundamentally different. Computers are great at high-volume calculations, manipulations, and repetitive tasks but are still pretty stupid in general situations; the human brain is closer to the opposite. Perhaps someday the terminology will evolve. After all, the digital world routinely talks about “machine learning,” yet never uses the term “artificial learning,” because that phrase sounds absurd on its face. If you can have machine learning, you should have machine intelligence, just as we have human learning and human intelligence. If Silicon Valley and China started talking about MI, the rest of the world would eventually follow suit, and the use of this more accurate language would make for less fraught discussions.
Like many of the critiques of digital technology, the current AI situation can be summed up as: tangible benefits, speculative fears. Although serious problems may eventually emerge, today’s complaints are reminiscent of a famous scene in The Godfather Part II. When Michael Corleone protests the murders and reprisals his criminal organization has faced, his fellow mob boss, Hyman Roth, will have none of it, telling Michael, “This is the business we have chosen.” For anyone who believes in the long-term potential of information technology, the opportunities and challenges of AI will increasingly define the business we have chosen. Competition will continue to push innovation forward, as will the deep-rooted human desire to discover what is possible. Given today’s manageable rate of change, there’s no need to put on the brakes, and every reason to step on the gas, at least for now.
About This Series
ITIF’s “Defending Digital” series examines popular criticisms, complaints, and policy indictments against the tech industry to assess their validity, correct factual errors, and debunk outright myths. Our goal in this series is not to defend tech reflexively or categorically, but to scrutinize widely echoed claims that are driving the most consequential debates in tech policy. Before enacting new laws and regulations, it’s important to ask: Do these claims hold water?
About the Author
David Moschella is a non-resident senior fellow at ITIF. Previously, he was head of research at the Leading Edge Forum, where he explored the global impact of digital technologies, with a particular focus on disruptive business models, industry restructuring and machine intelligence. Before that, David was the worldwide research director for IDC, the largest market analysis firm in the information technology industry. His books include Seeing Digital—A Visual Guide to the Industries, Organizations, and Careers of the 2020s (DXC, 2018), Customer-Driven IT (Harvard Business School Press, 2003), and Waves of Power (Amacom, 1997).
About ITIF
The Information Technology and Innovation Foundation (ITIF) is an independent, nonprofit, nonpartisan research and educational institute focusing on the intersection of technological innovation and public policy. Recognized by its peers in the think tank community as the global center of excellence for science and technology policy, ITIF’s mission is to formulate and promote policy solutions that accelerate innovation and boost productivity to spur growth, opportunity, and progress. For more information, visit us at www.itif.org.
Endnote
[1]. John McCarthy, “What is AI?/Basic Questions,” website accessed November 30, 2022, http://jmc.stanford.edu/artificial-intelligence/what-is-ai/index.html.