Will Super Intelligent Computers Become an Existential Threat to Humanity?

July 1, 2015

As ITIF Vice President Daniel Castro explained at the outset of a recent ITIF event on the future of artificial intelligence (AI), we have seen significant advancement in AI in the past few years, from Google’s self-driving cars to IBM’s Watson to Apple’s Siri. At the same time, several prominent tech leaders—including Elon Musk, Bill Gates, and Stephen Hawking—have expressed concern that these advances in AI will lead to supremely intelligent machines that could pose a threat to humanity. Should policymakers actually be worried, or are their concerns hyperbole?

There was general agreement among the speakers that AI has the potential to greatly improve society, including by helping to alleviate poverty and cure disease. Manuela Veloso, a professor of computer science at Carnegie Mellon University, explained that most technologies present certain risks, but those risks are outweighed by the benefits. She advocated for additional research funding to build protections into future AI.

Some panelists expressed greater concerns over the dangers, especially if the research community does not work to address them in the near term. Nate Soares, executive director of the Machine Intelligence Research Institute, explained that artificial intelligence could be catastrophic for the human race if it is navigated poorly.

Similarly, Stuart Russell, a leading AI professor at UC Berkeley, explained that whether or not AI becomes a threat depends on the actions society takes to address the issue before it becomes one. Russell believes it will be difficult to build human values into intelligent systems to mitigate the risks. Indeed, because society cannot always agree on common values in the first place, it will be difficult to specify the right goals for machines.

Robert Atkinson, the president and founder of ITIF, agreed that work should be happening now to ensure that safety precautions are built into future development. But he warned that harping on potential threats and outlining “doomsday scenarios” could dry up research funding or create a backlash that would halt progress on AI advancements capable of generating substantial benefits.

As the research needed to build protections into AI continues to develop, Ronald Arkin, a regents’ professor and associate dean at the College of Computing at Georgia Tech, said the public should not consider AI in isolation but should instead think about it in the context of the types of problems it will be applied to.

The panelists disagreed somewhat over how long it will be until machines exhibit the level of superhuman intelligence that some worry about, with estimates ranging anywhere from 5 to 150 years. However, they generally agreed that researchers should pursue efforts to ensure machines perform as designed and do not create unintended consequences. Sarah Connor, the protagonist of the Terminator franchise, would likely agree.

Want to learn more? Watch the event video here.
