No, We Aren’t in an Oppenheimer Moment for AI

July 28, 2023

Oppenheimer, the new box office hit about the physicist who helped create the atomic bomb, has many people drawing parallels between the control of nuclear weapons and calls to curb AI. Policymakers, the argument goes, are in an “Oppenheimer moment” with AI, and as Alexander Karp, the CEO of Palantir Technologies, writes in a recent New York Times op-ed, they must decide “whether to rein in or even halt the development of the most advanced forms of artificial intelligence” or “to allow more unfettered experimentation with a technology that has the potential to shape the international politics of this century in the way nuclear arms shaped the last one.”

While the op-ed ultimately makes some reasonable arguments—the United States shouldn’t stop developing AI because its adversaries won’t, and U.S. tech companies should work more closely with the U.S. government—it does so by leaning into a flawed analogy that suggests recent AI advancements are on par with the creation of the atomic bomb. But developing and deploying powerful AI systems is not like developing and deploying the atomic bomb.

One crucial difference is the complexity and diversity of potential outcomes with AI. Atomic bombs are inherently catastrophic, which means using them has a straightforward and extreme binary outcome: when detonated, they cause immense destruction and mass harm; if they aren’t detonated, the world remains the same. But AI is not inherently dangerous, and the outcomes of using it are multifaceted, dynamic, and context-dependent. Yes, AI-enabled autonomous vehicles, like human-operated cars, may crash and kill, but they can also autonomously deliver food to conflict or disaster zones, as the United Nations is doing in South Sudan. Powerful large language models (LLMs) could potentially suggest ways to design biological or chemical weapons, but these models can also help design much-needed drugs faster and more cheaply than before. And AI-enabled deepfakes can be used to misinform people, but they can also simulate complex medical conditions for better diagnosis, facilitate immersive learning experiences, and make TV and film more accessible to a wider audience.

Another difference is in the capabilities of these technologies. The atomic bomb was a remarkable scientific achievement of the Manhattan Project, a U.S.-led top-secret government effort to harness the tremendous energy of splitting the atom. The achievements in AI have no doubt been significant—it has passed the Turing test for its ability to mimic human-like intelligence—but the prevailing inclination not only to anthropomorphize these systems but also to assume superintelligence is just around the corner wrongly suggests that current AI systems are but a few development cycles away from creating Skynet. The reality is neither dystopia nor utopia: what AI systems can do is still very limited. Society is likely decades or more away from widespread adoption of AI even in areas that have seen tremendous research and investment, like self-driving vehicles, so talk of AI as an existential risk to humanity remains entirely speculative.

Policymakers don’t have an all-or-nothing decision to make; rather, their task is to carefully craft policies that strike the right balance between mitigating potential risks and maximizing the economic and social benefits of AI. Right now, they are unfortunately focused only on the first part: mitigating risk. To be clear, this part is important, and some of the approaches Congress and the administration have taken so far are encouraging, such as the development of the NIST AI Risk Management Framework. Indeed, Senator Schumer’s proposal for AI legislation is on the right track, and the White House’s announcement last week that it is working collaboratively with the private sector to manage AI risks through voluntary commitments and independent assessments, including red teaming, is heartening. But the government is not doing nearly enough on the other part: bringing the benefits of AI to life.

These benefits will not be realized without a multipronged national AI adoption strategy that translates opportunities into all the areas where AI can make a positive difference in people’s lives. It is critical that Congress focus on crafting policies that accelerate public-sector adoption of AI by addressing challenges related to acquisition, funding, and oversight, as well as industry adoption of AI by supporting sector-specific AI strategies. And the Biden administration, which is currently developing a national AI strategy, should focus on innovation.

The impulse to force lessons from Oppenheimer may prove hard to resist, but if policymakers are going to do that, they should apply the same principle to the film’s box office pair, Barbie. Barbie suggests that dreaming big, challenging the status quo, and pursuing equality can lead to progress. Likewise, the possibilities for AI are limitless, and policymakers should be striving to encourage diverse applications, promote inclusivity, and empower future generations to solve global challenges.
