National Security Memo Reminds Policymakers What Is at Stake for the United States in the Global AI Race
On October 24, 2024, President Biden signed a National Security Memorandum governing the use of AI for national security, an action the administration had been working on in response to a directive in last year’s executive order on AI. The memo outlines three main objectives, each accompanied by specific action items. Overall, it demonstrates that the Biden administration takes the threat of the United States losing the global AI race seriously and recognizes the serious repercussions of falling behind for national security. It serves as a reminder to policymakers about what is at stake.
The memo is chock-full of policy prescriptions. The first objective is to establish the United States as a leader in safe, secure, and trustworthy AI. It prioritizes attracting skilled AI talent as a national security concern, directs multiple agencies to leverage legal authorities to bring AI experts to the U.S., and mandates an assessment of the U.S. AI talent market. Additionally, it calls for an economic analysis of the U.S. AI ecosystem and emphasizes the development of computational resources and infrastructure for AI, including support from the Department of Energy’s national labs and the National Science Foundation’s National AI Research Resource (NAIRR).
The memo also makes protecting U.S. AI from foreign intelligence threats a national priority. It directs relevant agencies to improve the identification and assessment of foreign threats to U.S. AI, pinpoint critical nodes in the AI supply chain and potential vulnerabilities, and strengthen oversight of foreign investments in, and acquisitions of, critical U.S. AI technologies.
Finally, it establishes the Commerce Department's AI Safety Institute (AISI) as the primary hub for pre- and post-deployment testing of frontier AI models. The memo also calls for the development of specific guidance for testing AI capabilities in sensitive areas such as the cyber, nuclear, biological, and chemical domains, and directs agencies to prioritize research on AI safety and security. The actions outlined in this first objective are the strongest in the memo and showcase how the federal government can play a starring role in supporting and protecting U.S. innovation.
Ironically, one of the biggest threats to U.S. leadership in AI, and consequently to national security, comes not from foreign actors but from U.S. regulators. The memo does not address this risk directly, a notable oversight. In particular, efforts by antitrust regulators to break up leading U.S. tech companies and investigate U.S. AI chipmakers would hurt U.S. competitiveness in AI and help strategic competitors like China pull ahead. Unfortunately, the administration seems to have its head in the sand on this issue. Still, continued dialogue with the National Security Council will hopefully bring it to the forefront in the future.
The second objective is to harness AI for national security purposes. Key actions include establishing new hiring practices for AI talent, streamlining procurement processes, developing governance frameworks, and fostering international partnerships. While these initial reforms—recruiting talent and reforming procurement—are necessary, they address long-standing, deeply entrenched problems within the government, and it remains unclear how agencies will resolve these issues this time. After all, if agencies could fix these problems for AI, why not also for other IT areas, such as cybersecurity or cloud computing?
Developing governmentwide rules and guidance for the use of AI in national security systems is a reasonable and responsible step that should help streamline the adoption of AI for many agencies. Finally, the memo outlines opportunities for international collaboration, including studying the feasibility of co-developing AI with allies, identifying potential partners and forums for collaboration, and creating joint development and testing efforts. Engaging more allies in a U.S. AI coalition will be essential for the United States and its partners to remain competitive with China in the AI landscape.
The third objective is to advance international AI governance that respects democratic values and human rights. The primary task for the Department of State is to develop a strategy to promote global AI safety and democratic values while preventing misuse, particularly in national security contexts. This strategy should align with existing international frameworks, such as the G7 and UN guidelines, and promote shared definitions, norms, and standards for responsible AI development and deployment.
However, this section of the memo is the weakest and most underdeveloped in both vision and tactics. The memo's call for international AI governance rings hollow given current geopolitical realities. Key allies like the EU are already implementing stricter AI regulations that could undermine U.S. competitiveness, while strategic competitors like China are unlikely to be swayed by American proclamations. Additionally, attempts to influence the Global South through normative frameworks are overshadowed by China’s economic diplomacy and infrastructure investments. Consequently, this part of the memo appears to be more aspirational than practical.
National security considerations are often the driving force behind pragmatic policymaking, and the memo’s emphasis on boosting the U.S. AI ecosystem is both welcome and worthwhile. However, it also reflects an ongoing tension within the Biden administration, which has divided its efforts on AI between quantifiable technical benchmarks and evidence-based standards (such as the AI Risk Management Framework) and abstract social principles (like the feel-good platitudes around bias and fairness offered in the Blueprint for an AI Bill of Rights).
As new policymakers assume control next year—in both Congress and the administration—they should recognize the importance of U.S. success in AI for national security and prioritize policies that sustain U.S. leadership in this sector.