---
title: "Pre-Approval for AI Models Would Slow Innovation Without Improving Safety"
summary: |-
  Requiring government approval before releasing advanced AI models would slow innovation, politicize AI development, and weaken U.S. competitiveness. Instead, policymakers should focus on collaborative safety efforts and strengthening cybersecurity.
date: "2026-05-11"
issues: ["Artificial Intelligence", "Data Innovation", "Cybersecurity"]
authors: ["Daniel Castro"]
content_type: "Blogs"
canonical_url: "https://datainnovation.org/2026/05/pre-approval-for-ai-models-would-slow-innovation-without-improving-safety/"
---

# Pre-Approval for AI Models Would Slow Innovation Without Improving Safety

Concerns about advanced AI models—particularly systems such as [Claude Mythos](https://red.anthropic.com/2026/mythos-preview/)—are driving renewed calls in Washington to [require government review](https://www.nytimes.com/2026/05/04/technology/trump-ai-models.html) before companies can release frontier models. These concerns center largely on cybersecurity risks. As Anthropic recently stated, [frontier AI systems](https://www.anthropic.com/glasswing) may now “surpass all but the most skilled humans at finding and exploiting software vulnerabilities.” Those concerns are legitimate, but requiring government approval before releasing AI models would be the wrong response.

Recent comments from [Kevin Hassett](https://www.youtube.com/watch?v=rbXTTjymqjo), director of the White House National Economic Council, illustrate the direction some policymakers appear to be considering. Hassett said the administration was studying a possible executive order that would require advanced AI systems to be “proven safe” before release, comparing the concept to FDA review of pharmaceuticals.

The White House later appeared to soften that position. [Chief of Staff Susie Wiles](https://x.com/SusieWiles47/status/2052192419718783246) emphasized that the administration was “not in the business of picking winners and losers” and said the goal was to support innovation rather than bureaucracy. That tension reflects the core problem with a pre-approval regime: Once government permission becomes necessary for release, technical decisions inevitably become political ones.

The FDA analogy is fundamentally misplaced. AI models are not pharmaceuticals. Drugs are physical interventions distributed into human bodies after controlled testing. AI models are information products, more analogous to software, publishing, or speech. The United States has long rejected systems of government pre-approval for speech and publication. That does not mean all speech is unrestricted; laws governing defamation, fraud, incitement, and other harms impose liability for misuse or unlawful conduct after publication rather than requiring prior government approval. Even the much narrower process requiring former national security officials to submit books for prepublication review has generated [years of disputes](https://thehill.com/policy/national-security/3498000-scotus-wont-take-case-of-former-national-security-officials-challenging-review-of-their-books/) over delays and restrictions on legitimate speech.

A pre-approval regime would also slow innovation. Model development cycles are often measured in months, and timing matters in a highly competitive global market. Federal review systems operate on slower and less flexible timelines. For example, safety [reviews for new aircraft](https://www.faa.gov/aircraft/air_cert/airworthiness_certification) take the Federal Aviation Administration five to nine years. Requiring firms to obtain government permission before release would introduce delays, uncertainty, and compliance costs that discourage iteration and experimentation.

A pre-approval regime would also risk politicizing AI development. Firms would face the prospect that their next-generation products could be delayed, restricted, or reshaped based on shifting political judgments. For example, the recent dispute between [Anthropic and the U.S. Department of War](https://www.courtlistener.com/docket/72379655/anthropic-pbc-v-us-department-of-war/) would have looked very different if the government could easily hold up all commercial deployments of a company’s frontier models. In such a world, AI labs would compete less on technical merit and more on regulatory positioning, distorting incentives and discouraging investment in frontier AI development.

Importantly, these restrictions would not stop global AI progress. Other countries will continue producing frontier AI models, and open-weight models released abroad will be accessible online. U.S. firms would bear the burden of compliance, while foreign competitors would face fewer constraints. As in [debates over end-to-end encryption](https://itif.org/publications/2020/07/13/why-new-calls-subvert-commercial-encryption-are-unjustified/), restricting mainstream U.S. providers would do little to prevent determined bad actors from accessing similar capabilities elsewhere. The likely result would be to weaken the U.S. AI ecosystem without meaningfully reducing global access to advanced models.

Many policymakers also assume that more capable AI systems inherently create ever-greater cybersecurity threats. But the deeper problem is that the current cybersecurity environment is already weak. Critical systems often rely on insecure code, unpatched vulnerabilities, and outdated infrastructure that malicious actors routinely exploit today without advanced AI. Frontier AI did not create insecure systems; it changed the economics of exploiting them. The appropriate response is not to suppress AI capabilities indefinitely, but to radically improve cybersecurity.

That requires a different policy approach. Congress should increase funding for the [Center for AI Standards and Innovation (CAISI)](https://www.commerce.gov/news/press-releases/2025/06/statement-us-secretary-commerce-howard-lutnick-transforming-us-ai) to evaluate risks posed by frontier models and develop targeted safeguards. Partnerships with leading AI firms represent a more constructive model than sweeping pre-approval mandates. These arrangements provide the government with access to advanced systems for national security testing while preserving a collaborative relationship with industry.

The major AI developers have generally not objected to government access for evaluation purposes; most already participate voluntarily in testing and safety initiatives. The more important question is what happens after the government gains access. One possibility is that the administration attempts to impose constraints through contractual obligations rather than formal regulation, particularly given that many frontier AI companies hold government contracts. That approach may avoid a formal licensing regime, but it would still create uncertainty about the scope of government influence over model deployment decisions.

The contrast with Europe is also instructive. [European officials](https://www.politico.eu/article/eu-pressure-builds-on-anthropic-over-mythos-hacking-risks/) have expressed frustration about being excluded from some frontier AI testing arrangements. But that reflects a broader difference in philosophy. The [EU AI Act](https://artificialintelligenceact.eu/) is centered on compliance obligations and penalties. CAISI, at its best, is centered on collaboration, technical evaluation, and intra-industry learning. Those are fundamentally different governance models—the U.S. government is poised to be a collaborator, the EU an adversary.

The United States has a strong interest in maintaining leadership in AI while addressing genuine security risks, but a sweeping pre-approval regime would undermine both objectives. A better approach is to strengthen collaborative safety institutions, harden vulnerable systems, and develop targeted safeguards for specific risks without placing frontier AI development under a standing government permission structure.

*Image preview credit:* [*Mark Bellingham/Flickr*](https://flic.kr/p/ewcLkm)
