
Congress Should Preempt Onslaught of State AI Laws

May 7, 2025

States are racing to regulate artificial intelligence, creating a patchwork of laws that threatens to slow innovation, drive up compliance costs, and undermine U.S. global competitiveness. In 2023, the National Conference of State Legislatures tracked over 450 AI-related bills introduced across all 50 states, as well as in Puerto Rico, the Virgin Islands, and Washington, D.C. While some legislation appropriately focuses on state-specific concerns—such as AI use in public education, local government, or election communications—many states are reaching far beyond their borders, proposing sweeping rules that aim to govern AI more broadly.

Colorado, for example, recently enacted a law requiring companies developing and deploying high-risk AI systems to take steps to minimize potential harm. California appeared to pull back when Governor Gavin Newsom vetoed a state AI safety bill, but that was more optics than substance. The California Privacy Protection Agency has continued to develop rules on automated decision-making technologies under the California Consumer Privacy Act, asserting broad regulatory authority over AI through privacy law. In short, even states that appear cautious are continuing to shape the AI regulatory landscape in significant ways.

This rush of individual states to regulate AI mirrors a chaotic intersection where every driver assumes the right of way. Some states barrel forward with aggressive restrictions, others inch along with vague proposals, and a few hit the brakes altogether. The result is a tangled regulatory environment in which innovators struggle to navigate a maze of conflicting mandates, duplicative obligations, and unclear enforcement risks. Companies face rising legal uncertainty as they try to comply with dozens of different and evolving standards—often with little clarity on how rules will be interpreted or enforced.

The trend is accelerating. By March 2025, state lawmakers had already introduced more than 550 AI-related bills this session, putting states on track to set another record. If this continues unchecked, the United States will end up with a web of inconsistent laws that fragment national policy, delay innovation, and create legal and technical barriers to scaling AI systems across state lines.

This rush of state-level legislation also strains the already limited pool of AI policy experts. Researchers, technical experts, and startups—whose input is essential for crafting effective, evidence-based rules—cannot meaningfully engage with hundreds of legislative processes happening in parallel across the country. Instead of enabling thoughtful policymaking, this overload disperses expertise thinly, reducing the quality and consistency of the resulting laws.

This scenario poses a real threat to American leadership in AI. China, by contrast, has adopted a centralized national strategy to become the global leader in AI, aligning government, industry, and research under a unified mandate. A fragmented U.S. regulatory environment risks bogging down startups and established companies alike in complex compliance burdens. When each state sets its own rules for how AI models must be tested, deployed, or labeled, it becomes nearly impossible to operate at a national scale, let alone compete globally.

Congress should act to preempt state laws that impose broad requirements on the development and use of AI. This is not about eliminating all state involvement, especially in areas that are clearly local in scope. But when state laws affect interstate commerce—by dictating how AI is built, used, or governed nationally—federal leadership is essential. A consistent national framework would reduce complexity and promote innovation, while still allowing for targeted safeguards where needed. A federal process would also allow for more focused, high-quality engagement from stakeholders, producing smarter policy with fewer unintended consequences.

Critically, the United States should not be pressured into rushing out sweeping AI laws simply to match the EU’s pace. The EU has taken pride in being first with its comprehensive AI Act, but that regulatory race may prove to be self-defeating. Overly broad mandates risk stifling beneficial technologies before they mature, and in the process, they discourage the kind of experimentation and iteration that drives progress.

The United States already has robust laws governing consumer safety, discrimination, and more—many of which apply to AI today. Rather than reinvent the wheel, Congress should focus on identifying specific, concrete harms that are not addressed under existing frameworks and develop narrow, effective legislative responses.

That’s the approach reflected in recent bipartisan legislation like the Take It Down Act, which makes it a crime to knowingly publish nonconsensual intimate images, including AI-generated deepfakes. Introduced by Senators Ted Cruz (R-TX) and Amy Klobuchar (D-MN) with support from First Lady Melania Trump, the legislation targets a clear and serious abuse of AI. Focused interventions like this protect the public without dragging the broader AI economy into unnecessary red tape.

AI has enormous potential to improve lives, strengthen the economy, and advance national priorities—but only if innovators have the freedom and clarity to build at scale. Congress should act to replace today’s patchwork of conflicting state laws with a cohesive national approach—one that avoids premature overregulation and keeps America at the forefront of global AI leadership.
