Without a Federal Moratorium, US AI Policy Will Fragment Further
Congress’ decision to reject a federal moratorium on state-level AI regulation is a missed opportunity. Without a pause, the United States continues to face a patchwork of state laws that confuses consumers, burdens businesses, and slows innovation. A moratorium would have given Congress time to build a coherent national framework and avoid unnecessary regulatory fragmentation.
Several state-level AI bills introduced in 2025 highlight the consequences of this disjointed approach. These proposals are meant to address concerns about bias, accountability, and transparency, but lawmakers often do not see eye to eye on these issues. For example, left-leaning policymakers generally want AI bias measures to promote diversity, equity, and inclusion, while right-leaning ones are more concerned with reducing ideological bias. Moreover, the bills are frequently overbroad, ambiguous, and extraterritorial in scope. Instead of targeted oversight, they impose sweeping mandates that can burden developers, particularly those operating across state lines. The result is a confusing web of regulations that creates legal uncertainty, drives up costs, and stifles innovation.
These problems are evident in New York’s Responsible AI Safety and Education (RAISE) Act (A 6453A), which targets frontier AI models. The law requires developers of these advanced models to maintain detailed documentation, conduct third-party audits, and report incidents within 72 hours; violations carry civil penalties of up to $10 million. Like most of the state bills, RAISE applies to any AI system used in the state, meaning that out-of-state and even international developers are subject to its requirements. The Act’s expansive enforcement and extraterritorial scope could impose duplicative obligations on AI developers, such as additional state-specific audits and safety reporting.
Illinois’ High Impact AI Governance Principles and Disclosure Act (H 3529) imposes transparency requirements on companies that deploy high-impact AI systems. Businesses with 10 or more employees must publish annual (or updated) reports detailing how their systems align with five governance principles: safety, transparency, accountability, fairness, and contestability. These reports must also cover technical details about system design, data sources, risk mitigation strategies, and impact assessments, presented both in plain language for the public and in more detailed versions for specialists. However, the law provides little guidance on how developers should interpret or operationalize these principles. In the absence of clear standards, businesses must navigate compliance on their own, increasing their legal and administrative burdens. The result is a sweeping disclosure regime that overregulates without offering the specificity developers need.
California’s SB 420 further underscores the risks of regulatory fragmentation. The bill requires any developer or deployer of a high-risk automated decision system (ADS), a category that covers systems using AI as well as those using less advanced data analytics or statistical models, to conduct detailed impact assessments before releasing their systems to the public. These assessments must document intended uses, potential risks of discrimination or harm, and mitigation strategies. While designed to prevent harmful and discriminatory outcomes, the bill’s broad definition of “high-risk” and extensive pre-deployment requirements create significant hurdles and costs. Like the New York and Illinois proposals, SB 420’s wide scope risks shaping compliance strategies nationwide in the absence of federal standards.
Together, these laws reflect a troubling trend: the emergence of expansive and inconsistent AI laws with national consequences. The problem is not regulating AI as such, but that state laws vary widely in substance, scope, and coherence, making them difficult to scale to the national level. A startup hoping to launch a new AI product must now comply with dozens of state frameworks, each with its own definitions, requirements, and penalties. Large corporations may manage the added complexity, but smaller firms often cannot, and even large firms may find themselves building out legal teams instead of investing in engineering, all while potentially facing contradictory rules. This limits competition and slows innovation, ultimately raising the barriers to responsible AI development.
The state-by-state approach is also a direct threat to America’s ability to compete globally. China, for example, regulates AI at the national level through centralized standards issued by bodies such as the Cyberspace Administration of China (CAC). The CAC’s 2025 Measures for Labeling AI-Generated Content apply uniformly to all providers, offering a single, predictable compliance framework. China’s approach is more top-down, but it gives companies a clear rulebook; in the United States, by contrast, developers face a growing list of state-specific rules.
A federal moratorium on state-level AI laws would have helped avoid this outcome. Congress could have used the moratorium to consult stakeholders, evaluate the benefits and risks of various regulatory approaches, and design a national framework that protects consumers without unnecessarily restricting innovation. Instead, developers are left to navigate a confusing legal environment, which may slow the rollout of beneficial technologies—particularly in sectors such as healthcare and government services, where responsible deployment of AI could generate significant public benefit.
Looking ahead, Congress should still consider a federal moratorium on state AI laws, particularly those that affect other states, for as long a period as is politically feasible. States like California and Massachusetts do not want Texas or Wyoming shaping their AI rules, and vice versa. The ultimate goal is not to block AI regulation but to coordinate it effectively at the federal level. The United States needs a unified approach to AI governance that fosters innovation and establishes regulatory clarity across state lines.