New York’s AI Safety Law Claims National Alignment but Delivers Fragmentation

January 7, 2026

New York Governor Kathy Hochul signed a new AI safety law in the final weeks of December, arguing that it aligns with California’s approach and moves the United States closer to a unified framework for regulating advanced AI. The governor is leaning hard on surface similarities, such as shared definitions, thresholds, and language about catastrophic risk, to cast New York’s move as evidence that states are converging on a single national AI framework. What that framing obscures is that the differences between the two laws do far more to widen the gap between state AI regimes, and to pull the country away from a coherent, pro-innovation national approach, than their similarities do to close it.

The case for similarity is easy to make. Like California’s SB 53 (the Transparency in Frontier Artificial Intelligence Act), New York’s Responsible AI Safety and Education (RAISE) Act targets “frontier” AI systems defined as models trained using more than 10²⁶ floating-point operations (FLOPs) of compute. Both laws require developers of such systems to produce formal safety documents that describe how they identify and mitigate “critical” or “catastrophic” harms. Those harms are defined in similar terms, focusing on AI systems that could materially assist in the creation of chemical, biological, radiological, or nuclear (CBRN) weapons, enable mass-casualty events, or cause more than $1 billion in economic damage. Many of the accompanying compliance mechanisms also look similar, including mandatory reporting of serious safety incidents and whistleblower protections for engineers and researchers who raise safety concerns. Judged only by these shared elements, New York’s law looks like a simple eastward extension of California’s model.

However, the differences between California’s and New York’s AI laws are far more telling than their similarities. The RAISE Act differs materially in scope, intent, and governance philosophy, and is better understood as a more intrusive regime that breaks from California’s already flawed approach and layers additional, more onerous problems on top of it.

While both laws target frontier models, they apply very different financial filters to determine which companies are subject to the heaviest compliance obligations. California reserves its strictest rules for firms with more than $500 million in annual revenue. New York, by contrast, ignores revenue and focuses on the cost of a single training run. Any company that spends more than $100 million to train a model is treated as a “large developer” and subject to heightened oversight. This completely changes who the laws actually reach. A lean, pre-revenue startup or research lab could remain outside California’s strictest tier while falling squarely within New York’s regulatory net.
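
To make that divergence concrete, the sketch below models the two applicability filters in Python. The dollar and compute figures are those described above; the function names, the simplified tests, and the hypothetical startup are illustrative assumptions, not a reading of either statute’s actual text.

```python
# Illustrative sketch only: threshold figures come from the article's description;
# function names, inputs, and the hypothetical startup are assumptions.

FLOP_THRESHOLD = 1e26  # "frontier" compute threshold shared by both laws

def covered_by_california_top_tier(training_flops: float, annual_revenue_usd: float) -> bool:
    """California (SB 53): strictest tier reserved for frontier developers with >$500M annual revenue."""
    return training_flops > FLOP_THRESHOLD and annual_revenue_usd > 500_000_000

def covered_by_new_york_large_developer(training_flops: float, training_cost_usd: float) -> bool:
    """New York (RAISE Act): 'large developer' status keyed to >$100M spent on a single training run."""
    return training_flops > FLOP_THRESHOLD and training_cost_usd > 100_000_000

# A pre-revenue lab that spends heavily on one frontier-scale training run:
flops, revenue, cost = 2e26, 0, 150_000_000

print(covered_by_california_top_tier(flops, revenue))    # False -- outside California's strictest tier
print(covered_by_new_york_large_developer(flops, cost))  # True  -- inside New York's regulatory net
```

The point of the toy example is simply that the two filters key on different variables, so the same developer can land on opposite sides of the two regimes.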

New York further extends that net through a knowledge distillation clause. Under the RAISE Act, if an organization uses a model that meets the 10²⁶ FLOPs threshold to train a more efficient derivative model, the resulting system falls into scope if its own training cost exceeds $5 million. New York’s lawmakers appear to be trying to close what some see as a “shrinkage loophole”: the concern that developers could transfer frontier-level capabilities into models that are cheaper to train and easier to deploy, while escaping oversight tied to the original training run.
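
A rough sketch of how that distillation clause extends the scope test, under the same caveats: the thresholds are the ones described above, while the function and parameter names are hypothetical and the statute’s actual tests are more detailed.

```python
# Illustrative sketch of the distillation clause as described above; simplified
# logic and hypothetical names, not a statement of the statute's actual tests.

FLOP_THRESHOLD = 1e26
DISTILLATION_COST_THRESHOLD_USD = 5_000_000

def in_raise_act_scope(training_flops: float,
                       distilled_from_frontier_model: bool,
                       training_cost_usd: float) -> bool:
    """In scope if the model crosses the compute threshold itself, or if it was
    distilled from a frontier model and its own training cost exceeds $5 million."""
    if training_flops > FLOP_THRESHOLD:
        return True
    return distilled_from_frontier_model and training_cost_usd > DISTILLATION_COST_THRESHOLD_USD

# A small derivative model, far below 10**26 FLOPs, distilled for $8 million:
print(in_raise_act_scope(5e24, distilled_from_frontier_model=True, training_cost_usd=8_000_000))  # True
```

Note that the second branch ignores the model’s own compute entirely, which is exactly the tension described next.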

But pulling distilled models that fall well below the 10²⁶ FLOPs threshold into scope undercuts the logic of using the compute threshold in the first place. New York is effectively conceding that computational scale is a poor proxy for the risks it is trying to manage.

Supporters of the law will say this simply reflects reality. A 10²⁶ FLOPs threshold is a crude proxy for dangerous AI capabilities, since risk does not simply vanish when models become smaller, cheaper, or more efficient. On that view, regulation must be iterative: begin with imperfect stand-ins and adjust them as systems evolve, rather than wait for a flawless measure that will never arrive.

They are right that capability tiering is difficult and that any workable framework will have to evolve as AI systems change. But this is exactly why states cannot effectively regulate AI safety. If New York is right to include distilled models and add new thresholds, then California’s approach is, by definition, wrong. If California is right, then New York’s additions are unnecessary. Instead of the country moving systematically toward safeguards that actually work, each state locks its own provisional judgment into law at a particular moment in time, freezing it rather than improving it. The result is not just slower innovation, as companies are forced to navigate a patchwork of thresholds, tests, and definitions that never line up, but also weaker safety.

More quietly but equally consequentially, the laws diverge in their fundamental governing philosophies. Despite its flaws, California’s SB 53 is built around a “trust but verify” philosophy. It “trusts” developers to author their own internal safety plans, then “verifies” those claims by requiring companies to publish their frontier AI safety frameworks and submit high-level risk summaries to state agencies. California recognizes that public and market-facing pressures can manage risk without freezing innovation in place.

New York’s RAISE Act embodies the opposite philosophy, closer to “suspect and inspect.” Developers must maintain safety and security protocols, provide the Attorney General and state security agencies with unredacted versions of those materials on request, and submit to mandatory independent audits. The result is not just more paperwork, but a different relationship between developers and the state. Accountability shifts from being enforced through transparency and external pressure to being enforced through direct, ongoing scrutiny by government authorities.

The problem is that state agencies are not equipped to play that role. They are not set up to continuously evaluate fast-moving AI systems, replicate complex technical testing, or keep pace with rapidly evolving model architectures. That institutional mismatch is precisely why California lawmakers stripped out more intrusive oversight mechanisms after the governor vetoed SB 1047, a previous attempt at AI safety regulation. New York is now undoing that course correction. The outcome will be slower deployment, more defensive compliance, and greater regulatory drag, without clear evidence that safety outcomes improve as a result.

Governor Hochul says New York’s law moves the country closer to a unified approach to AI safety, but in practice, it does the opposite. By hard-coding thresholds, triggers, and oversight assumptions that differ from California’s, New York is not refining a shared national framework but fragmenting it further. Common language may create the appearance of alignment, but when states lock competing judgments into law, coherence becomes harder, not easier, to achieve.

