Why Objections to Federal Preemption of State AI Laws Are Wrong
With Congress once again considering federal preemption of state AI laws, opponents of the idea are raising all sorts of objections. Federal preemption may sound wonky, but the idea is simple: when it comes to general-purpose technologies, there shouldn’t be 50 different state rulebooks. AI is exactly such a technology.
For those who may be confused about federal preemption, here are 15 frequently asked questions. The answers show why the most common objections to it fall apart.
1. Why should the federal government preempt state AI laws?
Federal preemption isn’t new or exotic. Congress does this all the time for things that obviously require national consistency—aviation safety, food labeling, interstate trucking, and more. You don’t want your plane regulated differently depending on which state it happens to fly over, and you don’t want AI rules swinging wildly at every state border either. That’s not federal overreach—it’s common sense.
2. Doesn’t federal preemption undermine states’ rights?
No. The Constitution gives Congress the power to regulate interstate commerce—and delivering AI services over the Internet is interstate commerce by any measure, bringing it squarely within Congress’s authority. Preemption isn’t a federal takeover of city hall; it’s Congress doing the job the Commerce Clause explicitly assigns it.
3. Why shouldn’t states act as ‘laboratories of democracy’ for AI?
“Laboratories of democracy” is great for policy experiments in schools, housing, transportation, or public health. It is not great when the “experiment” is forcing every nationally distributed AI model to comply with 50 conflicting design rules. States should not experiment with policies that put the rest of the country at risk, such as dictating rules that undermine the development and use of AI in other states.
4. Why preempt states before Congress passes its own AI law? Congress isn’t doing anything anyway.
Federal preemption forces everyone to come to the table and have a real debate. Let’s be honest: the groups aggressively pushing state bills aren’t doing it because they think “states know best”—they’re forum-shopping for the friendliest legislature to pass the most extreme version of their agenda. Preemption puts an end to that workaround by moving the discussion to the national stage, where it belongs. Without preemption, there’s zero incentive to negotiate a reasonable compromise in Congress, because advocates can always skip the hard work and run to whichever state already agrees with them.
5. If the federal government won’t regulate, why shouldn’t states step in to protect residents?
Preemption doesn’t mean “no rules.” It means one set of rules, made at the federal level, where interstate technologies actually get handled. Congress has already taken action, such as the Take It Down Act, passed earlier this year. But even without additional AI-specific laws, many real harms are already addressed by existing federal and state laws.
6. Won’t preemption stop states from responding quickly to new AI risks?
The real danger isn’t moving too slowly—it’s panicking and passing laws that hurt the very people they claim to protect. The EU is already scrambling to unwind parts of its AI law before it fully lands because it went too far, too fast. Much of what’s being pitched as “AI harm” is already covered under existing discrimination, consumer protection, and employment laws. If a company discriminates, that’s illegal whether it makes those decisions using AI, Excel, or a dartboard. We don’t need a separate law for every tool in the tool chest.
7. If states already regulate consumer protection, civil rights, privacy, and safety, why shouldn’t they regulate AI too?
States can still enforce their broad protections against discrimination, fraud, and harm. What they shouldn’t do is pass product-specific rules dictating how technology itself must be built. For example, states have laws requiring bicyclists to wear helmets, but they don’t regulate the physical design of bicycles. AI should be no different. States can regulate uses—such as how schools or police use AI—but not development.
8. Won’t national standards just get watered down so states can’t set higher protections?
Maybe the federal standard ends up stronger, maybe not—but federal policymaking incorporates views on national security, economic competitiveness, and international leadership. States do not. A rule that seems “tough” from one state’s perspective may be disastrous for national competitiveness or for U.S. leadership relative to China. Congress has to balance these tradeoffs. States don’t.
9. Isn’t preemption just amnesty for AI companies?
No. Preemption doesn’t eliminate accountability. It eliminates duplicative or conflicting state accountability. Companies would still answer to federal regulators, federal law enforcement, courts, Congress, and the public. Opposing 50 different state rulebooks is not the same thing as “no consequences.”
10. Doesn’t preemption just help Big Tech avoid stronger state rules?
Sure, Big Tech benefits from clear national rules—but so do startups, nonprofits, universities, small and medium-sized businesses, and anyone else trying to build or deploy AI. State-level AI laws hurt everyone, not just Google or Meta. If anything, big companies are the ones best positioned to hire hundreds of lawyers and comply with 50 state regimes. It’s small and mid-sized innovators who get crushed.
11. Won’t preemption stop states from enforcing civil rights, consumer protection, and anti-discrimination laws when AI is involved?
Nope. Those laws still apply—just not AI-specific laws that create carve-outs where people get protections only if a robot wrongs them. Preemption pushes states to enforce broad, technology-neutral protections that apply whether the harm comes from a human, a spreadsheet, or an AI system.
12. Isn’t this really about stopping conservative states from passing anti-‘woke AI’ laws?
Not quite. Some state AI bills—Colorado’s being a good example—would require companies to follow rigid “fairness” or “differential treatment” mandates that inevitably push developers to over-correct for fear of complaints. Those rules don’t eliminate bias; they just encourage companies to censor anything controversial. If you already think Silicon Valley leans too heavily in one political direction, giving states the power to hard-code their preferred ideology into AI systems won’t make that problem better.
13. Isn’t this really about stopping liberal states from passing progressive AI laws?
Not exactly. The dynamic cuts both ways: Just as some liberal states want to require AI systems to reflect their values, some conservative states want to prohibit AI from touching topics they consider “woke.” That could easily lead to one state trying to dictate how chatbots talk about abortion, race, or LGBTQ issues for everyone. If you don’t want either side’s legislature deciding what AI is allowed to say nationwide, that’s precisely the argument for federal preemption—not against it.
14. Profit-hungry tech companies made social media that spreads misinformation, erodes privacy, and hurts youth—won’t they do the same with AI?
That narrative isn’t true, but even if it were, a patchwork of 50 different AI laws might chip away at some problems locally while doing nothing to meaningfully fix them nationally. Instead, it creates enormous compliance burdens for U.S. companies, even those based in other states, while doing very little to change the behavior of bad actors overseas or fly-by-night developers who can simply avoid U.S. jurisdictions entirely. Meanwhile, the companies that do try to comply end up stuck navigating a legal maze that only the biggest firms can handle.
The real tools for dealing with harmful products haven’t changed—broad privacy laws, consumer protection rules, civil rights enforcement, product liability, the FTC, and Congress. If you want to prevent “the next social media crisis,” the solution isn’t 50 competing AI rulebooks that bog down responsible developers; it’s enforcing the broad protections we already have and creating smart national policies that actually address real harms without kneecapping the entire ecosystem.
15. If states want to ban extreme or harmful AI products—like a teddy bear that talks to kids in sexually explicit ways—why shouldn’t they be allowed to?
Yes, some companies make awful products—but that’s why the United States already has tort law, the FTC, Congress, and the court of public opinion. Sexual exploitation, child endangerment, and obscene content are illegal everywhere, and federal preemption doesn’t weaken any of those protections. If someone is selling “pedo bears,” the answer isn’t 50 conflicting state AI design mandates—which won’t stop anyone from buying the toy in one state and carrying it into another anyway—it’s enforcing the laws we already have, holding bad actors accountable, and expecting a baseline level of responsible parenting.
Moving Forward
The bottom line is that AI is a general-purpose technology powering the U.S. economy. Letting 50 states write 50 conflicting rulebooks doesn’t protect Americans—it fragments markets, crushes innovation, and hands global leadership to China, which regulates AI under a single national framework rather than a tangle of conflicting rules that ties its own industry in knots.
Federal preemption isn’t anti-democratic or anti-consumer. It’s just the common-sense way to regulate a technology that spans state lines. If the United States is to lead in AI, it needs one clear, coherent national framework—not 50 incompatible ones.
