California’s AI Transparency Law Is a Misstep Other States Should Avoid
California often sets the tone for national policy, but the recently passed AI Transparency Act (SB 942) is an example of what not to do. However well intentioned its aim of addressing harmful AI-generated content, the law misunderstands both the mechanics of AI and how people respond to deceptive media. Instead of offering practical solutions, it imposes impractical burdens on users and providers, stifles innovation, and falls far short of its promises. The act is a cautionary tale for the rest of the country about legislation that neither protects society nor promotes progress.
The law requires providers of generative AI tools with over one million monthly users to offer three key services: (1) a free detection tool that lets users upload content, such as images, videos, or audio, to check whether it was generated or altered by the provider’s AI system; (2) an option to add a clear, visible, and unremovable label to AI-generated content, indicating that it was created or altered by an AI system; and (3) an automatic, hidden, non-removable label embedded in AI-generated content that includes the provider’s name, the AI system version, the content’s creation or modification date and time, and a unique identifier. Non-compliant companies face a $5,000 daily fine.
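To make the third requirement concrete, the following is a minimal sketch of the metadata such a latent disclosure would have to carry, using only the fields the law itself lists. The structure, field names, and the choice of JSON are illustrative assumptions; the statute does not prescribe a serialization format or an embedding method.

```python
import json
import uuid
from datetime import datetime, timezone

# Illustrative sketch only: SB 942 lists the fields a latent disclosure must
# carry, but it does not specify how they are formatted or embedded.
def build_latent_disclosure(provider_name: str, system_version: str) -> dict:
    """Assemble the metadata the law requires in the hidden, non-removable label."""
    return {
        "provider": provider_name,                              # provider's name
        "system_version": system_version,                       # AI system version
        "created_at": datetime.now(timezone.utc).isoformat(),   # creation/modification date and time
        "content_id": str(uuid.uuid4()),                        # unique identifier
    }

if __name__ == "__main__":
    disclosure = build_latent_disclosure("ExampleAI", "model-v2.1")
    # In practice this payload would be embedded invisibly in the media itself
    # (for example, via a watermarking scheme), not attached as plain JSON.
    print(json.dumps(disclosure, indent=2))
```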
The law’s biggest flaw is its reliance on watermarks: unique signals embedded in AI-generated content. Watermarks are vulnerable to manipulation and circumvention. For instance, cropping an AI-generated image can remove a visible watermark, while more sophisticated editing can erase even the most robust invisible watermarks. Unless watermarking technology advances significantly, mandating their use risks creating a gap between regulatory intentions and technical feasibility.
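To illustrate how easily a visible label disappears, here is a minimal sketch assuming the Pillow imaging library is available; the image, label text, and filename are invented for the example.

```python
from PIL import Image, ImageDraw

# Stand-in for an AI-generated image with a visible disclosure in the corner.
img = Image.new("RGB", (800, 600), color="lightgray")
draw = ImageDraw.Draw(img)
draw.text((10, 570), "AI-generated by ExampleAI", fill="black")  # visible label

# Cropping away the bottom strip discards the visible disclosure entirely.
cropped = img.crop((0, 0, 800, 560))
cropped.save("cropped_no_label.png")
```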
Even if watermarks someday become accurate, reliable, and universally applied, they won’t prevent many high-risk scenarios, because people are unlikely to check for them in moments of crisis. For instance, a common AI-driven threat is the voice-cloning scam, in which victims receive hyper-realistic calls or recordings from someone posing as a friend or family member, urging them to respond to an urgent situation. These scams succeed because people often act emotionally, not logically, in stressful moments. Even with widely available and user-friendly watermark detection tools, panic can override caution, making watermarks ineffective against some of the most serious risks.
Defenders of the law may argue that regulation has to start somewhere, but this legislation risks making the problem worse. Each provider’s detection tool recognizes only the watermarks of that provider’s own AI systems, so a negative result does not tell users whether content is AI-generated; it tells them only that it was not generated by that particular platform. If users upload suspicious content to a detection tool and it finds no watermark (because the content wasn’t generated by that platform), they may be falsely reassured. Indeed, users could get negative results from dozens of detection tools, not because the tools do not work but because the content wasn’t generated by any of those AI models, creating the perception that fake material is legitimate. Rather than reducing risks, the law could unintentionally amplify the harm it aims to prevent by reinforcing false claims of authenticity.
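A toy sketch of this false-reassurance dynamic, with entirely hypothetical providers and detector behavior (no real provider API is being modeled):

```python
from typing import Optional

# Hypothetical per-provider detectors: each one recognizes only the watermark
# embedded by its own models and knows nothing about anyone else's.
def make_detector(provider: str):
    def detect(content_watermark: Optional[str]) -> bool:
        return content_watermark == provider
    return detect

detectors = {name: make_detector(name) for name in ("ProviderA", "ProviderB", "ProviderC")}

# A deepfake produced by an out-of-scope or unregulated model carries no
# watermark that any of these detectors recognize.
deepfake_watermark = None

results = {name: detect(deepfake_watermark) for name, detect in detectors.items()}
print(results)  # {'ProviderA': False, 'ProviderB': False, 'ProviderC': False}

# Every check comes back negative, not because the content is authentic but
# because none of these providers generated it. A user who reads the results
# as "no AI involved" has been falsely reassured.
```

Each detector answers only the narrow question it was built for; none can answer the question the user is actually asking, which is whether the content is synthetic at all.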
Moreover, the law only applies to large AI providers, leaving smaller systems unregulated and creating loopholes that bad actors could exploit. Unregulated models could be used to produce content without watermarks, introducing significant gaps that weaken the law’s effectiveness. California’s jurisdiction also stops at its borders, putting California-based companies at a disadvantage due to compliance costs that out-of-state and foreign competitors can easily avoid. As a result, instead of curbing harmful AI practices, the law risks driving innovation elsewhere while failing to address the borderless nature of AI risks.
Finally, the law places an undue burden on AI startups, forcing them to develop and deploy watermark detection tools on a tight timeline: fewer than 16 months before the law takes effect in January 2026. Early-stage companies must now divert resources away from innovation that could benefit their customers and keep them competitive with foreign rivals, and toward compliance with ineffective regulations that ultimately stifle growth.
Rather than setting a responsible precedent for AI regulation, California has showcased the risks of poorly designed legislation. Policymakers should balance safety with technological progress, but SB 942 serves only as a reminder of how easily regulation can misfire. For other states watching, the takeaway is clear: California’s approach to regulating AI is not the way forward.