
What Senator Blackburn Gets Wrong About Google’s AI

November 14, 2025

Senator Marsha Blackburn (R-TN) recently published an op-ed in the New York Post repeating claims she made in a hearing that Google’s AI systems defamed her, showed political bias, and should therefore be shut down. But the facts do not support her argument, and several key details undercut the conclusion she draws.

Her central factual claim is accurate: One Google-released AI model answered “yes” to the question, “Has Marsha Blackburn been accused of rape?” even though no such accusation exists. Any public figure—especially a senator—would be rightfully angry to see an invented allegation like that.

But the issue is not what it appears to be at first glance.

The model that produced the false answer was not Gemini, Google’s flagship consumer-facing AI tool. It came from a different family of models called Gemma, which Google releases as open, downloadable tools for developers. The Gemma family includes multiple generations and several sizes because different applications need different levels of power: large models are more capable but require the most advanced chips, while small models deliver faster responses, run efficiently on phones and other low-cost or battery-powered hardware, and let developers build AI features without the expense of a large model. Those benefits make small models essential for innovation, even though they are not built for detailed factual accuracy.

Only the smallest version of the newest Gemma model—the one-billion-parameter Gemma 3—produced the false claim. None of the larger Gemma models, and none of the other Gemma generations, gave that answer.

This matters because general-purpose language models are not designed to act as factual reference tools. They generate fluent language, but without access to an external source of knowledge, they can easily produce confident but incorrect statements. The right comparison is not between this model and a search engine. It is between this model and an actor delivering medical advice on television: The performance may sound convincing, but it is not grounded in real expertise.

Sen. Blackburn also misstates the scale of the problem. She says the incident reflects a “catastrophic failure of oversight” for a model “downloaded by more than 200 million people.” But that figure counts every download of every Gemma model across all versions and sizes. It does not mean that the one-billion-parameter model is being used by hundreds of millions of people, nor that the average Internet user interacts with these models directly. Developers download these models to build tools like product chatbots or customer service assistants—applications that do not require political knowledge.

Her claim of political bias also does not hold up. When asked the same question about multiple Democratic senators, this small model produced nearly identical responses. The answers follow a formula because the model is trying to generalize without reliable factual grounding. There is no evidence of partisan targeting; this is simply what a small model does when asked a question it is not capable of answering.

Her call for Google to “shut your AI models down completely until you can control them” is therefore a drastic overreaction. It would slow American innovation while doing nothing to meaningfully protect the public.

AI systems will continue to make mistakes, and policymakers are right to examine errors that cause harm. But this case is not evidence of political manipulation, ideological skew, or corporate negligence. It is an example of a lightweight developer tool being pushed far outside its intended use.

To regulate AI effectively, lawmakers need a clearer understanding of how different types of models work and what their limitations are. Conflating the behavior of a small, open model with that of a fine-tuned, consumer-facing system like Gemini does not provide the kind of oversight this technology requires.

Policymakers should remain vigilant about real risks and continue asking tough questions, but doing so requires an accurate picture of how the technology works, how developers use these models, and the safeguards already built into systems designed for the public.

Image credit: Gage Skidmore/Flickr
