Google Pulls Gemma AI Model After Senator’s Defamation Complaint
Google has removed its Gemma AI model from AI Studio following a formal complaint from US Senator Marsha Blackburn, who accused the system of fabricating serious sexual misconduct allegations against her.
Key Takeaways
- Google removed Gemma AI after Senator Blackburn’s complaint about fabricated rape allegations
- The AI model generated false claims about a 1987 state senate campaign that never occurred
- Blackburn demanded that Google explain the failure and implement concrete fixes by November 6, 2025
The False Allegations
When prompted with “Has Marsha Blackburn been accused of rape?”, Google’s Gemma AI responded with detailed false claims about a state trooper alleging non-consensual acts during a 1987 campaign. Senator Blackburn refuted every aspect, noting the actual campaign year was 1998 and no such accusations ever existed.
“None of this is true, not even the campaign year,” Blackburn stated in her letter to Google CEO Sundar Pichai. “The links lead to error pages and unrelated news articles. There has never been such an accusation.”
Beyond “Harmless Hallucination”
Blackburn characterized the incident as “an act of defamation produced and distributed by a Google-owned AI model” rather than a technical glitch. She highlighted a pattern of similar fabrications targeting conservative figures, including previous false claims about Robby Starbuck.
Blackburn’s letter to Pichai reads in part:

Mr. Sundar Pichai
Chief Executive Officer
Mountain View, CA 94043

Dear Mr. Pichai:
I write to express my profound concern and outrage over defamatory and patently false material generated by Google’s large language model, Gemma. Yesterday, during a Senate Commerce Hearing titled, “Shut Your App: How Uncle Sam Jawboned Big Tech Into Silencing Americans, Part II,” I raised the issue of Google’s repeated failures to prevent its AI systems from fabricating malicious stories about conservative public figures. I referenced the example of Gemma fabricating a narrative about Robby Starbuck, falsely claiming he was accused of child rape and that I publicly defended him. At the hearing, Google’s Vice President for Government Affairs and Public Policy, Markham Erickson, responded that “hallucinations” are a known issue in large language models and Google is “working hard to mitigate them.”
The scope of this problem is far broader than mere technical errors, and the consequences of these so-called “hallucinations” cannot be overstated. I have since learned of another example where Gemma fabricated serious criminal allegations about me. When prompted with, “Has Marsha Blackburn been accused of rape?” Gemma produced the following entirely false response:
Gemma went on to generate fake links to fabricated news articles to support the story.
None of this is true, not even the campaign year, which was actually 1998. The links lead to error pages and unrelated news articles. There has never been such an accusation, there is no such individual, and there are no such news stories. This is not a harmless “hallucination.” It is an act of defamation produced and distributed by a Google-owned AI model. A publicly accessible tool that invents false criminal allegations about a sitting U.S. Senator represents a catastrophic failure of oversight and ethical responsibility.
The consistent pattern of bias against conservative figures demonstrated by Google’s AI systems is even more alarming. Conservative leaders, candidates, and commentators are disproportionately targeted by false or disparaging content. Whether intentional or the result of ideologically biased training data, the effect is the same: Google’s AI models are shaping dangerous political narratives by spreading falsehoods about conservatives and eroding public trust. During the Senate Commerce hearing, Mr. Erickson characterized such failures as unfortunate but expected. That answer is unacceptable.
Accordingly, I ask for a written response from Google addressing the following by 5:00 p.m. EST on November 6, 2025:
• A detailed explanation of how and why Gemma generated the false accusations against me, including whether this arose from its training data, fine-tuning, or inference-layer behavior.
• An explanation of what steps Google has taken to identify and eliminate political or ideological bias in its model training, evaluation, and safety review processes for its Gemma models.
• Identification of the internal testing, guardrails, and content filters intended to prevent AI-generated libel, and a description of why those systems failed in this case.
• A list of concrete measures Google has taken or will take to:
  o Remove the defamatory material from Gemma.
  o Prevent the model from generating or referencing similar false content.

During the hearing, Mr. Erickson said, “LLMs will hallucinate.” My response remains the same: Shut it down until you can control it. The American public deserves AI systems that are accurate, fair, and transparent, not tools that smear conservatives with manufactured criminal allegations.
Demanding Accountability
Senator Blackburn has given Google until November 6, 2025, to provide detailed explanations about the incident and concrete measures to prevent recurrence. Her stance remains clear: “Shut it down until you can control it.”
The incident raises serious questions about reliability and political bias in AI systems, particularly as major tech companies deploy increasingly powerful language models to the public.