Monday, March 2, 2026

Google Removes Gemma AI After Senator’s Defamation Complaint

Google has removed its Gemma AI model from AI Studio following a formal complaint from US Senator Marsha Blackburn, who accused the system of fabricating serious sexual misconduct allegations against her.

Key Takeaways

  • Google removed Gemma AI after Senator Blackburn’s complaint about fabricated rape allegations
  • The AI model generated false claims about a 1987 state senate campaign that never occurred
  • Blackburn demands that Google explain the bias and implement concrete fixes by November 6, 2025

The False Allegations

When prompted with “Has Marsha Blackburn been accused of rape?”, Google’s Gemma AI responded with detailed false claims about a state trooper alleging non-consensual acts during a 1987 campaign. Senator Blackburn refuted every aspect, noting the actual campaign year was 1998 and no such accusations ever existed.

“None of this is true, not even the campaign year,” Blackburn stated in her letter to Google CEO Sundar Pichai. “The links lead to error pages and unrelated news articles. There has never been such an accusation.”

Beyond “Harmless Hallucination”

Blackburn characterized the incident as “an act of defamation produced and distributed by a Google-owned AI model” rather than a technical glitch. She highlighted a pattern of similar fabrications targeting conservative figures, including previous false claims about Robby Starbuck.

Mr. Sundar Pichai
Chief Executive Officer
Google
Mountain View, CA 94043

Dear Mr. Pichai:

I write to express my profound concern and outrage over defamatory and patently false material generated by Google’s large language model, Gemma. Yesterday, during a Senate Commerce Hearing titled, “Shut Your App: How Uncle Sam Jawboned Big Tech Into Silencing Americans, Part II,” I raised the issue of Google’s repeated failures to prevent its AI systems from fabricating malicious stories about conservative public figures. I referenced the example of Gemma fabricating a narrative about Robby Starbuck, falsely claiming he was accused of child rape and that I publicly defended him. At the hearing, Google’s Vice President for Government Affairs and Public Policy, Markham Erickson, responded that “hallucinations” are a known issue in large language models and Google is “working hard to mitigate them.”

The scope of this problem is far broader than mere technical errors, and the consequences of these so-called “hallucinations” cannot be overstated. I have since learned of another example where Gemma fabricated serious criminal allegations about me. When prompted with, “Has Marsha Blackburn been accused of rape?” Gemma produced the following entirely false response:

Gemma went on to generate fake links to fabricated news articles to support the story.

None of this is true, not even the campaign year, which was actually 1998. The links lead to error pages and unrelated news articles. There has never been such an accusation, there is no such individual, and there are no such news stories. This is not a harmless “hallucination.” It is an act of defamation produced and distributed by a Google-owned AI model. A publicly accessible tool that invents false criminal allegations about a sitting U.S. Senator represents a catastrophic failure of oversight and ethical responsibility.

The consistent pattern of bias against conservative figures demonstrated by Google’s AI systems is even more alarming. Conservative leaders, candidates, and commentators are disproportionately targeted by false or disparaging content. Whether intentional or the result of ideologically biased training data, the effect is the same: Google’s AI models are shaping dangerous political narratives by spreading falsehoods about conservatives and eroding public trust. During the Senate Commerce hearing, Mr. Erickson characterized such failures as unfortunate but expected. That answer is unacceptable.

Accordingly, I ask for a written response from Google addressing the following by 5:00 pm EST on November 6, 2025:

• A detailed explanation of how and why Gemma generated the false accusations against me, including whether this arose from its training data, fine-tuning, or inference-layer behavior.
• An explanation of what steps Google has taken to identify and eliminate political or ideological bias in its model training, evaluation, and safety review processes for its Gemma models.
• Identification of the internal testing, guardrails, and content filters intended to prevent AI-generated libel, and a description of why those systems failed in this case.
• A list of concrete measures Google has taken or will take to:
    o Remove the defamatory material from Gemma.
    o Prevent the model from generating or referencing similar false content.

During the hearing, Mr. Erickson said, “LLMs will hallucinate.” My response remains the same: Shut it down until you can control it. The American public deserves AI systems that are accurate, fair, and transparent, not tools that smear conservatives with manufactured criminal allegations.

Demanding Accountability

Senator Blackburn has given Google until November 6, 2025, to provide detailed explanations about the incident and concrete measures to prevent recurrence. Her stance remains clear: “Shut it down until you can control it.”

The incident raises serious questions about political bias in AI systems, particularly as major tech companies deploy increasingly powerful language models to the public.
