Google is facing fresh internal criticism after one of its own AI researchers publicly said he felt “ashamed” of the company over a newly reported Pentagon contract involving classified work. The controversy has once again raised questions about how far big tech companies should go in supplying artificial intelligence tools to the military, especially when many employees remain uneasy about the risks.
The criticism came from Andreas Kirsch, a research scientist at Google DeepMind, who reacted strongly after reports claimed Google had expanded its relationship with the US Department of Defense. In a post on X, Kirsch said he was “speechless” and called the move shameful.
“I’m speechless at Google signing a deal to use our AI models for classified tasks. Frankly, it is shameful,” he wrote.
His remarks have drawn attention because they come from inside Google’s AI division, one of the most important teams working on advanced AI. Kirsch later told Business Insider that he had gone to sleep hoping an internal employee letter would make leaders reconsider the decision, but woke up to news that the contract had moved ahead.
“When I went to bed yesterday, I was hopeful that the employee letter would have an effect and give us pause to consider,” he said. “This morning I woke up to a worst-case version of the contract being signed by Google in the meantime.”
Hundreds of Google employees had opposed the deal
According to reports, more than 600 Google employees had signed a letter asking CEO Sundar Pichai not to allow the Pentagon to use the company’s AI in classified operations. Staff concerns reportedly focused on the possibility of AI being used in lethal autonomous weapons, surveillance systems, or other military decision-making where mistakes could have serious consequences.
In the letter, employees said people building AI systems have a responsibility to speak up when those tools may be used in unethical ways.
“As people working on AI, we know that these systems can centralise power and that they do make mistakes,” the letter said.
Some employees were reportedly disappointed by the decision, while others said they were not surprised. A major concern among staff is whether Google would retain any real control over how its technology is used once it enters classified government environments.
Google says AI will support logistics and cybersecurity
Google has defended the agreement, saying the classified work is an amendment to an existing Pentagon contract. The company previously signed a deal late last year for unclassified use of its AI systems.
A Google spokesperson told Business Insider that the company supports both classified and non-classified government projects in areas such as logistics, cybersecurity, diplomatic translation, fleet maintenance, and protecting critical infrastructure.
The company also said it remains opposed to using AI for domestic mass surveillance or for autonomous weapons that operate without proper human oversight.
Still, critics inside the company remain unconvinced. Kirsch argued that recent changes to Google’s AI principles had removed stronger earlier commitments, including promises not to use AI for weapons or surveillance. He said the company now appears willing to permit any lawful use, which he warned could include controversial military applications.
“I personally feel incredibly ashamed right now to be a Senior Research Scientist at Google DeepMind,” he said.
Google is not alone. In recent years, companies such as OpenAI, Anthropic, xAI, Anduril, and Palantir have all increased ties with defense agencies. As AI becomes more powerful, governments are turning to tech firms for tools related to intelligence, security, and battlefield support.
But the backlash at Google shows that not everyone inside Silicon Valley agrees with that direction. The issue for workers is no longer whether AI can be used in defense, but whether companies are moving too quickly without clear rules or strong ethical safeguards.