The US government used AI tools from Anthropic during the air attack launched on Iran just hours after declaring that it would stop using technology from the AI startup. According to a report by The Wall Street Journal, commands around the world, including U.S. Central Command in the Middle East, used Anthropic’s Claude AI during the Iran attack.
Reportedly, the commands used Anthropic’s AI for intelligence assessments, target identification and simulating battle scenarios. Prior to the Iran attack, another WSJ report had revealed that Anthropic’s AI was also used by the Pentagon during the capture of Venezuelan president Nicolás Maduro.
The report noted that the use of Claude in high-profile missions is among the reasons the US administration said it would take six months to phase out the AI startup’s technology.
In a Truth Social post about ending the deal with Anthropic, US President Donald Trump called the company ‘leftwing nut jobs’ and ‘woke’ while claiming that ‘their selfishness is putting AMERICAN LIVES at risk, our Troops in danger, and our National Security in JEOPARDY.’
Trump had directed all federal agencies in the US to ‘immediately cease’ using Anthropic technology.
“We don’t need it, we don’t want it, and will not do business with them again! There will be a six-month phase-out period for agencies like the Department of War who are using Anthropic’s products at various levels,” he wrote.
The US and Anthropic’s feud over AI safety:
The Pentagon and Anthropic had been arguing for months over how the company’s AI models are used in national defence. The AI startup said it had allowed the US DoD to use Anthropic technology for all purposes with two exceptions: mass domestic surveillance of Americans and fully autonomous weapons.
Anthropic has also challenged the US government’s designation of the company as a ‘supply chain risk’ and said it will contest it in court.
“Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government’s classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so,” the company wrote in a blogpost.
“We believe this designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government,” it added.