In a move that echoes his signature style of turning policy disputes into public battles, US President Donald Trump has once again ignited a very visible showdown – this time with Anthropic, an American AI firm he has effectively cast as “unpatriotic.”
On Friday, the Trump administration directed federal agencies to halt use of Anthropic’s AI tools and moved to impose additional penalties. The clash centres on whether the military should have unrestricted access to Anthropic’s systems and whether those systems can be used for mass surveillance or fully autonomous weapons. Anthropic has refused to back down.
What triggered Trump vs Anthropic
The dispute escalated after Anthropic sought what it described as “narrow assurances” from the Pentagon. The company wanted guarantees that its AI chatbot Claude would not be used for mass surveillance of Americans or in fully autonomous weapons systems.
The Pentagon responded that it was not interested in such uses and would only deploy the technology in legal ways. However, it also insisted on access “without any limitations.”
Anthropic said new contract language would allow “safeguards to be disregarded at will.” CEO Dario Amodei said the company “cannot in good conscience accede” to the demands, reported Associated Press.
This did not sit well with Trump, who took to Truth Social and wrote in all-caps: “The United States of America will never allow a radical left, woke company to dictate how our great military fights and wins wars!” He added that Anthropic had made a mistake in trying to strong-arm the Pentagon.
Anthropic draws red line
Responding to the administration’s move, Anthropic released a strongly worded statement on Friday local time.
“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court,” the AI company’s statement read.
It further explained why it was drawing a red line. “First, we do not believe that today’s frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America’s warfighters and civilians.”
“Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights,” Anthropic added.
It also pushed back against the “supply chain risk” label being floated by the administration.
“Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company.”
Calling such a move both legally and historically problematic, the company warned that the “designation would both be legally unsound and set a dangerous precedent for any American company that negotiates with the government.”
Trump and Hegseth lash out
In a further Truth Social post, Trump said the government no longer needed the company’s services.
“We don’t need it, we don’t want it, and will not do business with them again!”
He ordered most agencies to immediately stop using Anthropic’s AI, though the Pentagon has been given six months to phase out technology already embedded in military systems.
He also warned the company to “better get their act together, and be helpful” during the phase-out period or face “major civil and criminal consequences to follow.”
Defense secretary Pete Hegseth went further, calling Anthropic a “supply chain risk.”
Hegseth said the Pentagon “must have full, unrestricted access to Anthropic’s models for every LAWFUL purpose in defense of the Republic.”
Top Pentagon spokesman Sean Parnell also weighed in, saying Anthropic’s stance was “jeopardizing critical military operations and potentially putting our warfighters at risk.”
Dispute stuns Silicon Valley: Musk, Altman react
Executives, venture capitalists and AI researchers — including employees from rivals — voiced support for Amodei’s position. The debate has spilled into open letters and public forums.
Elon Musk sided with the administration, writing on X that “Anthropic hates Western Civilization.” His company’s chatbot, Grok, is expected to gain access to classified military networks — a move that could benefit from Anthropic’s exit.
In contrast, Sam Altman, CEO of OpenAI and a former colleague of Amodei, publicly supported Anthropic’s red lines.
“For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety,” Altman told CNBC. He also questioned what he described as the Pentagon’s “threatening” approach.
However, the dispute appears to have opened the door for a rival: OpenAI has reached a deal to deploy its AI models within the classified network of the US Department of War, Altman said in a post on Saturday.