Palantir CEO Alex Karp has a message for Anthropic and everyone supporting Dario Amodei, the Claude maker's CEO. In an interview with Fortune, Karp said he wants to make one thing clear: the Defense Department is not using AI for domestic mass surveillance of American citizens and, to his knowledge, has no plans to. Palantir is one of America's biggest defense software companies. The Denver-based data analytics and artificial intelligence company is a key software provider for the Department of Defense, commonly known as the Pentagon, and its software is the main channel through which the Department has been using Anthropic's large language model, Claude.
Speaking on the sidelines of the company’s twice-a-year AIP conference recently, Palantir CEO Alex Karp said, “We are legitimately still in the middle of all this.”
He added, "It's our stack that runs the LLMs." Anthropic partnered with Palantir in 2024 to offer its AI technology to the DoD via Palantir. Anthropic also began working directly with the DoD last year to create a version of its technology designed for the Defense Department.
Siding with the Pentagon in its dispute with Anthropic, Karp said, "Without commenting on internal dialogs, there was never a sense that these products would be used domestically."
"The Department of War is not planning to use these products domestically. That's a completely different kettle of fish… The terms the Department of War wants are completely focused on non-American citizens in a war context."
Anthropic vs Pentagon: Who said what
For more than a year, Anthropic's Claude has been the AI model of choice for the US government, and reportedly the first frontier system cleared for classified use. In January, Anthropic's AI tools were said to have been used in the capture of Venezuelan President Nicolás Maduro. However, in the past few weeks, the relationship between Anthropic and the Pentagon has soured. So much so that on February 27, the Trump Administration announced that it would designate the company a supply-chain risk to national security. This is the first time the US government is known to have declared an American company a national-security risk.
The dispute reportedly started over guardrails that Anthropic sought to impose on the military's use of Claude, the only AI model authorized for use on classified networks. The company sought assurances from the Pentagon that Claude would not be used for mass surveillance of U.S. citizens or to power lethal autonomous weapons. The Pentagon insisted that Claude be available for "all lawful use." In a blog post on its website, Anthropic gave two reasons for the breakdown in talks: "We held to our exceptions for two reasons. First, we do not believe that today's frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America's warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights."
Anthropic's 'sorry' and lawsuit against the American government
Almost a week later, Anthropic apologised to the government over a leaked memo in which it had slammed ChatGPT maker OpenAI and its CEO for accepting a Pentagon deal just hours after Anthropic rejected it. "I also want to apologize directly for a post internal to the company that was leaked to the press yesterday. Anthropic did not leak this post nor direct anyone else to do so—it is not in our interest to escalate this situation. That particular post was written within a few hours of the President's Truth Social post announcing Anthropic would be removed from all federal systems, the Secretary of War's X post announcing the supply chain risk designation, and the announcement of a deal between the Pentagon and OpenAI, which even OpenAI later characterized as confusing. It was a difficult day for the company, and I apologize for the tone of the post. It does not reflect my careful or considered views. It was also written six days ago, and is an out-of-date assessment of the current situation."
Then, on Monday, March 9, Anthropic took the US government to court. The company sued the Defense Department and other federal agencies over the Trump administration's move to designate it a supply chain risk and eliminate its use across the government. In a 48-page lawsuit filed in the U.S. District Court for the Northern District of California, Anthropic said efforts by the Pentagon and President Trump to punish the company were "unprecedented and unlawful." The filing said, "The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech. No federal statute authorizes the actions taken here. Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive's unlawful campaign of retaliation."