Claude AI, the flagship large language model from Anthropic, is currently “not working as intended.” The company confirmed that the outage stems from its login/logout pathways.
In an official status update on their website, Anthropic stated, “We have identified that the Claude API is working as intended. The issues we are seeing are related to Claude.ai and with the login/logout paths.”
This status update was issued at 12:21 UTC, approximately 6 p.m. IST on Monday. The company has been investigating the outage since 11:49 UTC (5:19 p.m. IST).
Claude, best known for its applications in the white-collar and IT sectors, also appears to be used in military operations; the most recent reported instance was its deployment in US operations in Iran on February 28.
U.S. government suspends use of Anthropic and increases penalties
On Friday, the Trump administration mandated that all U.S. government agencies cease utilizing Anthropic’s AI systems and imposed further penalties, escalating a conflict with the company regarding access and safety. President Donald Trump, Defense Secretary Pete Hegseth, and other officials criticized Anthropic on social media for failing to provide the military with unrestricted access by the specified Friday deadline.
“We don’t need it, we don’t want it, and will not do business with them again!” Trump stated, while Hegseth dubbed the firm a “supply chain risk,” terminology that could impact Anthropic’s collaborations.
Anthropic announced its intention to challenge the government’s action in court, describing it as legally unsound and saying such a designation had “never before publicly applied to an American company.” The company had requested limited assurances from the Pentagon that Claude would not be used for mass surveillance of Americans or in fully autonomous weapon systems.
The Pentagon replied that it had no plans to employ the technology in such manners and would use it strictly within legal parameters, yet it insisted on the necessity of unrestricted access.
“No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court,” Anthropic said in a separate statement.