By Akash Sriram
March 9 (Reuters) – Anthropic sued the U.S. government on Monday, escalating a dispute the AI company frames as retaliation for refusing to remove safety limits on its Claude model.
The Amazon-backed company said it was willing to work with the military, just not on any terms.
It has also filed a related case in the U.S. Court of Appeals for the D.C. Circuit challenging a separate legal authority the government invoked.
The following account is based on allegations made by Anthropic in its lawsuit.
WHAT ANTHROPIC SAYS THE DISPUTE IS ABOUT
Anthropic said it spent years building Claude into the government’s most widely deployed frontier AI model, including on classified military networks, developing a specialized “Claude Gov” version and loosening many of its standard restrictions to accommodate national security work.
The conflict began in the fall of 2025 during negotiations over the Pentagon’s GenAI.mil platform, when the Department of Defense demanded Anthropic abandon its usage policy entirely and allow Claude to be used for, in the government’s words, “all lawful uses”.
Anthropic said it largely agreed, except on two points it considered non-negotiable: it would not allow Claude to be used for lethal autonomous warfare without human oversight or for mass surveillance of Americans.
The company said Claude has not been tested for those uses and cannot perform them safely. It said it also offered to help transition the work to another provider if no agreement could be reached.
Pentagon officials have offered a different account of how the dispute began. The department’s chief technology officer said publicly that tensions escalated after a U.S. raid in Venezuela, when an Anthropic executive called a counterpart at Palantir to ask whether Claude had been used in the operation.
That account does not appear in Anthropic’s complaint.
FROM ULTIMATUM TO ALL-OUT BAN
Secretary of Defense Pete Hegseth met with Anthropic CEO Dario Amodei on February 24, presenting an ultimatum: comply within four days or face one of two punishments – compulsion under the Defense Production Act, or expulsion from the defense supply chain as a “national security risk”.
Amodei publicly rejected the demand on February 26. The next day, before the 5:01 p.m. Eastern deadline had expired, President Donald Trump posted a directive on Truth Social ordering every federal agency to immediately cease all use of Anthropic’s technology.
In the social media post, the president characterized Anthropic as a “RADICAL LEFT, WOKE COMPANY”.