Anthropic wins first round in lawsuit against government
- Marijan Hassan - Tech Journalist
In a stinging legal rebuke to the White House, a federal judge has temporarily blocked the Trump administration from labeling AI firm Anthropic a "supply chain risk." The ruling marks an early but significant victory for the company in its high-stakes battle with the Pentagon over the ethical boundaries of artificial intelligence in warfare.

U.S. District Judge Rita Lin issued the preliminary injunction on March 26, 2026, halting a series of punitive measures that Anthropic’s legal team characterized as "attempted corporate murder."
The judge’s order prevents the government from enforcing a presidential directive that demanded federal agencies "immediately cease" using Anthropic’s Claude AI and prohibits the Pentagon from blacklisting the firm as a security threat.
Retaliation over "red lines"
The conflict began in February when Defense Secretary Pete Hegseth issued an ultimatum to Anthropic CEO Dario Amodei: provide the Department of War (formerly the Department of Defense) with unrestricted use of its AI models or face being cut off from federal work.
Anthropic refused, holding firm on two "red lines":
- Autonomous weapons: a ban on using Claude to power lethal weapons that can strike without human intervention.
- Domestic surveillance: a prohibition on using its technology for mass surveillance of American citizens.
In her 48-page ruling, Judge Lin noted that the government’s subsequent "supply chain risk" designation, a label usually reserved for foreign adversaries such as China or Russia, appeared to be an act of First Amendment retaliation rather than a genuine national security measure.
"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," Judge Lin wrote. "The record supports an inference that Anthropic is being punished for bringing public scrutiny to the government's contracting position."
Economic and operational fallout
The administration had argued that Anthropic’s refusal to allow "all lawful use" of its tools made the company an unreliable partner. Following the breakdown in talks, the Pentagon swiftly signed a new deal with OpenAI, while President Trump publicly labeled Anthropic a "radical left, woke company."
Anthropic’s lawsuit alleged that the "supply chain risk" label would not only cost the company billions in lost government contracts but also jeopardize its private-sector business by forcing defense contractors such as Palantir and Amazon to certify that they weren't using Claude.
What’s next?
The judge has stayed her order for seven days to give the government time to appeal to the 9th Circuit. While the injunction doesn't force the Pentagon to continue using Anthropic's products, it removes the legal and reputational stigma that would have effectively barred the company from the wider U.S. economy.
The case now moves toward a full trial, which will likely serve as a landmark test for whether the U.S. government can legally compel AI developers to remove safety guardrails in the name of national security.