Anthropic, an artificial intelligence company, has won an early victory in its legal dispute with the Trump administration, after a federal judge issued an injunction against the government’s order labelling the firm a “supply chain risk.”
In February, US President Donald Trump and Defense Secretary Pete Hegseth announced that the Department of Defense would cut ties with Anthropic after the company refused to grant unrestricted military access to its Claude AI model. Anthropic's usage restrictions barred the model from being deployed in lethal autonomous weapons without human oversight and prohibited its use for mass surveillance of Americans. Following the refusal, the government designated Anthropic a “supply chain risk to national security” and ordered federal agencies to stop using Claude.
Judge Rita F. Lin of the Northern District of California criticized the government’s actions during the hearing, calling them an apparent attempt “to cripple Anthropic” and “chill public debate” because the company raised concerns over military use of its technology. Lin described the measures as “classic First Amendment retaliation,” noting that they appeared arbitrary and capricious.
“Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the US for expressing disagreement with the government,” Judge Lin wrote in her ruling. She also highlighted that the rare military authority invoked by Hegseth had historically been applied only to foreign adversaries, raising questions about the propriety of its use against a domestic company.
Anthropic filed two separate lawsuits against the administration. One challenges the designation as a supply chain risk, while the second alleges a violation of the company’s First Amendment right to free speech. The injunction ensures that Anthropic’s technology can continue to be used by the government and by private contractors working with the Department of Defense while litigation proceeds.
Legal analysts say the ruling underscores tensions between government oversight of AI technologies and the rights of private companies to limit the use of their products in military applications. “This case highlights the balance between national security concerns and protecting the free expression of companies developing advanced technologies,” said one technology law expert.
Anthropic has not commented publicly on the ruling. Euronews Next has reached out to the company for a statement.
The case is expected to proceed over the coming months, and its outcome may set a precedent for how far the US government can go in regulating AI companies, as well as for the limits of executive power in national security contexts.