Anthropic Sues Pentagon Over AI Risk Label

Anthropic, the leading AI startup behind Claude, is suing the Pentagon after being designated a risk to the national security supply chain. It is the first time a U.S. company has received this designation, which Anthropic contends was retaliation for its refusal to lift AI safety restrictions on autonomous weapons and mass surveillance.

Tensions escalated when Defense Secretary Pete Hegseth demanded that Anthropic remove the usage restrictions on its Claude AI model so it could be deployed for “all lawful use,” including potentially lethal autonomous systems operating without human oversight. CEO Dario Amodei refused, saying the demand conflicted with the company’s core mission, and contract negotiations collapsed. The Pentagon then placed Anthropic on a blacklist that bars contractors such as Amazon from using its technology, a move executives say could cost billions of dollars in sales.

Anthropic filed two federal lawsuits, one in the U.S. District Court in San Francisco and the other in the appeals court in D.C., seeking to have the risk designation vacated as an unlawful act of ideological punishment. The 48-page complaint argues that the move violates the company’s First Amendment rights and misapplies authorities intended for foreign adversaries. Company leaders say the designation is already disrupting business deals amid the Trump administration’s push for military AI unbound by safety limits.