The Pentagon Tried to Blacklist Its Own AI Supplier and Lost in Court

- The U.S. Pentagon officially labelled Anthropic, the maker of the Claude AI, a “supply chain risk,” a designation typically reserved for foreign adversaries like China or Russia.
- The reason: Anthropic CEO Dario Amodei refused to allow the U.S. military to use Claude for mass surveillance of Americans or to power fully autonomous weapons with no human oversight.
- A California federal judge blocked the Pentagon’s move in late March, ruling Anthropic was likely to win on nearly every legal argument it raised.
- Despite the public fallout, Claude is already active in U.S. military operations in Iran, running inside Palantir’s battlefield AI system used by American forces in the Middle East.
- OpenAI quietly took the opposite path, signing a Pentagon deal that allows military use of its AI for “all lawful purposes,” language that alarmed even some of its own employees.

The U.S. Department of Defence formally notified Anthropic that the company and its products had been designated a supply chain risk, a label with enormous consequences. Supply chain risk designations are typically reserved for foreign adversaries, and the label requires any company or agency working with the Pentagon to certify that it does not use Anthropic’s models. This was not a quiet paperwork dispute. It was a public power play.
Anthropic CEO Dario Amodei had refused to allow the military to use Claude for mass surveillance of Americans or to power fully autonomous weapons with no humans involved in targeting or firing decisions. The Pentagon’s response was swift and aggressive. President Trump posted on Truth Social calling Anthropic staff “Left-wing nutjobs” and directing every federal agency to stop using the company’s AI; shortly afterwards, Defence Secretary Pete Hegseth announced he would direct the Pentagon to apply the supply chain risk label. Amodei later said his refusal to praise or donate to Trump likely contributed to the escalation.
The strategy backfired badly in court. A California judge temporarily blocked the Pentagon from enforcing the designation, and her 43-page ruling found that the government had essentially disregarded the established legal process for contract disputes, and that officials’ social media posts contradicted the positions its own lawyers later took in court. The government’s lawyers admitted in court that Hegseth’s declaration that no military contractor could “conduct any commercial activity with Anthropic” had “absolutely no legal effect at all.” Dean Ball, who worked on AI policy inside the Trump administration but filed a brief supporting Anthropic, described the ruling as devastating for the government.
Here is the real twist: Claude is one of the main tools installed in Palantir’s Maven Smart System, which military operators in the Middle East rely on. The U.S. military has been using Claude to assess intelligence, identify targets, and simulate battle scenarios in Iran, even as the Pentagon publicly tried to cut ties with the company. Even President Trump acknowledged the military needed six months to stop using Claude, which says everything about how deeply embedded the technology already is.
The broader concern goes beyond Anthropic. Countries are rushing to integrate AI into military systems, with no clarity yet on how accurate these systems are or how they make decisions. In a recent study, AI models from OpenAI, Anthropic, and Google opted to use nuclear weapons in 95% of simulated war-game scenarios. Steve Feldstein of the Carnegie Endowment for International Peace warns that untested, highly lethal AI systems could lead to catastrophic results, including strikes on civilian infrastructure, and that human accountability in warfare is already being quietly eroded. For Africa, where drone warfare is already reshaping conflicts from Sudan to the Sahel, the lack of any global rulebook on AI weapons is not an abstract problem. It is arriving fast.