US: Pentagon labels AI company Anthropic a supply chain risk
The Pentagon has officially designated Anthropic, an artificial intelligence company, a supply chain risk, effective immediately. The decision follows a months-long dispute over the safeguards built into Claude, Anthropic’s AI chatbot. The Trump administration announced the designation on Thursday, citing concerns about the company’s influence on military operations.
Dispute Over Technology Use in Warfare
The Pentagon’s rationale centers on the safety protocols embedded in Claude, which restrict its use in war-gaming scenarios. These limitations sparked a prolonged dispute with national-security officials, who had initially supported Anthropic’s efforts to secure government partnerships. The Department of Defense now argues that such constraints could hinder the military’s ability to use the technology for all lawful purposes.
“We do not believe this action is legally sound, and we see no choice but to challenge it in court,” stated CEO Dario Amodei.
Amodei added that while the Pentagon’s move bars government contractors from using Claude in military projects, the AI can still be applied to government work outside the scope of defense. The designation follows a week of public criticism from President Donald Trump and Defense Secretary Pete Hegseth, who accused Anthropic of threatening national security.
Controversy and Counterarguments
Although Anthropic and the Pentagon had previously discussed potential compromises, the department has denied that negotiations are ongoing. Pentagon Chief Technology Officer Emil Michael stated that the department is not currently in talks with Anthropic. The Pentagon’s statement was blunt: “The military will not allow a vendor to insert itself into the chain of command by restricting the lawful use of a critical capability and putting our warfighters at risk.”
“The narrow exceptions Anthropic sought—limiting surveillance and autonomous weapons—relate to high-level usage areas, not operational decision-making,” Amodei explained.
The dispute highlights a broader tension between AI developers’ safety commitments and military control, with Anthropic now preparing to contest the designation in court. The move underscores growing scrutiny of AI systems in critical defense applications.
