Pentagon Escalates Fight with AI Company Anthropic by Declaring It a Supply-Chain Risk
After weeks of intense negotiations that ultimately collapsed, the United States Department of Defense has taken the unprecedented step of formally labeling American artificial intelligence company Anthropic as a "supply-chain risk." This dramatic move significantly escalates the ongoing conflict between the Pentagon and the AI firm over acceptable use policies for its advanced AI program, Claude, and sets the stage for a potentially lengthy court battle.
Unprecedented Designation for an American Company
The decision, which was first reported by The Wall Street Journal on Thursday citing a source familiar with the matter, carries substantial consequences. This designation will effectively bar defense contractors from working with the U.S. government if they incorporate Anthropic's Claude AI into their products or services. While this label is typically reserved for foreign companies with connections to adversarial governments, this marks the first public instance of an American technology firm receiving such a classification from the Pentagon.
In a company blog post published on Thursday evening, Anthropic CEO Dario Amodei confirmed that the notification from the Pentagon was received on Wednesday. "As we wrote on Friday, we do not believe this action is legally sound, and we see no choice but to challenge it in court," Amodei stated, signaling the company's intent to pursue legal recourse against the government's decision.
Core of the Conflict: Ethical AI Use Policies
The fundamental disagreement centers on Anthropic's firm refusal to permit the Pentagon to utilize Claude for two specific, high-stakes applications:
- The development or deployment of autonomous lethal weapons systems that operate without meaningful human oversight.
- The implementation of mass surveillance programs.
The Pentagon has countered that Anthropic's insistence on controlling how the government uses its technology grants an unacceptable level of power to a private corporation. Anthropic, for its part, remained unconvinced that the U.S. government would respect its established ethical red lines on AI use. Negotiations deteriorated as the Pentagon increasingly threatened to impose the supply-chain risk designation should Anthropic fail to comply with its demands. After Anthropic publicly announced last Thursday that it would not acquiesce, the Pentagon followed through on its threat.
(The Pentagon declined to comment on the record for this report. Anthropic did not immediately respond to a request for comment.)
Uncertain Scope of Enforcement and Military Dependencies
The breadth of the Pentagon's intended enforcement of this designation remains unclear. On Friday, when announcing the intent to label Anthropic a risk, Defense Secretary Pete Hegseth made a sweeping statement, indicating that any company engaged in "any commercial activity" with Anthropic—even outside of their Pentagon-related work—would face cancellation of their defense contracts. Anthropic responded at the time, arguing that such a broad interpretation and application of the law would be unlawful.
Secretary Hegseth and President Donald Trump have set a six-month deadline for Anthropic to remove Claude from all government systems. Disentangling the AI from military applications, however, may prove exceptionally difficult. Recent events underscore this challenge: after the U.S. executed a targeted missile strike over the weekend that killed Iranian Supreme Leader Ayatollah Ali Khamenei, subsequent reports indicated that Claude-powered intelligence tools played an instrumental role in the mission's success, highlighting the program's deep integration into defense operations.
The standoff represents a critical juncture in the relationship between cutting-edge AI developers and national security agencies, raising profound questions about ethics, control, and the limits of corporate authority over technology used by the state.