US-based artificial intelligence company Anthropic on Saturday said it will challenge in court any move by the US government to designate it a “supply chain risk”, after Secretary of War Pete Hegseth announced plans to take action against the firm over disagreements on the use of its AI model, Claude.
Hegseth had posted on X earlier in the day that he was directing the Department of War to label the company a supply chain risk. The proposed action follows months of negotiations between the two sides that reportedly stalled over two key exceptions sought by Anthropic: refusing to allow the use of its AI for mass domestic surveillance of Americans and for fully autonomous weapons.
In a statement, Anthropic said that the Trump administration's move to designate the company as a supply chain risk was "unprecedented". The company said it had not received any formal communication from either the Department of War or the White House regarding the status of the designation.
"Designating Anthropic as a supply chain risk would be an unprecedented action—one historically reserved for US adversaries, never before publicly applied to an American company. We are deeply saddened by these developments. As the first frontier AI company to deploy models in the US government’s classified networks, Anthropic has supported American warfighters since June 2024 and has every intention of continuing to do so," the company said.
Anthropic maintained that it supports all lawful uses of AI for national security, with the exception of the two contested areas. According to the company, those exceptions have not affected any government mission to date.
Explaining its stance, Anthropic said current “frontier” AI models are not reliable enough to be deployed in fully autonomous weapons systems. "Allowing current models to be used in this way would endanger America’s warfighters and civilians," the company said. "Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights."
Hegseth has suggested the designation could bar military contractors from doing business with Anthropic. The company disputed that interpretation, arguing that under 10 USC 3252, any supply chain risk designation would only apply to the use of Claude in Department of War contracts and would not restrict contractors from using its services for other clients.
Anthropic said individual and commercial customers would see no disruption in access to Claude through its API, website or products. It added that its sales and support teams remain available to address concerns, while reaffirming its intention to continue supporting US military operations within what it described as clear ethical boundaries.