Sam Altman said the agreement includes safeguards aligned with the company’s safety principles. 
Artificial intelligence firm OpenAI said it has reached an agreement with the US Department of War to deploy its models on classified networks, while rival Anthropic warned it could be designated a “supply chain risk” after refusing to allow certain military uses of its technology.
“Tonight, we reached an agreement with the Department of War to deploy our models in their classified network,” OpenAI CEO Sam Altman wrote in a post on X on 28 February.
Altman said the agreement includes safeguards aligned with the company’s safety principles. “Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” he wrote, adding that the department “agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
OpenAI said it would implement technical controls around the deployment. “We also will build technical safeguards to ensure our models behave as they should,” Altman said, adding that the company would station field deployment engineers and operate “on cloud networks only.”
The company called for similar conditions across the industry. “We are asking the DoW to offer these same terms to all AI companies,” Altman wrote, adding that OpenAI wants matters to “de-escalate away from legal and governmental actions and towards reasonable agreements.”
The announcement underscores the growing role of advanced AI systems in military operations, intelligence analysis and logistics, areas governments increasingly view as strategically critical.
Separately, Anthropic said it was facing possible punitive action from the same department after negotiations stalled over two proposed uses of its Claude model.
“Secretary of War Pete Hegseth shared on X that he is directing the Department of War to designate Anthropic a supply chain risk,” the company said in a statement.
Anthropic said the dispute centered on its refusal to permit “the mass domestic surveillance of Americans and fully autonomous weapons.”
“We do not believe that today’s frontier AI models are reliable enough to be used in fully autonomous weapons,” the company said, adding that such use “would endanger America’s warfighters and civilians.”
It also argued that widespread domestic surveillance would violate civil liberties. “We believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights,” the statement said.
The company warned that the designation would be unprecedented. “Designating Anthropic as a supply chain risk would be an unprecedented action, one historically reserved for US adversaries, never before publicly applied to an American company,” it said.
Anthropic added that it would challenge any such move legally. “No amount of intimidation or punishment from the Department of War will change our position on mass domestic surveillance or fully autonomous weapons. We will challenge any supply chain risk designation in court.”
The company said its services for most customers would remain unaffected even if the designation proceeds, noting that any restrictions would apply only to Department of War contract work.