OpenAI and Google employees defend Anthropic in lawsuit against U.S. government
“The government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry,” says the employees’ brief.

- Mar 10, 2026
- Updated Mar 10, 2026, 11:30 AM IST
Anthropic has filed a lawsuit against the U.S. Department of Defense after being labelled a “supply-chain risk”. Now, over 30 employees from OpenAI and Google DeepMind have submitted an amicus brief, filed on March 9, 2026, in support of Anthropic in its dispute with the U.S. government.
The amicus brief reads, “The government’s designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry.”
“We are concerned that the Defendants’ action harms public debate on the risks and benefits of AI as well as U.S. competitiveness in the field of AI and innovation more broadly,” it added. The brief was filed hours after Anthropic brought two lawsuits against the DOD and other federal agencies.
In the court filing, the Google and OpenAI employees make the point that if the Pentagon was “no longer satisfied with the agreed-upon terms of its contract with Anthropic,” the agency could have “simply cancelled the contract and purchased the services of another leading AI company.”
In other words, the employees argue that if the Pentagon was dissatisfied with the contract, it could simply have ended it and hired another AI company to provide similar services — which the U.S. government eventually did by partnering with OpenAI.
The brief further said, “If allowed to proceed, this effort to punish one of the leading U.S. AI companies will undoubtedly have consequences for the United States’ industrial and scientific competitiveness in the field of artificial intelligence and beyond.”
The brief also defends Anthropic’s usage restrictions, such as its bans on mass surveillance and autonomous weapons, as safeguards against dangerous or harmful uses of AI. It notes that there are currently no strong laws regulating AI, which makes the rules and safeguards companies build directly into their systems all the more important for preventing serious misuse of the technology.
For Unparalleled coverage of India's Businesses and Economy – Subscribe to Business Today Magazine
