Anthropic vs. Pentagon: 30+ OpenAI & DeepMind Staffers Sign Amicus Brief Against DOD
More than 30 OpenAI and Google DeepMind employees filed a statement Monday supporting Anthropic's lawsuit against the U.S. Defense Department, which had labeled the AI firm a supply-chain risk, according to court filings.
This is unusual. Competitors don't normally publicly defend each other against government pressure. But the stakes here cut through corporate rivalry.
What Triggered It
Late last week, the Pentagon labeled Anthropic a supply-chain risk, a designation usually reserved for foreign adversaries, after the AI firm refused to let the Department of Defense (DOD) use its technology for mass surveillance of Americans or for firing weapons autonomously.
The Employees' Position
"The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry," reads the brief, whose signatories include Google DeepMind chief scientist Jeff Dean.
Translated: if the government can unilaterally strip a company of contractor status for refusing to build autonomous weapons, the entire industry's ability to set safety standards evaporates.
The Deeper Issue
The DOD had argued that it should be able to use AI for any "lawful" purpose and not be constrained by a private contractor.
That's the crux. "Lawful" is broad. It covers surveillance. It covers autonomy at 50,000 feet. Once the Pentagon has legal cover, contractors lose the leverage to push back.
My take: this is the moment AI companies realize that safety principles only hold if the government backs them. If the Pentagon can brand safety-conscious firms as security threats, then adhering to safety commitments becomes a liability. The amicus brief is a smart move: it frames defending Anthropic as a defense of the entire industry's autonomy.
