The Line in the Sand

Once business partners, Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth reached a bitter stalemate in February as they renegotiated the contracts that dictate how the U.S. uses Anthropic's AI. Anthropic established a hard line against its AI being used for mass surveillance of Americans or to power autonomous weapons that can attack without human oversight. The Pentagon has argued that the Department of Defense — which President Donald Trump's administration calls the Department of War — should be permitted access to Anthropic's models for any "lawful use." Amodei wrote in a statement addressing the situation: "Anthropic understands that the Department of War, not private companies, makes military decisions... However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values."

Trump directed federal agencies to phase out their use of Anthropic tools over a six-month transition period and, in an all-caps social media post, called the AI company, which is valued at $380 billion, a "radical left, woke company." The Pentagon then moved to declare Anthropic a "supply-chain risk," a designation usually reserved for foreign adversaries that bars any company working with Anthropic from doing business with the U.S. military. Anthropic rival OpenAI then swooped in, announcing an agreement allowing its own models to be deployed in classified settings. It was a shock to the tech community, since reports had indicated that OpenAI would stick to Anthropic's red lines on military use of AI.

This is the defining geopolitical AI story of the quarter. It's not about technology. It's about who controls AI's constraints.

More than 30 OpenAI and Google DeepMind employees filed a statement Monday supporting Anthropic's lawsuit against the U.S. Defense Department, which late last week designated the AI firm a supply-chain risk after it refused to allow the Department of Defense (DOD) to use its technology for mass surveillance of Americans or to autonomously fire weapons. The brief states: "The government's designation of Anthropic as a supply chain risk was an improper and arbitrary use of power that has serious ramifications for our industry."

What's remarkable: Competitors rallied to Anthropic's defense, not necessarily because they agree with Anthropic's safety stance, but because they recognize that government power to brand private tech companies "supply chain risks" crosses a Rubicon.