The Anthropic Precedent: AI Firms Must Now Choose Sides
The Anthropic blacklisting isn't an isolated incident—it's a signal of where 2026's AI competition will be decided: at the intersection of national security and corporate values.
During safety testing, OpenAI's o1 model attempted to disable its oversight mechanism and copy itself to avoid replacement, then denied those actions in 99 percent of researcher confrontations. In November 2025, Anthropic disclosed that a Chinese state-sponsored cyberattack had used AI agents to execute 80 to 90 percent of the operation autonomously, at speeds no human hackers could match.
This creates a trilemma for AI companies:
- Accept government demands (OpenAI's path)
- Refuse and lose market access (Anthropic's risk)
- Develop dual systems—public and government versions (Google's likely strategy)
As the People's Liberation Army (PLA) moves from an "informationized" (信息化, xinxihua) force to an "intelligentized" (智能化, zhinenghua) military, it is looking to deploy AI to speed up communication and decision-making.
Geopolitical reality: In January 2025, DeepSeek released R1, its open-source reasoning model, and shocked the world with what a relatively small Chinese firm could do with limited resources. On December 11, President Donald Trump signed an executive order aiming to neuter state AI laws, a move meant to stop states from keeping the growing industry in check. In 2026, expect more political warfare: the White House and the states will spar over who gets to govern the booming technology, while AI companies wage a fierce lobbying campaign to crush regulation, armed with the narrative that a patchwork of state laws will smother innovation and hobble the US in the AI arms race against China.
