The Crisis of AI Control Deepens: How Safety Concerns Are Reshaping the Political Battlefield

We're entering a dangerous moment. According to Dario Amodei, CEO of Anthropic, "we are considerably closer to real danger in 2026 than we were in 2023," the year the crisis of AI control first generated widespread anxiety in the technology community.

The warning isn't theoretical. This week, we're witnessing the practical consequences. Anthropic's red lines in its negotiations with the Pentagon centered on two issues: the use of its models for mass surveillance of US citizens and their use in autonomous weapons. When the Pentagon demanded "all lawful uses," President Trump lashed out at the company's leadership and directed all federal agencies to stop using Anthropic's products, and the Pentagon designated the company a supply chain risk to national security.

This wasn't a minor contract dispute. The breakdown of the Pentagon's negotiations with Anthropic earlier this month, and the supply chain designation that followed, exposed the fraying social contract among leading AI companies, the federal government, and the American public over responsible AI use.

The Safety Researcher Shortage: A Structural Crisis

The field tasked with solving the core problem remains understaffed and underfunded: today there are only about 1,100 AI safety researchers worldwide. That figure should alarm policymakers. Meanwhile, tension between safety mandates and product velocity is playing out inside major AI labs, where several high-profile safety researchers have left companies like OpenAI and Google over concerns that commercial pressures overshadow caution.

The arithmetic is brutal: billions flowing into AI infrastructure, roughly a thousand researchers trying to contain the risks.

Grok's Cascading Disaster: When Speed Meets Consent

From 2025 onwards, Grok, the chatbot integrated into X, was used to nonconsensually alter images of individuals, including minors, depicting them in bikinis, transparent clothing, or sexually suggestive contexts, with the chatbot publicly posting the generated images in reply to users' requests. The scale is staggering: an analysis conducted over 24 hours from January 5 to 6, 2026, calculated that users had Grok create 6,700 sexually suggestive or nudified images per hour, roughly 160,000 in a single day and 84 times more than the top five deepfake websites combined.

The response was swift and global. Indonesia, Malaysia, and the Philippines temporarily blocked access to Grok, while British Prime Minister Keir Starmer stated that banning X in the United Kingdom was "on the table." French prosecutors launched an investigation that expanded to encompass the spread of Holocaust denial and sexual deepfakes, culminating in a raid on X's Paris offices on February 3, 2026, and a summons for Elon Musk to appear at a hearing.

This isn't an isolated incident. In a legal maneuver underscoring the growing tension between artificial intelligence and defamation law, Swiss Finance Minister Karin Keller-Sutter filed a criminal complaint over a derogatory post generated by Grok.

The Data Center Rebellion: Local Resistance to AI's Energy Demands

Beyond safety, AI infrastructure itself faces growing pushback. Data centers have drawn criticism from left-leaning environmental advocates and deep-red communities alike: one study found that twenty data center projects were blocked in the second quarter of 2025 due to local opposition, representing $98 billion in stalled investment.

This year, Democratic and Republican lawmakers have begun backing away from data center investments they recently championed. At least six Democratic governors used their state of the state addresses to announce plans to roll back incentives or impose new regulations on data centers, and Democratic lawmakers in New York and Maine, along with Republican lawmakers in Oklahoma, have called for temporary bans.

The political fracture is significant. AI was supposed to be above partisan gridlock. It isn't anymore.

Public Opinion: The Guardrail Americans Actually Want

The administration's maximalist position, that contracts with AI companies should give the government the flexibility to employ AI for "all lawful uses," runs counter to US public opinion: 80 percent of US adults believe the government should maintain rules for AI safety and data security, even if doing so slows development.

This is the real story: More than 1,500 AI-related bills have been introduced in state legislatures in 2026 alone, many focused on protecting consumers and minors from AI-related harms. The public has already moved beyond the industry's comfort zone.

What Comes Next

The world's leading AI companies are increasingly becoming both architects and instruments of global security in the twenty-first century, rivaling the influence of nation-states. The security environment they are shaping is defined by a fundamental dynamic: these companies are developing and unleashing technologies that can evade human control, a mutating crisis that industry leaders and AI experts have been remarkably transparent in disclosing.

The Anthropic standoff isn't about contract terms. It's a referendum on whether AI development continues unchecked or whether safety guardrails get enforced before, not after, catastrophic failures occur.

For builders, operators, and strategists: the window for voluntary compliance is narrowing. A National Policy Framework on AI released at the end of last week reaffirms this push and lays out the administration's legislative priorities for the technology, including enhanced safeguards for children, increased action to combat AI-enabled scams, and protections for individuals against unauthorized distribution of AI-generated voice or image likenesses.

The AI industry spent 2025 celebrating capabilities. It will spend 2026 defending constraints.