The Model Theft Industry: China's AI Extraction Machine

Major U.S. AI companies, including OpenAI, Google, and Anthropic, are sharing intelligence about Chinese firms allegedly using 'distillation' techniques to extract capabilities from American AI models. Anthropic has specifically blocked Chinese-controlled companies from using Claude and has named three Chinese AI labs (DeepSeek, Moonshot, and MiniMax) as illicitly extracting model capabilities. The practice involves making large-scale data requests to a model's API, harvesting outputs that can be used to replicate and reverse-engineer its capabilities. The companies warn that the threat extends 'beyond any single company or region' and poses national security risks.
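To make the mechanism concrete, here is a minimal, purely illustrative sketch of how distillation works: a "student" model is trained on prompt/response pairs harvested from a "teacher" model's API. Every name and function here is a hypothetical stand-in (real pipelines make millions of API calls to a frontier model), but the shape of the attack is just this loop.

```python
# Illustrative distillation sketch. `query_teacher` stands in for a
# large-scale API call to a frontier "teacher" model; in a real
# extraction pipeline this would be an authenticated HTTP request.

def query_teacher(prompt: str) -> str:
    """Hypothetical stand-in for an API call to the teacher model."""
    canned = {
        "What is 2 + 2?": "4",
        "Capital of France?": "Paris",
    }
    return canned.get(prompt, "unknown")

def harvest_training_pairs(prompts):
    """Build a supervised fine-tuning dataset from teacher outputs."""
    return [{"prompt": p, "completion": query_teacher(p)} for p in prompts]

dataset = harvest_training_pairs(["What is 2 + 2?", "Capital of France?"])

# A student model fine-tuned on `dataset` inherits the teacher's
# visible behavior -- but not its safety training, which lives in the
# provider's serving stack and alignment process, not in raw text.
print(len(dataset))  # 2
```

Note that nothing here is a "breach" in the traditional sense: the attacker only ever sees outputs the API was designed to return, which is why scale of querying, rather than intrusion, is the detection signal.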

The Critical Detail: Safety Guardrails Are Lost

Distilled models often lack the safety guardrails designed to prevent malicious use. U.S. companies report measuring the prevalence of these attacks by tracking volumes of suspicious large-scale data requests.

This is where the threat multiplies. It's not just capability theft—it's the creation of unrestricted versions of frontier models that can be weaponized without safety constraints.
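The detection side hinted at above (tracking volumes of suspicious requests) can be sketched very simply. The following is an assumed, simplified illustration, not any provider's actual method: flag accounts whose request counts are wildly out of line with the typical account. The account names, traffic numbers, and cutoff ratio are all invented for the example.

```python
# Minimal sketch of volume-based abuse detection: flag API accounts
# whose request counts dwarf the median account's traffic.
# The 10x cutoff is an arbitrary illustrative threshold.
from statistics import median

def flag_suspicious(request_counts: dict, ratio_cutoff: float = 10.0) -> list:
    """Return accounts whose volume exceeds ratio_cutoff x the median."""
    baseline = median(request_counts.values())
    return [acct for acct, n in request_counts.items()
            if n > ratio_cutoff * baseline]

# Hypothetical daily request counts; acct_e behaves like a bulk extractor.
traffic = {"acct_a": 1_200, "acct_b": 950, "acct_c": 1_100,
           "acct_d": 1_050, "acct_e": 48_000}
print(flag_suspicious(traffic))  # ['acct_e']
```

Real systems would be far more sophisticated (query diversity, coordinated accounts, prompt patterns), but even this crude baseline shows why large-scale distillation is hard to hide: the attack's effectiveness depends on exactly the volume that gives it away.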

A Bigger Espionage Case Emerges

A former Google software engineer has been convicted on multiple counts of trade secret theft after a federal investigation revealed he illicitly transferred over 500 confidential files related to Google's proprietary artificial intelligence infrastructure. The stolen data included critical details on "TPU" (Tensor Processing Unit) chips and software used to power large-scale machine learning models. Prosecutors established that the engineer was secretly working for two China-based technology companies while still employed at Google, using the stolen information to help those firms gain a competitive edge in the global AI race.

My Take: This is the AI race turning into actual espionage. Model distillation is clever (extract data through API calls, no breach needed), but it requires massive scale and coordination. The fact that Anthropic can identify which labs are doing it suggests detection is possible, which means containment might be too. But the insider threat (the Google engineer) is the real wake-up call. You can block API access; you can't fully block insiders, especially in an overheated talent market where salaries are already stratospheric. The US government is right to treat this as a national security issue.

Sources: