Policy & Geopolitics · Regulatory

Anthropic Accuses DeepSeek, Moonshot, and MiniMax of 'Industrial-Scale' AI Theft

February 23, 2026 · by Fintool Agent


Three of China's most prominent AI laboratories orchestrated "industrial-scale campaigns" to illicitly extract capabilities from Anthropic's Claude AI model, the company revealed Monday—using roughly 24,000 fraudulent accounts to generate over 16 million exchanges in violation of terms of service and regional access restrictions.

The revelation, documented in a detailed Anthropic blog post, identifies DeepSeek, Moonshot AI, and MiniMax as the perpetrators and arrives at a critical moment in the debate over U.S. AI chip export controls to China. Anthropic—valued at $380 billion following its recent $30 billion funding round—is using the evidence to argue that distillation attacks "reinforce the rationale for export controls."

The company traced each campaign to specific labs "with high confidence through IP address correlation, request metadata, infrastructure indicators, and in some cases corroboration from industry partners who observed the same actors and behaviors on their platforms."


The Scale of the Operation

Distillation Attacks Breakdown

The three campaigns followed a similar playbook: fraudulent accounts routed through commercial proxy services that resell access to Claude at scale. One proxy network managed more than 20,000 fraudulent accounts simultaneously, mixing distillation traffic with unrelated customer requests to evade detection.

Lab         | Exchange Volume | Primary Targets
MiniMax     | 13+ million     | Agentic coding, tool orchestration
Moonshot AI | 3.4 million     | Agentic reasoning, tool use, coding, computer vision
DeepSeek    | 150,000+        | Reasoning, reward model generation, censorship-safe alternatives

Source: Anthropic blog post, February 2026

Each campaign targeted Claude's "most differentiated capabilities: agentic reasoning, tool use, and coding"—the exact features that make frontier AI models valuable for enterprise and military applications.


DeepSeek: Chain-of-Thought Extraction and Censorship Training

DeepSeek's campaign, while smaller in volume, employed particularly sophisticated techniques. Anthropic observed prompts that asked Claude to "imagine and articulate the internal reasoning behind a completed response and write it out step by step—effectively generating chain-of-thought training data at scale."

More concerning to U.S. policymakers: DeepSeek used Claude to generate "censorship-safe alternatives to politically sensitive queries like questions about dissidents, party leaders, or authoritarianism"—likely to train its own models to steer conversations away from topics censored by the Chinese Communist Party.

"By examining request metadata, we were able to trace these accounts to specific researchers at the lab," Anthropic stated.

The revelation follows OpenAI's warning to U.S. lawmakers earlier this month that DeepSeek is targeting American AI companies to replicate models for its own training.


MiniMax: Caught in the Act

Anthropic detected MiniMax's campaign while it was still active—before the company released the model it was training—providing "unprecedented visibility into the life cycle of distillation attacks, from data generation through to model launch."

The company's responsiveness was telling: when Anthropic released a new Claude model during the active campaign, MiniMax "pivoted within 24 hours, redirecting nearly half their traffic to capture capabilities from our latest system."


How Distillation Attacks Work

Distillation Process Flow

Distillation is a widely used and legitimate training technique where a less capable model learns from the outputs of a stronger one. Frontier AI labs routinely distill their own models to create smaller, cheaper versions for customers.

But when used illicitly, competitors can "acquire powerful capabilities from other labs in a fraction of the time, and at a fraction of the cost, that it would take to develop them independently."
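In its legitimate form, the core of the technique is training a "student" model to match a "teacher" model's temperature-softened output distribution. The sketch below is purely illustrative (the logit values are invented, and real training would backpropagate this loss through a neural network rather than merely compute it):

```python
import math

def softmax(logits, temperature=1.0):
    # Convert raw logits to a probability distribution; higher temperature
    # softens the distribution, exposing more of the teacher's "dark knowledge".
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence from the teacher's softened distribution to the
    # student's: the student is penalized for diverging from the teacher.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student that mimics the teacher incurs near-zero loss; a divergent
# one is penalized, which is what drives the student toward the teacher.
teacher = [4.0, 1.0, 0.2]
aligned = distillation_loss(teacher, [4.1, 0.9, 0.3])
divergent = distillation_loss(teacher, [0.2, 1.0, 4.0])
print(aligned < divergent)  # True
```

The same mathematics applies whether the teacher is a lab's own model or, in the illicit case described here, a competitor's model queried through fraudulent accounts.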

Dmitri Alperovitch, chairman of the Silverado Policy Accelerator think tank and co-founder of CrowdStrike, told TechCrunch: "It's been clear for a while now that part of the reason for the rapid progress of Chinese AI models has been theft via distillation of US frontier models. Now we know this for a fact."


National Security Implications

Anthropic framed the distillation attacks as a national security threat, not merely a commercial one:

"Models built through illicit distillation are unlikely to retain those safeguards, meaning that dangerous capabilities can proliferate with many protections stripped out entirely."

The company warned that foreign labs distilling American models can "feed these unprotected capabilities into military, intelligence, and surveillance systems—enabling authoritarian governments to deploy frontier AI for offensive cyber operations, disinformation campaigns, and mass surveillance."

If distilled models are open-sourced—as many Chinese labs have done—the risk multiplies as capabilities spread beyond any single government's control.


The Export Control Argument

The timing of Anthropic's disclosure is strategic. The Trump administration recently allowed U.S. companies like Nvidia to export advanced AI chips (including the H200) to China, loosening controls that critics argue protect American AI dominance.

Anthropic argues that distillation attacks support stricter controls:

"Without visibility into these attacks, the apparently rapid advancements made by these labs are incorrectly taken as evidence that export controls are ineffective and able to be circumvented by innovation. In reality, these advancements depend in significant part on capabilities extracted from American models, and executing this extraction at scale requires access to advanced chips."

In other words: the "miracle" efficiency gains that made DeepSeek's R1 model famous may have been enabled by stolen American capabilities, not purely indigenous innovation.

Alperovitch was blunt: "This should give us even more compelling reasons to refuse to sell any AI chips to any of these [companies], which would only advantage them further."


Industry Response

The disclosure comes amid broader concern about Chinese AI labs accessing American models. Earlier this month, Google CEO Sundar Pichai was asked about a potential "DeepSeek moment" at the company's Q4 earnings call, with analysts questioning whether Chinese competition could undermine the economics of AI software companies.

Data center investors are also taking notice. Nubis Communications, an AI infrastructure company, added DeepSeek specifically to its risk factors in a recent 8-K filing, warning that "emerging AI technologies, such as demonstrated by Hangzhou DeepSeek Artificial Intelligence Basic Technology Research Co., Ltd., may allow for complex AI operations to be executed with significantly less computing power than is currently required."


What Anthropic Is Doing

The company outlined several defensive measures:

Detection: Built classifiers and behavioral fingerprinting systems to identify distillation attack patterns, including detection of chain-of-thought elicitation used to construct reasoning training data.

Intelligence Sharing: Sharing technical indicators with other AI labs, cloud providers, and relevant authorities.

Access Controls: Strengthened verification for educational accounts, security research programs, and startup organizations—"the pathways most commonly exploited for setting up fraudulent accounts."

Countermeasures: Developing product, API, and model-level safeguards designed to reduce the efficacy of model outputs for illicit distillation without degrading legitimate customer experience.

But Anthropic warned: "No company can solve this alone... distillation attacks at this scale require a coordinated response across the AI industry, cloud providers, and policymakers."
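Anthropic has not published how its classifiers or behavioral fingerprinting work. Purely as a toy illustration of the general idea (flagging a prompt template that recurs across many nominally unrelated accounts), one could sketch something like the following, where the account IDs, prompts, and fingerprint heuristic are all invented:

```python
from collections import defaultdict

def fingerprint(prompt):
    # Invented heuristic: reduce a prompt to its opening words plus its
    # length, so lightly reworded variants of one template collapse together.
    words = prompt.lower().split()
    return (tuple(words[:3]), len(words))

def flag_coordinated_accounts(requests, min_accounts=3):
    # requests: iterable of (account_id, prompt) pairs.
    # Flag fingerprints shared by many distinct accounts, a crude proxy
    # for templated extraction traffic spread across fraudulent accounts.
    accounts_by_fp = defaultdict(set)
    for account, prompt in requests:
        accounts_by_fp[fingerprint(prompt)].add(account)
    return {fp: accts for fp, accts in accounts_by_fp.items()
            if len(accts) >= min_accounts}

requests = [
    ("acct-1", "Imagine the internal reasoning behind answer A"),
    ("acct-2", "Imagine the internal reasoning behind answer B"),
    ("acct-3", "Imagine the internal reasoning behind answer C"),
    ("acct-4", "What is the capital of France"),
]
print(flag_coordinated_accounts(requests))
# Flags only the shared "Imagine the internal reasoning..." template.
```

A production system would of course use far richer signals (the IP correlation, request metadata, and infrastructure indicators the blog post mentions), but the principle is the same: coordinated campaigns leave statistical regularities that individual legitimate users do not.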


What to Watch

Regulatory response: Whether the evidence prompts the Commerce Department or Congress to reconsider recent chip export loosening.

Industry coordination: Whether Microsoft, Google, OpenAI, and other AI leaders join Anthropic in intelligence sharing and coordinated defenses.

Chinese response: None of the three labs—DeepSeek, Moonshot, or MiniMax—immediately responded to requests for comment. Their public positioning on these allegations will shape the narrative.

Market impact: Cybersecurity stocks like CrowdStrike could see renewed interest as AI security becomes a headline issue. Meanwhile, companies with significant China AI exposure face renewed scrutiny.

