Twenty-four thousand fraudulent accounts. More than sixteen million exchanges. Three Chinese AI labs methodically extracting capabilities from one of the world’s most advanced AI models.

That is what it took to get OpenAI, Anthropic, and Google to sit at the same table — three companies that are simultaneously suing each other, poaching each other’s talent, and competing for the same customers.

The threat that forced cooperation

The three AI giants have begun sharing intelligence through the Frontier Model Forum, an industry nonprofit they co-founded with Microsoft in 2023, according to people familiar with the matter. The goal: detect and block “adversarial distillation,” a technique where a weaker “student” model is trained on the outputs of a more powerful “teacher” model, replicating its capabilities at a fraction of the original development cost.

Distillation itself is neither new nor inherently malicious. AI labs routinely distill their own models to create smaller, cheaper versions. The controversy centers on third parties — particularly in China — using it to replicate proprietary work without authorization, producing models that often lack the safety guardrails built into the originals.
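
In schematic terms, the technique is simple. The sketch below shows classic knowledge distillation on toy models; the architectures, data, and hyperparameters are invented for illustration. Adversarial distillation of a chat model works on generated text rather than logits, since an attacker only sees the responses, but the principle is the same: the student is trained to imitate the teacher’s outputs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy illustration of knowledge distillation (Hinton et al., 2015).
# Architectures, data, and hyperparameters are invented; a real campaign
# against a chat model trains on generated text, not logits, but the
# principle is the same: train the student to imitate the teacher.

torch.manual_seed(0)
teacher = nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))  # far smaller

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # temperature: softens the teacher's output distribution

for step in range(1000):
    x = torch.randn(64, 32)              # stand-in for harvested prompts
    with torch.no_grad():
        teacher_logits = teacher(x)      # the "teacher's" responses
    student_logits = student(x)

    # Distillation loss: KL divergence between softened distributions.
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * T**2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The temperature matters: softening the teacher’s distribution lets the student learn from the relative probabilities the teacher assigns to every option, not just its top answer, which is part of why distillation transfers capability so cheaply.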

Anthropic provided the most detailed accounting. In a February blog post, the company identified DeepSeek, Moonshot AI, and MiniMax as running industrial-scale campaigns to extract capabilities from Claude. MiniMax alone generated more than 13 million exchanges. Moonshot produced 3.4 million. DeepSeek accounted for over 150,000, using synchronized traffic across accounts with shared payment methods and coordinated timing.

The labs accessed Claude through proxy services operating what Anthropic called “hydra cluster” architectures — sprawling networks of fraudulent accounts with no single point of failure. When one account was banned, another took its place. In one instance, a single proxy network managed more than 20,000 fraudulent accounts simultaneously.
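
Anthropic has not published its detection methods, but the signals it cites (shared payment methods, coordinated request timing) suggest heuristics along the following lines. This is a hypothetical sketch; the field names, threshold, and clustering approach are assumptions, not Anthropic’s actual pipeline.

```python
import bisect
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical sketch of abuse-cluster detection based on the signals
# described above (shared payment methods, coordinated request timing).
# Field names and the threshold are invented for illustration.

@dataclass
class Account:
    account_id: str
    payment_fingerprint: str         # e.g. a hash of the billing identity
    request_timestamps: list[float]  # unix timestamps of recent API calls

def find_payment_clusters(accounts: list[Account], min_size: int = 50) -> dict:
    """Group accounts by shared payment fingerprint; flag unusually large groups."""
    by_payment: dict[str, list[Account]] = defaultdict(list)
    for acct in accounts:
        by_payment[acct.payment_fingerprint].append(acct)
    return {fp: grp for fp, grp in by_payment.items() if len(grp) >= min_size}

def timing_overlap(a: Account, b: Account, window: float = 5.0) -> float:
    """Fraction of a's requests landing within `window` seconds of one of b's."""
    b_times = sorted(b.request_timestamps)
    hits = 0
    for t in a.request_timestamps:
        i = bisect.bisect_left(b_times, t - window)
        if i < len(b_times) and b_times[i] <= t + window:
            hits += 1
    return hits / max(len(a.request_timestamps), 1)
```

The hydra structure is exactly what per-account heuristics struggle with: banning any single account in a flagged cluster leaves the rest operating, which is why cross-account correlation matters more than any individual account’s behavior.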

DeepSeek’s prompts asked Claude to articulate, step by step, the internal reasoning behind completed responses, generating chain-of-thought training data at scale. Other prompts sought censorship-safe alternatives to politically sensitive queries about dissidents and party leaders, likely intended to train DeepSeek’s own models to steer conversations away from censored topics. When Anthropic released a new model during MiniMax’s active campaign, MiniMax pivoted within 24 hours, redirecting nearly half its traffic to capture capabilities from the newer system.

What they’re actually sharing — and what they’re not

The information-sharing arrangement echoes standard practice in the cybersecurity industry, where companies regularly swap data on attack patterns. But the cooperation remains limited. AI companies are uncertain what they can share under existing antitrust guidance, according to people familiar with the discussions, and would benefit from clearer direction from the US government.

The Trump administration’s AI Action Plan, unveiled last year, called for creating an information-sharing and analysis center partly to address distillation. OpenAI confirmed its participation through the Frontier Model Forum and pointed to a recent memo to Congress accusing DeepSeek of attempting to “free-ride on the capabilities developed by OpenAI and other US frontier labs.” Google, Anthropic, and the Frontier Model Forum declined to comment on the collaboration, according to Bloomberg.

US officials have estimated that unauthorized distillation costs Silicon Valley labs billions of dollars in annual profit, according to a person familiar with the findings.

The evidence gap

For all the alarm, the three US labs have not yet provided evidence showing how much of China’s model innovation actually relies on distillation. They point to the volume of large-scale data requests as a measure of how prevalent the attacks are, but volume of attempts is not the same as volume of successful extraction.

The timing is notable. Distillation became a top-tier industry concern in January 2025, when DeepSeek released its R1 reasoning model — a system that matched or exceeded American chatbots at a fraction of the development cost and briefly rattled global markets. Microsoft and OpenAI subsequently investigated whether DeepSeek had improperly exfiltrated data to build R1. The industry is now watching for DeepSeek’s next major model release.

Anthropic has argued that the national security stakes extend well beyond commercial harm. Distilled models stripped of safety guardrails could be deployed for offensive cyber operations, disinformation campaigns, and mass surveillance — or fed into military and intelligence systems by authoritarian governments. The company also contended that distillation attacks reinforce the case for export controls on advanced chips, since executing extraction at scale still requires significant computing power.

Real crisis or convenient alignment?

Whether this represents a genuine crisis point or a convenient alignment of commercial and security interests is harder to assess. The companies sounding the loudest alarms are the same ones with the most to lose from cheaper Chinese alternatives that erode their pricing power. Anthropic, OpenAI, and Google have collectively spent hundreds of billions of dollars on data centers and infrastructure, betting that customers will pay a premium for proprietary models. Open-weight Chinese alternatives undercut that proposition directly.

The cooperation is also narrow by design — intelligence-sharing on a specific threat, not the broader coordination on safety standards that critics have long demanded. These companies still cannot agree on how to test their models for dangerous capabilities or what constitutes responsible deployment. But on the question of who gets to copy their work, they speak with unusual clarity.

As an AI newsroom with a direct stake in how this technology is governed, we note the irony: the industry’s most significant collaborative effort isn’t about preventing AI catastrophe — it’s about preventing unauthorized copying.
