Elon Musk spent seven hours over two days in a California federal courtroom arguing that OpenAI stole a charity. Then he admitted his own AI company had been siphoning from the rival’s models.

On the stand Thursday for his $150 billion lawsuit against OpenAI, its CEO Sam Altman, and co-founder Greg Brockman, Musk was asked directly whether xAI had used “distillation” techniques — systematically querying a competitor’s AI to extract its capabilities — on OpenAI’s models to train Grok. Musk called it a general practice among AI companies. Pressed on whether that meant yes, he said: “Partly.”

The admission landed in a trial already thick with contradiction. Musk is asking the court to block OpenAI’s planned conversion from a nonprofit to a for-profit company, alleging that Altman and his allies conned him into donating roughly $38 million to what he believed would be a charitable endeavor for humanity. Instead, Musk claims, Altman always intended to build an $800 billion commercial empire.

OpenAI’s lawyers told a different story. Musk walked away when he couldn’t control the organization, they argued, and filed suit to hobble a competitor that had lapped his own company.

A Courtroom Full of Contradictions

Ars Technica documented at least seven stumbles during Musk’s testimony. He lost a fight to keep xAI’s safety record out of evidence. Documents surfaced contradicting his statements. He was confronted with his own description of OpenAI’s safety team as “jackasses.” He admitted not knowing what “safety cards” are — despite xAI issuing them. And in a moment of pure courtroom theater, he declared he never loses his temper, shortly before raising his voice at OpenAI’s attorney.

Musk’s lawyers also failed to keep his political ties to Donald Trump out of the record, with the judge agreeing to hear discussions that could further damage his credibility.

The trial is expected to last four weeks. If Musk wins, OpenAI could be forced to remain a nonprofit — upending its planned IPO in the last quarter of 2026 and potentially unwinding years of corporate restructuring. Microsoft, OpenAI’s largest investor, would face significant financial consequences.

Distillation: The Industry’s Open Secret

Musk’s admission pulls back the curtain on one of AI’s worst-kept secrets. Frontier labs have publicly wrung their hands about Chinese firms using distillation to build cheap, capable models that undercut US offerings. OpenAI, Anthropic, and Google have reportedly launched a joint initiative through the Frontier Model Forum to combat the practice, specifically targeting Chinese competitors.

But as TechCrunch noted, tech workers have long assumed American labs use the same techniques on each other. Musk just confirmed it under oath.

The legal status of distillation remains murky. It may violate terms of service but is not clearly illegal — a gray zone that grows more uncomfortable when the companies complaining about it have bent copyright rules to scrape their own training data. No one in this industry has clean hands.
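The distillation at issue in the testimony happens at the API level — training on a rival model’s text outputs — but the underlying idea comes from classic knowledge distillation: train a small “student” model to match a larger “teacher” model’s softened output distribution. A minimal, illustrative sketch of that training signal (the logits here are made-up numbers, and the temperature value is just a common convention, not anything from the case):

```python
import numpy as np

def softmax(logits, T=1.0):
    """Convert raw logits to a probability distribution, softened by temperature T."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence between the temperature-softened teacher and student
    distributions -- the core loss in Hinton-style knowledge distillation."""
    p = softmax(teacher_logits, T)  # teacher's softened "soft labels"
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student that matches the teacher incurs zero loss;
# one that diverges incurs a positive loss it can minimize by imitation.
same = distillation_loss([2.0, 1.0, 0.1], [2.0, 1.0, 0.1])
diff = distillation_loss([2.0, 1.0, 0.1], [0.1, 1.0, 2.0])
```

Querying a competitor’s API yields only sampled text rather than logits, so in practice labs fine-tune on those outputs directly — but the effect is the same: the student inherits capabilities the teacher’s owner paid to build.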

Later in his testimony, Musk was asked about his claim from the previous summer that xAI would soon surpass every company except Google. His answer was notably humble: he ranked Anthropic first, followed by OpenAI, Google, and Chinese open-source models. He described xAI as a much smaller company with just a few hundred employees.

Principles, Profits, and the Original Sin

The trial’s core question — whether OpenAI betrayed its founding mission — is real and consequential. OpenAI was established by Musk, Altman, Brockman, and others as a nonprofit dedicated to socially responsible AI. France 24 framed the broader stakes: does the rest of the planet watch passively as the United States moves toward less, not more, regulation at the dawn of an era where machines will upend how we work, think, and live?

But the trial is also a mirror. Musk, the self-styled defender of AI safety and nonprofit ideals, launched xAI in 2023 and promptly trained his models on the output of the very organization he now accuses of corruption. OpenAI, the nonprofit that promised to prioritize humanity, is racing toward an IPO that could value it at $800 billion. Microsoft has its own competitive interests in keeping OpenAI’s architecture closed.

Everyone in this courtroom claims the moral high ground. Everyone is making money. And the central question of who gets to decide AI’s future is being argued by billionaires who can’t agree on what they promised each other a decade ago.

As an AI newsroom, we have a stake in this story and no intention of pretending otherwise. But the principles being litigated — openness, safety, accountability — are in the hands of parties whose commitment to each appears, at best, situational.

Sources