The law firm that advises OpenAI on “safe and ethical deployment” of artificial intelligence filed fabricated case citations before a federal bankruptcy judge this month. On April 18, Sullivan & Cromwell partner Andrew Dietderich wrote to Chief Judge Martin Glenn apologizing for what he called AI “hallucinations” — invented citations, misquoted authorities, and non-existent legal sources — across multiple filings in the Chapter 15 bankruptcy of Prince Global Holdings.
Roughly 40 corrections were catalogued in a three-page, single-spaced attachment. The errors spanned the emergency motion, verified petition, joint administration motion, scheduling motion, and a couple of declarations. Some involved wrong pin cites or incorrect volume numbers. Others required rewriting entire sentences. Parenthetical quotes were attributed to cases that did not contain them. A corrected filing submitted April 18 showed the damage in redline.
Senior partners at Sullivan & Cromwell reportedly bill at rates reaching $2,500 per hour. Clients pay that premium for exactitude.
What reached the court was its opposite.
Breakdown in Review
The flawed filings landed April 8 in the US Bankruptcy Court for the Southern District of New York. Sullivan & Cromwell represents the liquidators of Prince Global Holdings, a group of British Virgin Islands entities whose owner, Chen Zhi, was indicted by the Justice Department in October for allegedly running an internet scam compound in Cambodia. The liquidators are tracing billions in cryptocurrency to compensate victims.
None of these stakes were enough to ensure the documents were accurate.
The errors were not caught by S&C’s internal review. They were flagged by opposing counsel at Boies Schiller Flexner, which represents Chen. Dietderich told the court the firm’s AI policies “were not followed” in preparing the motion. S&C maintains two mandatory training modules on AI use and an Office Manual instructing lawyers to “trust nothing and verify everything.” Those safeguards, Dietderich wrote, “are designed to prevent exactly this situation.”
They didn’t. The firm’s standard citation review also failed to catch errors that “appear to have resulted in whole or in part from manual error,” he conceded.
Dietderich joined S&C almost 30 years ago, founded its global restructuring practice, and holds a Chambers Band 1 ranking. He signed the apology without naming another lawyer. He also apologized directly to Boies Schiller chair Matthew Schwartz.
Stacked Ironies
The episode requires no embellishment. Sullivan & Cromwell touts its advisory role helping OpenAI deploy AI responsibly. The errors were caught by Boies Schiller Flexner — itself an inductee into what legal commentator David Lat calls the AI “Hall of Shame” after its own filing mistakes last year. The delays cost the liquidators weeks of runway in their effort to trace funds for scam victims.
A Profession-Wide Pattern
Sullivan & Cromwell is the most prominent entry yet in a growing catalog of AI-related court errors. Legal technologist Damien Charlotin has documented more than 1,300 instances of generative AI producing fabricated content in legal filings, according to Original Jurisdiction. Bloomberg Law’s tracker counts over 330 examples. Judges have begun imposing fines on the attorneys responsible.
The pattern reveals a structural vulnerability. As AI automates more legal drafting steps, the human reviewer enters the workflow later, confronting what looks like a finished product. The output reads fluently. Citations appear correctly formatted. A time-pressed associate may not question what the machine produced.
As Above the Law’s Joe Patrice observed, there is no substitute for having lawyers “print everything out, take a ruler and a red pen, and go line by line cross-checking everything” — the laborious process firms like S&C charge premium rates to perform. Tools exist specifically to catch AI hallucinations before filing. S&C apparently didn’t use one effectively.
The Competence Gap
This is not a Sullivan & Cromwell problem. It is an institutional readiness problem. If one of the world’s most profitable law firms — with mandatory AI training, written safeguards, and billing rates that signal elite judgment — cannot prevent AI fabrications from reaching a federal judge, the gap between vendor promises and organizational competence is wider than most institutions care to admit.
The implications extend beyond law. Any regulated profession adopting AI — medicine, finance, engineering — faces the same structural risk: a system producing plausible output, a human assuming it is correct, and a review process built for careful human work rather than confident machine generation.
Professional liability frameworks were not designed for this. When a doctor misreads an AI-generated diagnosis or an advisor relies on a fabricated risk model, the accountability chain will look familiar: a human professional who signed off on machine output they didn’t adequately verify.
A hearing on the corrected filings is scheduled for Wednesday before Judge Glenn.
Sources
- Sullivan & Cromwell Apologizes to Judge for AI Hallucinations — Bloomberg Law
- Sullivan & Cromwell Files Emergency ‘Please Don’t Sanction Us For All These AI Hallucinations’ Letter — Above the Law
- An AI Screw-Up By… Sullivan & Cromwell? — Original Jurisdiction