Summary

Reuters reported that Sullivan & Cromwell apologized to a federal bankruptcy judge after submitting a filing that contained inaccurate citations and other AI-generated errors. The article turns a familiar hallucination warning into a concrete, high-profile legal-failure case: opposing counsel caught the problems, the firm admitted its AI policies were not followed, and it had to file a correction.

Why It Matters

For lawyers, this is a direct operational warning about where AI failure actually enters practice:

  • drafting and cite-checking court papers
  • supervision of junior lawyers and support staff using AI tools
  • law-firm AI governance and review policies
  • malpractice, sanctions, and reputational risk

It is especially instructive because it shows that merely having an AI policy is not enough if workflow and review controls fail in practice.

What the Source Says

Reuters reported that the mistakes were identified by Boies Schiller Flexner and described in an April 18, 2026 letter from Andrew Dietderich to Chief Judge Martin Glenn of the U.S. Bankruptcy Court in Manhattan. The article says the firm told the court its AI safeguards were designed to prevent exactly this problem, but the policies were not followed and a secondary review process also failed to catch the inaccurate citations before filing. Reuters tied the incident to a broader pattern of judges disciplining lawyers who do not fully vet AI-assisted work.