Summary
Reuters reported on the June 2023 Mata v. Avianca sanctions order, in which a federal judge penalized New York lawyers for filing a brief that cited fictitious cases generated by ChatGPT. The case remains one of the foundational legal-AI stories because it set a lasting benchmark: AI assistance is not inherently forbidden, but lawyers remain fully responsible for verifying citations, understanding what they file, and correcting the record with candor.
Why It Matters
For lawyers, this story is directly operational:
- it demonstrates a concrete malpractice and sanctions pathway arising from careless AI-assisted legal research
- it clarifies that delegation to AI does not displace the lawyer's professional gatekeeping role
- it serves as a durable training example for controls on legal research, drafting, supervision, and court-filing review
- it explains why firm AI policies need verification steps, not just generic warnings
It remains the benchmark failure case: later hallucination incidents are usually interpreted against the standard it helped set.
What the Source Says
Reuters reported that U.S. District Judge P. Kevin Castel sanctioned lawyers Steven Schwartz and Peter LoDuca and their firm after a filing in the Avianca case relied on six fictitious authorities generated by ChatGPT. The judge imposed a $5,000 penalty and found that the lawyers acted in bad faith through "acts of conscious avoidance" and false or misleading statements after the errors were challenged. The ruling also made the longer-term principle explicit: the court said there is nothing inherently improper about using a reliable AI tool for assistance, but existing rules still impose a gatekeeping duty on attorneys to ensure accuracy before filing.