Teams often ask the wrong first question: "Which AI tool should we use?"

The better first question is: "Is our current workflow reliable without AI?" If the answer is no, adding AI usually reproduces the same failures faster and at larger scale.

1. The lawyer sanctions case: fake citations, real consequences

In Mata v. Avianca (S.D.N.Y.), attorneys submitted a brief citing non-existent cases generated by ChatGPT. On June 22, 2023, Judge P. Kevin Castel sanctioned the attorneys and their firm, ordering a $5,000 penalty and requiring them to notify their client and each judge falsely identified as the author of the fabricated opinions.

This is the most cited legal AI failure for a reason, and the core problem was not "using AI." It was a workflow failure: no one verified that the cited cases existed before the filing reached the court.

2. Another legal warning: a second sanctions event in 2024

In January 2024, a federal judge in Alabama sanctioned three lawyers after their briefs included incorrect AI-generated citations. According to Associated Press reporting, the court imposed a $5,000 fine and required legal education focused on AI risks.

Two different courts, similar outcome: weak review controls, then formal penalties.

3. Outside law: customer and publication errors follow the same pattern

These failures are not limited to courts:

  • Air Canada was held liable after its website chatbot gave a customer incorrect information about its bereavement-fare policy; a small-claims tribunal ordered the airline to compensate him.
  • CNET paused its AI-written article experiment after reviews found errors requiring corrections across multiple pieces.

In both cases, the issue was operational. The workflow let unverified AI output reach users at decision points.

4. AI surfaces bad workflows faster

AI tools increase speed. That is the feature and the risk.

If your workflow already has weak source checks, unclear ownership, or no final sign-off, AI will expose those weaknesses quickly. Output volume increases before quality controls catch up. That is why teams experience "sudden" AI problems that are actually old process problems under higher throughput.

5. Applying AI to a bad workflow will not fix it

If a workflow does not define what counts as verified, AI cannot invent that discipline for you. It can only produce more drafts, more summaries, and more confident-sounding language.

NIST's AI Risk Management Framework makes the same point: its four core functions, Govern, Map, Measure, and Manage, all describe discipline that has to exist around an AI system rather than inside it. In plain language: process quality comes first, then automation.
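
As a rough illustration, the four functions can be restated as workflow questions a team can actually answer. The mapping below is our reading of the framework, not NIST's own wording:

    # Interpretive mapping of the NIST AI RMF functions to workflow questions.
    # The function names come from the framework; the questions are ours.
    RMF_WORKFLOW_CHECKS = {
        "Govern":  "Who owns final sign-off, and is that ownership written down?",
        "Map":     "Where does AI output touch a real decision or an external reader?",
        "Measure": "How often does AI output fail review, and who tracks that rate?",
        "Manage":  "When a failure surfaces, what is the correction and disclosure path?",
    }

    for function, question in RMF_WORKFLOW_CHECKS.items():
        print(f"{function}: {question}")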

6. A practical intermediate workflow you can implement now

Use this sequence before adding AI deeper into research, legal, or investigative operations (a minimal code sketch follows the list):

  • Define risk tiers: low-risk drafting vs. high-risk legal or factual assertions.
  • Require source traceability for every high-impact claim.
  • Add a mandatory human verification gate before external submission or publication.
  • Record model/tool usage in the matter file for auditability.
  • Use a disclosure rule where required by local court or policy.
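
The sketch below shows how the first four items fit together in Python. All names here (RiskTier, Claim, verification_gate, the matter_file list) are illustrative, not from any standard library; a real implementation would hook into your matter-management or publishing system:

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from enum import Enum

    class RiskTier(Enum):
        LOW = "low"    # internal drafting, summaries, brainstorming
        HIGH = "high"  # legal or factual assertions that leave the team

    @dataclass
    class Claim:
        text: str
        tier: RiskTier
        sources: list[str] = field(default_factory=list)  # citations, URLs, docket numbers
        verified_by: str | None = None                    # a named human reviewer, never a model

    @dataclass
    class AuditRecord:
        claim: Claim
        tool: str       # model or tool name and version used to draft the claim
        timestamp: str

    def verification_gate(claim: Claim, tool: str, matter_file: list[AuditRecord]) -> Claim:
        """Refuse to pass high-risk claims that lack sources or human sign-off."""
        if claim.tier is RiskTier.HIGH:
            if not claim.sources:
                raise ValueError(f"No traceable source for high-risk claim: {claim.text!r}")
            if claim.verified_by is None:
                raise ValueError(f"No human sign-off for high-risk claim: {claim.text!r}")
        # Record tool usage in the matter file for auditability, whatever the tier.
        matter_file.append(AuditRecord(claim, tool, datetime.now(timezone.utc).isoformat()))
        return claim

A filing or publishing step would call verification_gate on every high-impact claim before anything leaves the team. The exceptions are the point: unverified high-risk output should fail loudly rather than reach a court or a customer.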

Several courts now require explicit certification for AI-assisted filings, which reinforces this direction: use AI, but prove review discipline.

7. What this means for intermediate teams

At the intermediate level, your goal is not banning AI. Your goal is controlled acceleration.

Keep AI for discovery, drafting, and organization. Keep humans accountable for evidence checks, citations, legal representations, and final decisions. That is the line between speed and liability.

Bottom line

Real-world AI misuse cases are mostly workflow misuse cases. The tool did not remove responsibility; it increased the cost of weak process design.

Fix the workflow first. Then let AI multiply a process that is already defensible.

If you want to audit your current AI workflow before scaling it further, Daniel Powell can walk through your process and identify the risk points before a weak step becomes a liability. Get in touch.

Sources