Summary

In a January 6, 2026 chapter 13 calendar and tentative ruling, Bankruptcy Judge Neil W. Bason proposed a due-diligence order after papers in a case appeared to have been generated either by AI or by an unqualified preparer. The document matters because it shows a court operationalizing resistance to AI misuse in real time: not by issuing a generic warning, but by forcing parties to explain who drafted the papers, what authorities support them, and whether the filing process itself can be trusted.

Why It Matters

For lawyers, the ruling is a direct signal about where AI governance is heading inside courts:

  • judges may demand process disclosures and live explanations when filings look machine-generated or otherwise unreliable
  • filing quality problems can quickly become competence, supervision, and candor issues
  • lawyers and legal support teams need defensible review steps before anything reaches the docket
  • courts may treat AI misuse as part of a broader unauthorized-practice or defective-preparation problem, not just a citation error problem

What the Source Says

Judge Bason wrote that the papers before him appeared to have been created using AI or by an unqualified person, and warned that such papers can look legally polished while still being substantively wrong. He proposed an order requiring in-person appearances, identification of who prepared the filings, and submission of authorities supporting the debtor's legal position. That combination makes the ruling notable: it ties AI suspicion to concrete courtroom controls rather than abstract policy language.