Summary
This Sedona Conference preprint gives one of the clearest recent snapshots of how AI is actually entering the federal court system. The main finding is not runaway automation but bounded, uneven adoption: use is centered on legal research and document review, training coverage is thin, and chamber-level rules are inconsistent.
Why It Matters
This is a strong direct-legal-workflow story because it moves the conversation from abstract court-policy debate to measured operational behavior inside chambers:
- more than half of responding judges reported at least some AI use
- the dominant tasks were legal research and document review rather than drafting filed rulings
- many chambers still lack training or a settled internal policy
- judges are drawing narrower boundaries than many vendor narratives imply
For lawyers, that matters because it changes what kinds of AI-assisted work judges may already be exposed to, what kinds of errors they are worried about, and where courts may next formalize guardrails.
PI Tool Angle
`n/a`
What the Source Says
The survey grouped judicial use into broad categories: 31.8% of responding judges reported using AI to review, search, or analyze documents, and 30.0% reported using it to conduct legal research. The paper also found that 61.1% reported either receiving no AI training from court administration or being unsure whether training had been offered. On governance, about one-third of judges reported that AI use is permitted or encouraged in their chambers, while roughly one-quarter reported having no official policy at all.