Summary
This NCSC report is one of the clearest recent records of how judges are actually using generative AI in court work. Its main lesson is not autonomous judging but careful, low-risk use for efficiency, communication, and access-to-justice support, paired with a strong insistence that judges remain the final decision-makers.
Why It Matters
For lawyers, this is a direct operational account of the environment their filings and arguments now enter:
- the report is based on 13 confidential interviews with state and federal judges across 10 states
- judges described using GenAI for repetitive or administrative tasks, clearer communication, and exploratory access-to-justice tools
- the report says early adopters still treat hallucination, privacy, public-perception, and deskilling risks as central constraints
- the strongest norm in the report is that judges may use AI to support their work, but not to surrender their judgment
That matters for briefing strategy, disclosure expectations, and future court policy because it shows where judicial experimentation is happening before many formal rules are settled.
PI Tool Angle
`n/a`
What the Source Says
The report says 13 one-hour interviews were conducted in October and November 2025 with state and federal judges in 10 states. It highlights three core findings: judges must remain the deciders; early adopters are using GenAI to save time and improve access to justice; and early adopters are actively trying to mitigate known risks. Cited examples of value include low-risk administrative work, summaries and readability improvements, and court-service interfaces for self-represented litigants. The leading risks named are hallucinations, privacy and cybersecurity, negative public perception, deskilling, and heavier filing volume from self-represented litigants using GenAI.