Summary
VTDigger reported that Vermont lawyers are increasingly experimenting with AI even though the state has not imposed AI-specific practice rules. The story is anchored by a concrete filing failure: a Vermont lawyer used AI in a motion and checked the cited cases but missed fabricated quotations that a judge later flagged. The piece is useful because it shows how a jurisdiction can tolerate AI use under existing professional-conduct rules while still leaving lawyers exposed to supervision, confidentiality, and quality-control failures.
Why It Matters
For lawyers, this is a direct operations story about:
- cite-checking and quote verification in filings
- confidentiality risks when prompting public AI systems
- bar-discipline and complaint exposure even without AI-specific rules
- the gap between broad permission to use AI and the actual controls needed to do so safely
It is especially useful as an accessible example of how ordinary practice settings, not just large national firms, are handling AI adoption.
What the Source Says
VTDigger reported that attorney Lamar Enzor used AI to help draft a motion, only to have Judge Jennifer Barrett identify five mistakes, including quotations that did not appear in the cited cases. The story also reported that Vermont Judiciary officials decided, after a committee review, not to impose AI-specific restrictions and to rely instead on existing rules of professional conduct. The article further quoted Vermont legal leaders describing AI as a major efficiency gain while warning that lawyers still have to verify outputs and protect client confidentiality.