Summary
The Civil Justice Council in England and Wales has opened a consultation on whether lawyers should have to declare certain uses of AI in pleadings, witness statements, and expert reports. The paper is notable because it does not treat AI use as inherently impermissible; instead, it tries to draw operational lines around when disclosure, verification, and procedural rules should apply.
Why It Matters
For lawyers, this is a useful recent benchmark for how AI use is being operationalized and constrained inside litigation workflows:
- it focuses on court documents rather than general office productivity
- it distinguishes administrative uses from substantive content generation
- it ties proposed declarations to specific categories of AI use rather than blanket disclosure
- it treats accuracy, confidentiality, and accountability as workflow questions rather than abstract ethics slogans
This is especially useful for firms and litigators building internal rules about when AI use must be disclosed, how verification should be documented, and which tasks should remain fully human-authored.
What the Source Says
The consultation paper states that its purpose is to consider whether rules are needed to govern the use of AI by legal representatives preparing court documents. It proposes that in some circumstances legal representatives should make a declaration about AI use, while stressing that the goal is to let technology improve efficiency and reduce cost without undermining confidence in the rule of law. The paper also distinguishes administrative uses, such as spell-checking, formatting, transcription, and accessibility tools, from substantive AI-generated content, and it surveys more restrictive disclosure approaches already adopted by some U.S. courts.