Summary
California's State Bar opened public comment on six AI-related amendments to the Rules of Professional Conduct that would convert earlier practical guidance into rule-level obligations. The package matters because it does not stop at hallucinated citations. It extends AI-specific clarification into client communication, confidentiality, tribunal candor, managerial governance, and supervision of nonlawyer assistants.
Why It Matters
This matters directly for legal workflows because it shows a state bar moving from general AI caution to enforceable process expectations:
- lawyers would have to independently review, verify, and apply professional judgment to AI output used in representation
- client disclosure duties would turn on whether AI materially affects the scope, cost, manner, or decision-making of the representation
- confidentiality analysis would explicitly cover exposing client information to AI systems where a material risk of retention or misuse exists
- supervisory lawyers would be expected to implement real AI policies rather than relying on informal norms
The operational signal is that AI governance is being framed as ordinary professional responsibility, not a side policy for legal-tech enthusiasts.
Investigator Workflow
The clearest investigator implication is lawyer-supervised investigative support: outside investigators, paralegals, and other nonlawyer assistants who use AI for record triage, public-record research, chronology drafting, or open-source lead development would fall within the lawyer's supervision burden. The workflow maturity is `simple workflow` because the source concerns governance and review rules, not a specific tool build. The connection is partly source-stated: the proposed comment to Rule 5.3 expressly names investigators among nonlawyer assistants, while the practical examples of AI-assisted research and chronology work are cautious inferences from that rule structure.
What the Source Says
The State Bar's public-comment page says COPRAC approved proposed amendments to rules 1.1, 1.4, 1.6, 3.3, 5.1, and 5.3 at its March 13, 2026 meeting. The proposal would add language clarifying that lawyers must independently review and verify AI-generated output, must communicate with clients when AI materially affects representation, and may "reveal" confidential information by exposing it to technological systems where there is a material risk of inconsistent access or use. The proposed Rule 3.3 comment would explicitly require lawyers to verify that cited authorities exist and are not fabricated, misstated, or taken out of context before filing. The linked redline also adds AI governance to managerial-lawyer duties and says nonlawyer assistants, including investigators, must receive appropriate supervision concerning technology use.