Summary

Reuters reported that the California Senate passed SB 574, a bill that would require lawyers to verify the accuracy of AI-generated material before using it in practice and would also impose parallel guardrails on arbitrators. The story matters because it turns the profession's familiar "don't trust hallucinations" warning into proposed statutory workflow requirements.

Why It Matters

This is a strong, direct legal-workflow story: it shows one of the clearest attempts yet to operationalize AI use in legal practice rather than merely warn about it:

  • lawyers would need to personally review AI-assisted work for factual and legal accuracy before using it
  • hallucinated citations and invented propositions would become an explicit compliance target rather than an implied professionalism problem
  • confidential or privileged information would need to be protected before it reaches an AI system
  • arbitrators would be barred from delegating core decisional work to AI tools

The practical signal is that California is treating AI competence as a workflow-and-supervision issue, not just a technology preference.

What the Source Says

Reuters said SB 574 would require lawyers to verify the accuracy of every citation and any other material produced with AI before using it in filings or client work. The published bill text adds operational detail: proposed Business and Professions Code section 6068.1 would require lawyers to review AI output for accuracy, take reasonable steps to prevent disclosure of confidential or privileged information, and avoid discriminatory or biased uses of AI. A parallel Code of Civil Procedure provision, identified in the bill as section 128.7, would bar arbitrators from delegating their decision-making function to AI while still permitting administrative and support uses.