Summary

On March 30, 2026, the New Jersey courts issued a notice urging attorneys and law firms to adopt written AI policies, attaching a starter template that covers confidentiality, human review, citation checking, tool settings, transcription, and client communications. The key operational point is that the judiciary is no longer talking about AI in the abstract; it is spelling out the minimum workflow controls a firm should have if it wants to use AI in day-to-day practice.

Why It Matters

For lawyers, this is a direct operations story about how AI gets normalized inside practice:

  • legal research, drafting, document review, and case management are named as routine AI touchpoints
  • firms are expected to understand hidden or embedded AI features inside ordinary software
  • citation verification and factual checking are framed as mandatory lawyer work, not optional cleanup
  • confidentiality, supervision, and training are treated as policy design problems, not one-off ethics questions

It is especially useful because it converts general talk about competence into a concrete checklist firms can adopt or audit against.

PI Tool Angle

`n/a`

What the Source Says

The notice says AI tools are already being integrated into legal research, drafting, document review, and case management, and warns that some embedded AI features may not be obvious to users. It encourages firms to adopt and periodically update internal policies as part of maintaining professional competence and ethical compliance. The attached starter policy requires:

  • human review of all AI-assisted work
  • verification of case citations and other legal authorities against official sources
  • review of privacy and sharing settings before using tools
  • extra caution around AI transcription products

It also states that a written policy is not a safe harbor by itself; implementation, training, supervision, and verification are what reduce risk.