Summary
The Victorian Law Reform Commission published a full report on how AI should be used in courts and tribunals, moving beyond generic caution to a concrete operating model. The report covers current and emerging use across pre-hearing, hearing, and post-hearing work, recommends 30 reforms, and pairs a principles-based approach with role-specific guidance for court users, judicial officers, and staff.
Why It Matters
This is a strong direct legal-workflow story because it shows what mature court-side AI governance looks like in practice:
- principles instead of one-off bans
- separate guidance for litigants, lawyers, judges, and court staff
- explicit treatment of evidence, privacy, human rights, and accountability
- governance, training, and suitability checks before new AI uses are approved
It is especially useful as a benchmark for lawyers who need to understand how courts may normalize some AI uses while sharply limiting others.
What the Source Says
The commission says the report was tabled on February 3, 2026 and contains 30 recommendations. It recommends a principles-based regulatory approach, eight principles to guide safe use, guidelines for court users and judicial officers, stronger governance and assurance processes, and formal education and awareness programs. The report's structure also makes clear that AI use is being considered across pre-hearing, hearing, and post-hearing stages, while separately addressing evidence law, privacy, and guidance that would prohibit AI use for judicial decision-making.