Summary
Bloomberg Law reported that the Advisory Committee on Evidence Rules decided on May 7, 2026, to table its proposed AI-related evidence amendments until the fall, after members failed to reach consensus on whether the federal rules should move quickly against machine-generated expert-like evidence and deepfakes. For lawyers, the practical result is that the federal system still treats these problems as urgent but unresolved, leaving advocates to work under existing authenticity and expert-evidence doctrine rather than a fresh AI-specific rule.
Why It Matters
This story bears directly on legal workflows because it shows how the profession is trying to operationalize AI evidence without overcommitting to a brittle rule too early:
- trial lawyers still need to build ordinary foundations for AI-assisted or AI-generated evidence instead of expecting a near-term shortcut
- litigators challenging suspicious media still need fact-specific authenticity attacks, not just a label that something is a deepfake
- lawyers offering machine-generated analytics should expect scrutiny around validation, transparency, and human explanation
- courts are actively debating whether machine outputs can substitute for expert testimony at all
The most useful signal is not simply that "AI evidence is controversial," but that the judiciary is struggling to define where ordinary evidence rules end and AI-specific gatekeeping should begin.
PI Tool Angle
`n/a`
What the Source Says
Bloomberg Law reported that, after extended debate, the committee chose to table the AI proposals until its next fall meeting so it could hear more from technologists and litigators. According to the report, one proposal would have required Rule 702-style reliability scrutiny when machine-generated output functioned like expert opinion, while another would have shifted the burden to proponents of evidence challenged as AI-fabricated to prove authenticity. The official May 7, 2026, agenda book released by the U.S. Courts shows how detailed the contemplated disclosure burden had become: comments discussed identifying the system and version used, the data inputs and settings, the precise output offered, testing and error-rate information, and whether the output could be reproduced or audited.