Summary
Nieman Lab reported that two 2024 Pulitzer-winning journalism projects explicitly disclosed using AI or machine learning in their reporting workflows. The piece remains a valuable reference because it anchors the AI-in-journalism discussion in award-winning investigative work rather than generic chatbot speculation.
Why It Matters
The story shows two durable reporting patterns:
- machine learning can surface patterns in records collections too large to review manually
- visual or object-detection models can help investigators scan large image corpora for specific traces, then hand the flagged material back to reporters for verification
It also matters institutionally because it shows disclosure norms hardening at the top of the profession rather than staying informal.
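The second pattern above is a flag-then-verify loop: a model scores each image, high-confidence hits go into a review queue, and humans make the final call. A minimal sketch, assuming a hypothetical `Detection` record standing in for real model output (the scores here are mock values, not a trained detector):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    image_id: str
    score: float  # model confidence that the target trace is present

def triage(detections, threshold=0.8):
    """Split model output into a flagged queue for human review;
    the model narrows the pool but never makes the final call."""
    flagged = [d for d in detections if d.score >= threshold]
    # Highest-confidence items first, so reviewers see likely hits early.
    return sorted(flagged, key=lambda d: d.score, reverse=True)

# Mock detections standing in for a real model pass over an image corpus.
batch = [
    Detection("img_001", 0.95),
    Detection("img_002", 0.40),
    Detection("img_003", 0.86),
]
queue = triage(batch)
print([d.image_id for d in queue])  # flagged subset, best first
```

The threshold trades reviewer time against recall: lowering it surfaces more true hits at the cost of more false positives to filter by hand.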
PI Tool Angle
This points to an advanced private-investigator workflow: use a custom classifier or object-detection system to narrow huge records or image sets into a smaller pool for human review. The source does not frame the work as private investigation, but the transfer path to complaint-file review, surveillance-image triage, or misconduct-pattern detection is clear.
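That narrowing step can be sketched for free-text records. This is a deliberately crude keyword scorer standing in for a trained classifier like the one the source describes; the keywords and record text are illustrative, not drawn from any real complaint file:

```python
# Score free-text records so human reviewers start with the most
# promising subset. A trained classifier would replace score_record.
KEYWORDS = {"missing", "failed to locate", "no follow-up"}

def score_record(text: str) -> int:
    lowered = text.lower()
    return sum(1 for kw in KEYWORDS if kw in lowered)

def narrow(records: list[str], min_score: int = 1) -> list[str]:
    """Return only records worth human review, best-scoring first."""
    scored = [(score_record(r), r) for r in records]
    return [r for s, r in sorted(scored, reverse=True) if s >= min_score]

records = [
    "Complainant reports officer failed to locate missing relative.",
    "Routine traffic stop, no complaint narrative.",
    "Missing person report closed with no follow-up documented.",
]
for r in narrow(records):
    print(r)
```

The same shape holds whether the scorer is keyword matching, a fine-tuned text classifier, or an object-detection model over images: the machine compresses the haystack, and the investigator verifies every flagged item.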
What the Source Says
Nieman Lab reported that Pulitzer administrators explicitly asked entrants about AI use and that two winners disclosed it. City Bureau and the Invisible Institute used a custom machine-learning tool called Judy to parse Chicago police misconduct narratives, ultimately surfacing 54 allegations tied to missing-person investigations. The New York Times visual investigations team trained an object-detection model to find crater patterns linked to 2,000-pound bombs in satellite imagery and then manually filtered false positives, eventually identifying more than 200 matching craters in southern Gaza by November 17, 2023.