Summary

Researchers at McGill's Centre for Media, Technology and Democracy published a March 2026 audit showing that major AI systems can reproduce the value of Canadian reporting while rarely crediting the news organizations that produced it. The memo and accompanying technical brief argue that AI companies are not just training on journalism but increasingly redistributing it as a substitute product with weak default attribution.

Why It Matters

This is a strong direct journalism story because it shifts the AI-news debate from broad fear to measurable operating questions:

  • when AI answers substitute for visiting the original article
  • whether links are enough if model responses do not name the reporting outlet
  • how paywalled reporting can still be economically displaced
  • which attribution behaviors are technical choices rather than unavoidable limitations

For journalists and publishers, it is operationally useful because it frames AI as an ingestion, production, and distribution problem, not just a licensing or plagiarism dispute.

PI Tool Angle

`n/a`

What the Source Says

The memo says the team tested four major AI models on 2,267 Canadian news stories in English and French across 18,134 queries to see what the models had absorbed from training data and whether they attributed it. It also tested 140 specific recent articles from seven Canadian outlets across 3,360 web-enabled conditions to see whether the models produced substitutes for current reporting and whether they credited the source. The memo reports that, without web search, the models provided no source attribution 82% of the time. With web access enabled, the same models covered enough of the original reporting to substitute for the source in 54 to 81% of cases, linked to Canadian news sites in 29 to 69% of responses, and named the originating outlet in the response text in only 1 to 16% of cases. When explicitly asked for citations, attribution rates rose to 74 to 97%.