Summary
This Reuters Institute chapter remains a durable reference point because it measures not just whether audiences fear AI in the newsroom, but which uses they will accept, which they resist, and why. The main pattern is stable and operationally useful: audiences are relatively more open to AI for behind-the-scenes assistance and delivery improvements, and far less comfortable with AI generating core content, especially realistic images and video.
Why It Matters
For journalists and newsroom leaders, the chapter matters because it turns "AI trust" into concrete product and disclosure decisions:
- backend tasks such as transcription and workflow support are easier to defend publicly
- audiences are far more wary when AI generates public-facing content
- serious topics such as politics attract much lower tolerance than lighter subjects such as sports
- human oversight and clear disclosure remain central to audience acceptance
This makes the piece a useful legacy benchmark for judging whether a proposed newsroom AI use case is merely technically possible or actually socially survivable.
PI Tool Angle
`n/a`
What the Source Says
The chapter reports that across 28 markets, only 36% of respondents felt comfortable using news produced mostly by humans with some help from AI, and only 19% felt comfortable using news produced mostly by AI with some human oversight. Audiences are most comfortable with behind-the-scenes uses and with improvements to delivery or accessibility, and least comfortable with AI generating entirely new content. The chapter also finds especially strong opposition to realistic-looking AI images and video, stronger skepticism around politically consequential topics, and broad agreement that humans should remain in the loop. The qualitative work cited in the chapter involved 45 digital news users in Mexico, the UK, and the US.