Summary

Nieman Lab reported that four reporters at Suncoast Searchlight asked their board to investigate the editor-in-chief's undisclosed use of ChatGPT-based editing and shortening workflows after staff found fabricated and altered quotations in drafts. The story is a strong cautionary reference because it turns abstract disclosure debates into a concrete newsroom governance failure involving internal trust, partner-distribution risk, and the absence of any AI policy.

Why It Matters

For journalists, this is directly relevant to:

  • whether AI-assisted edits and rewrites must be disclosed inside a newsroom
  • the risk of hallucinated or altered quotations entering publication workflows
  • the need for public and internal AI policies before managers experiment on live editorial work
  • how newsroom trust erodes when AI use is hidden from colleagues

It is especially valuable because it focuses on the editorial middle zone: AI that is not writing full stories but is still altering text in ways that can damage credibility.

What the Source Says

Nieman Lab reported that staff found at least one fabricated quote in a housing story and another hallucinated or altered quote in a shortened draft about school mental-health cuts. According to the story, editor Emily Le Coz later acknowledged that she had experimented with ChatGPT to create shortened versions of stories, and that one such draft was shared with a partner publication without any disclosure of AI involvement. Nieman Lab also reported that board member Kelly McBride of Poynter was copied on the response, and that the board said it would adopt AI guidelines because Suncoast had neither a public nor an internal policy.