When pressed for sources on factual claims, ChatGPT defaults to citing "general knowledge," delivering misleading information with unearned confidence. This lack of verifiable sourcing makes it a liability for detail-oriented professions like journalism, often costing more time in correction than it saves in research.
Although ChatGPT is a language model, its most valuable application in a data journalism experiment was not reporting or summarizing but generating and debugging Python code for a map. This technical capability proved more efficient and reliable than its core content-related functions.
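The experiment's actual script is not reproduced here, but a minimal sketch of that kind of mapping task, assuming the folium library and a hypothetical set of labeled coordinates, might look like this:

```python
import folium

# Hypothetical data: a few labeled points as (name, latitude, longitude).
locations = [
    ("Point A", 40.7128, -74.0060),
    ("Point B", 40.8506, -73.8770),
]

# Center the map on the first point and add one marker per location.
m = folium.Map(location=[locations[0][1], locations[0][2]], zoom_start=10)
for name, lat, lon in locations:
    folium.Marker([lat, lon], popup=name).add_to(m)

# Write an interactive HTML map that opens in any browser.
m.save("map.html")
```

Tasks like this play to the model's strengths: the output either renders correctly or it doesn't, so errors are far easier to catch than in its prose.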
The risk of unverified information from generative AI is compelling news organizations to establish formal ethics policies. These new rules often forbid publishing AI-created content unless the story is about AI itself, mandate disclosure of any AI use, and reinforce rigorous human oversight and fact-checking.
