The Atlantic's CEO Nick Thompson draws a clear line for AI in journalism. He advocates for using it extensively for reporting tasks like finding stories, analyzing data, or checking for chronological gaps. However, since a byline promises human authorship, AI should never write the final prose, even if it becomes a better writer.
While AI tools once gave creators an edge, their widespread availability now risks producing undifferentiated output. IBM's VP of AI, who grew his following to 200k with AI-assisted content, now uses AI less. The new edge is spending more time on unique human thinking and reserving AI for initial ideation, not final writing.
Generative AI is a powerful tool for accelerating the production and refinement of creative work, but it cannot replace human taste or generate a truly compelling core idea. The most effective use of AI is as a partner to execute a pre-existing, human-driven concept, not as the source of the idea itself.
To maintain quality, 6AM City's AI newsletters don't generate content from scratch. Instead, they use "extractive generative" AI to summarize information from existing, verified sources. This minimizes the risk of AI "hallucinations" and factual errors, which are common when AI is asked to expand upon a topic or create net-new content.
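The article doesn't show 6AM City's actual pipeline, but the "extractive" constraint is easy to picture as a prompt that forbids the model from going beyond supplied sources. A minimal sketch, assuming a generic chat-style LLM API (the function name and message format here are illustrative, not 6AM City's implementation):

```python
def build_extractive_prompt(sources: list[str]) -> list[dict]:
    """Build chat messages that restrict the model to the given source texts."""
    # Number each verified source so the summary can be traced back to it.
    source_block = "\n\n".join(
        f"Source {i + 1}:\n{text}" for i, text in enumerate(sources)
    )
    system = (
        "Summarize only facts stated in the sources below. "
        "Do not add outside information, speculate, or infer new claims. "
        "If the sources do not cover something, omit it."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": source_block},
    ]

messages = build_extractive_prompt(
    ["The city council approved the 2025 parks budget on Tuesday."]
)
# These messages would then be passed to whatever chat-completion
# endpoint the newsroom uses; the API call itself is omitted here.
```

The key design choice is that the model is never asked to expand on a topic, only to compress what verified sources already say, which is where hallucination risk is lowest.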
Medium's CEO argues that writing's future is secure because its core function is the process of structured thinking, not just content output. The act of articulating ideas reveals flaws and deepens understanding for the writer—a cognitive benefit that delegating to AI would eliminate.
AI models can provide answers, but they lack innate curiosity. The unique and enduring value of humans, especially in fields like journalism, is their ability to ask insightful questions. This positions human curiosity as the essential driver for AI, rather than a skill that AI will replace.
Journalist Casey Newton uses AI tools not to write his columns but to fact-check them after they're written. He finds that feeding his completed text into an LLM is a surprisingly effective way to catch factual errors, something he attributes to a significant improvement in model capability over the past year.
The most effective use of AI in content creation is not generating generic articles. Instead, feed it unique primary sources like expert interview transcripts or customer call recordings, and ask it to extract key highlights and structure a detailed outline, pairing human insight with AI's summarization power.
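The transcript-to-outline workflow above can be sketched as a prompt builder. This is a hypothetical illustration of the approach, not any particular team's tooling; the function name, message format, and wording are assumptions, and the actual API call is left out:

```python
def outline_from_transcript(transcript: str, num_highlights: int = 5) -> list[dict]:
    """Build chat messages asking the model to mine a primary source,
    not invent content: extract quotes first, then outline around them."""
    system = (
        f"From the interview transcript provided, extract the {num_highlights} "
        "most insightful quotes or data points, quoting them verbatim. "
        "Then propose a detailed article outline built around those highlights. "
        "Use only material that appears in the transcript."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": transcript},
    ]

messages = outline_from_transcript(
    "Interviewer: What changed in 2024?\n"
    "Expert: Retention doubled once we shipped weekly digests."
)
# Pass `messages` to your chat-completion client of choice;
# the human then edits the resulting outline into a draft.
```

The division of labor mirrors the text: the human supplies the unique raw material and final judgment, while the model only compresses and structures it.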
The risk of unverified information from generative AI is compelling news organizations to establish formal ethics policies. These new rules often forbid publishing AI-created content unless the story is about AI itself, mandate disclosure of its use, and reinforce rigorous human oversight and fact-checking.
AI tools are best used as collaborators for brainstorming or refining ideas. Relying on AI for final output without a "human in the loop" results in obviously robotic content that hurts the brand. A marketer's taste and judgment remain the most critical components.
Effective AI content strategy uses tools to handle first drafts and outlines, accelerating production and ensuring consistency. This frees up humans to perform the crucial roles of editing, shaping perspective, and injecting unique, lived experiences, which AI cannot replicate. The goal is amplification, not automation.