The risk of unverified information from generative AI is compelling news organizations to establish formal ethics policies. These new rules often forbid publishing AI-created content unless the story is about AI itself, mandate disclosure of its use, and reinforce rigorous human oversight and fact-checking.

Related Insights

Generative AI is predictive and imperfect, unable to self-correct. A 'guardian agent'—a separate AI system—is required to monitor, score, and rewrite content produced by other AIs to enforce brand, style, and compliance standards, creating a necessary system of checks and balances.
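
As a rough illustration, a guardian-agent loop might look like the sketch below. It assumes a hypothetical `call_model` helper standing in for whatever LLM API is in use, and the rubric wording and score threshold are invented for the example.

```python
# Minimal sketch of a "guardian agent" review loop (illustrative only).
# `call_model` is a hypothetical stand-in for whatever LLM API is in use.
import json


def call_model(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real client (e.g. an HTTP request)."""
    raise NotImplementedError


GUARDIAN_RUBRIC = (
    "You are a compliance reviewer. Score the draft from 0-100 for adherence to "
    "brand voice, style guide, and regulatory constraints. If it falls short, "
    "rewrite the draft so it complies. Respond as JSON: "
    '{"score": <int>, "revised": "<text or null>"}'
)


def guardian_review(draft: str, threshold: int = 80, max_rounds: int = 3) -> str:
    """Have a separate model score the draft and rewrite it until it passes."""
    current = draft
    for _ in range(max_rounds):
        reply = call_model(f"{GUARDIAN_RUBRIC}\n\nDraft:\n{current}")
        verdict = json.loads(reply)
        if verdict["score"] >= threshold:
            return current                       # passes brand/style/compliance bar
        current = verdict["revised"] or current  # adopt the guardian's rewrite
    raise ValueError("Draft failed guardian review after several rewrite rounds")
```

The design point is that the reviewing model is separate from the drafting model, so the same system never grades its own work.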

To maintain quality, 6AM City's AI newsletters don't generate content from scratch. Instead, they use "extractive generative" AI to summarize information from existing, verified sources. This minimizes the risk of AI "hallucinations" and factual errors, which are common when AI is asked to expand upon a topic or create net-new content.
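
A minimal sketch of that extractive approach is shown below, reusing the hypothetical `call_model` helper from the previous sketch. The prompt wording and the crude grounding check are assumptions for illustration, not 6AM City's actual pipeline.

```python
# Illustrative "extractive" summarization call: the model is restricted to the
# supplied source text and told not to add outside claims.
# Reuses the hypothetical `call_model` helper defined in the earlier sketch.

EXTRACTIVE_PROMPT = (
    "Summarize the source below in three sentences for a local newsletter. "
    "Use only facts stated in the source. If a detail is not in the source, "
    "omit it rather than guessing.\n\nSOURCE:\n{source}"
)


def extractive_summary(source_text: str) -> str:
    """Summarize verified source material without inviting net-new claims."""
    return call_model(EXTRACTIVE_PROMPT.format(source=source_text))


def rough_grounding_check(summary: str, source_text: str) -> float:
    """Crude sanity check: share of summary words that also appear in the source."""
    source_words = set(source_text.lower().split())
    summary_words = summary.lower().split()
    if not summary_words:
        return 0.0
    hits = sum(1 for w in summary_words if w.strip(".,") in source_words)
    return hits / len(summary_words)
```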

When creating AI governance, differentiate based on risk. High-risk actions, like uploading sensitive company data into a public model, require rigid, enforceable "policies." Lower-risk, judgment-based areas, like when to disclose AI use in an email, are better suited for flexible "guidelines" that allow for autonomy.
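
One way to make that split operational is to encode it as data that tooling can enforce, as in this illustrative sketch; the action names and tiers are invented examples, not a real rulebook.

```python
# Sketch of encoding the policy/guideline split as data that tooling can act on.
from enum import Enum


class RuleKind(Enum):
    POLICY = "policy"        # rigid, enforceable, can hard-block an action
    GUIDELINE = "guideline"  # advisory, leaves room for judgment


AI_USE_RULES = {
    "upload_sensitive_data_to_public_model": RuleKind.POLICY,
    "publish_fully_ai_generated_story": RuleKind.POLICY,
    "disclose_ai_assistance_in_email": RuleKind.GUIDELINE,
    "use_ai_for_headline_brainstorming": RuleKind.GUIDELINE,
}


def check_action(action: str) -> str:
    """Return 'block', 'warn', or 'allow' depending on how the action is governed."""
    kind = AI_USE_RULES.get(action)
    if kind is RuleKind.POLICY:
        return "block"   # hard stop until compliance sign-off
    if kind is RuleKind.GUIDELINE:
        return "warn"    # nudge the user, but let them exercise judgment
    return "allow"
```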

Beyond data privacy, a key ethical responsibility for marketers using AI is ensuring content integrity. This means using platforms that provide a verifiable trail for every asset, check for originality, and offer AI-assisted verification for factual accuracy. This protects the brand, ensures content is original, and builds customer trust.
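
As an illustration, a verifiable trail can be as simple as a hash-chained log of events per asset. The sketch below is an assumption about how such a record might be structured, not any particular platform's implementation.

```python
# Illustrative provenance record for a content asset: a hash-chained trail of
# events (drafting, originality check, human review). Field names are assumptions.
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ProvenanceEvent:
    action: str    # e.g. "drafted_with_ai", "originality_check", "editor_approved"
    actor: str     # person or system responsible
    timestamp: str
    prev_hash: str
    entry_hash: str = ""

    def seal(self) -> None:
        payload = f"{self.action}|{self.actor}|{self.timestamp}|{self.prev_hash}"
        self.entry_hash = hashlib.sha256(payload.encode()).hexdigest()


@dataclass
class AssetTrail:
    asset_id: str
    events: list = field(default_factory=list)

    def record(self, action: str, actor: str) -> None:
        prev = self.events[-1].entry_hash if self.events else "genesis"
        event = ProvenanceEvent(action, actor,
                                datetime.now(timezone.utc).isoformat(), prev)
        event.seal()
        self.events.append(event)
```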

Journalist Casey Newton uses AI tools not to write his columns, but to fact-check them after they're written. He finds that feeding his completed text into an LLM is a surprisingly effective way to catch factual errors, a capability that has improved significantly over the past year.
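
A sketch of that post-draft fact-check pass is below, again using the hypothetical `call_model` helper; the prompt wording is an assumption rather than Newton's actual workflow.

```python
# Sketch of an after-the-fact fact-check pass over a finished draft.
# Reuses the hypothetical `call_model` helper; the prompt wording is an assumption.

FACT_CHECK_PROMPT = (
    "You are a fact-checker. List any claims in the column below that are "
    "likely wrong or need verification, quoting each claim and explaining why. "
    "If everything checks out, reply with 'No issues found.'\n\nCOLUMN:\n{text}"
)


def fact_check(draft: str) -> str:
    """Ask the model to flag questionable claims in an already written piece."""
    return call_model(FACT_CHECK_PROMPT.format(text=draft))
```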

AI's unpredictability requires more than just better models. Product teams must work with researchers on training data and specific evaluations for sensitive content. Simultaneously, the UI must clearly differentiate between original and AI-generated content to facilitate effective human oversight.
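
On the UI side, differentiation starts with tagging each content block by origin so the front end can label it. The sketch below is illustrative; the field names and label text are assumptions.

```python
# Sketch of tagging content blocks by origin so the UI can label AI output.
from dataclasses import dataclass


@dataclass
class ContentBlock:
    text: str
    origin: str  # "human", "ai_generated", or "ai_assisted"


def render_block(block: ContentBlock) -> str:
    """Prefix non-human content with a visible disclosure label."""
    labels = {"ai_generated": "[AI-generated] ", "ai_assisted": "[AI-assisted] "}
    return labels.get(block.origin, "") + block.text
```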

The New York Times is so consistent in labeling AI-assisted content that users trust that any unlabeled content is human-generated. This strategy demonstrates how the "presence of disclosure makes the absence of disclosure comforting," creating a powerful implicit signal of trustworthiness across an entire platform.

When pressed for sources on factual data, ChatGPT defaults to citing "general knowledge," providing misleading information with unearned confidence. This lack of verifiable sourcing makes it a liability for detail-oriented professions like journalism, requiring more time for correction than it saves in research.

As social media and search results become saturated with low-quality, AI-generated content (dubbed "slop"), users may develop a stronger preference for reliable information. This "sloptimism" suggests the degradation of the online ecosystem could inadvertently drive a rebound in trust for established, human-curated news organizations as a defense against misinformation.

For enterprises, scaling AI content without built-in governance is reckless. Rather than relying on manual policing, guardrails such as brand rules, compliance checks, and audit trails must be integrated from the start. The principle is "AI drafts, people approve," ensuring speed without sacrificing safety.
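
A minimal sketch of that "AI drafts, people approve" gate follows, with placeholder check functions and an audit log; the helper names and checks are assumptions, not a specific vendor's implementation.

```python
# Sketch of an "AI drafts, people approve" gate: automated brand and compliance
# checks run first, then a named human must sign off before anything publishes.
# Check functions and field names here are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Draft:
    text: str
    audit_log: list = field(default_factory=list)
    approved_by: str | None = None

    def log(self, event: str) -> None:
        self.audit_log.append(f"{datetime.now(timezone.utc).isoformat()} {event}")


def passes_brand_rules(text: str) -> bool:
    """Placeholder for real brand-voice checks (tone, banned phrases, etc.)."""
    return "BANNED PHRASE" not in text


def passes_compliance(text: str) -> bool:
    """Placeholder for legal/regulatory checks."""
    return len(text.strip()) > 0


def publish(draft: Draft, approver: str | None) -> bool:
    """Only publish AI-drafted content that passes checks AND has a human approver."""
    draft.log("automated checks started")
    if not (passes_brand_rules(draft.text) and passes_compliance(draft.text)):
        draft.log("blocked: failed automated guardrails")
        return False
    if not approver:
        draft.log("blocked: no human approval")
        return False
    draft.approved_by = approver
    draft.log(f"approved by {approver}; published")
    return True
```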