To combat AI hallucinations and fabricated statistics, users must explicitly instruct the model in their prompt. The key is to request 'verified answers that are 100% not inferred and provide exact source,' as generative AI models infer information by default.
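As a minimal sketch, that instruction can be baked into a reusable prompt wrapper; the `verified_prompt` helper and the exact `VERIFY_SUFFIX` wording below are illustrative, not canonical, and the LLM call itself is omitted so you can use whichever client you already have:

```python
# Minimal sketch: bake the anti-inference instruction into a reusable wrapper.
# Send the result through whichever LLM client you already use.

VERIFY_SUFFIX = (
    "\n\nOnly give verified answers that are 100% not inferred. "
    "Provide the exact source for every statistic or claim. "
    "If you cannot verify something, say so instead of guessing."
)

def verified_prompt(question: str) -> str:
    """Append the verification instruction to any user question."""
    return question.strip() + VERIFY_SUFFIX

print(verified_prompt("How many local newsletters launched in the US last year?"))
```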
A significant portion (30-50%) of the statistics, news items, and niche details ChatGPT produces is inferred rather than factually grounded. Users must be aware that even official-sounding stats can be completely fabricated, risking credibility in professional work such as presentations.
To maintain quality, 6AM City's AI newsletters don't generate content from scratch. Instead, they use "extractive generative" AI to summarize information from existing, verified sources. This minimizes the risk of AI "hallucinations" and factual errors, which are common when AI is asked to expand upon a topic or create net-new content.
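6AM City's actual pipeline isn't public, but the extract-don't-invent constraint can be sketched as a prompt builder; the `extractive_summary_prompt` function and its wording are assumptions for illustration only:

```python
def extractive_summary_prompt(source_text: str, max_sentences: int = 3) -> str:
    """Build a prompt that only allows restating what the source already says."""
    return (
        f"Summarize the article below in at most {max_sentences} sentences. "
        "Use only facts stated in the article; do not add background, "
        "context, or any new claims.\n\n"
        f"ARTICLE:\n{source_text}"
    )
```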
A powerful workflow is to explicitly instruct your AI to act as a collaborative thinking partner—asking questions and organizing thoughts—while strictly forbidding it from creating final artifacts. This separates the crucial thinking phase from the generative phase, leading to better outcomes.
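One hedged way to phrase that instruction as a system prompt (the exact wording below is illustrative, not a fixed formula):

```python
# Sketch of a "thinking partner, not artifact maker" system prompt.
THINKING_PARTNER = (
    "Act as a collaborative thinking partner. Ask me clarifying questions, "
    "organize my thoughts, and point out gaps in my reasoning. "
    "Do NOT produce final artifacts (documents, emails, decks, code): "
    "we are in the thinking phase, not the generative phase."
)
```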
Effective GPT instructions go beyond defining a role and goal. A critical component is the "anti-prompt," which sets hard boundaries and constraints (e.g., "no unproven supplements," "don't push past recovery metrics") to ensure safe and relevant outputs.
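As a sketch, a custom GPT's instruction block might pair the role and goal with an explicit anti-prompt section; the fitness-coach framing borrows the examples above, and the layout is an assumption:

```python
# Illustrative custom-GPT instructions with a hard-boundary "anti-prompt" section.
GPT_INSTRUCTIONS = """\
ROLE: You are a personal training coach.
GOAL: Build weekly workout plans from my logged sessions.

ANTI-PROMPT (hard boundaries):
- Never recommend unproven supplements.
- Never push training loads past my recovery metrics.
- If a request falls outside these boundaries, decline and explain why.
"""
```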
When asked to analyze unstructured data like interview transcripts, LLMs often hallucinate compelling but non-existent quotes. To maintain integrity, always include a specific prompt instruction like "use quotes and cite your sources from the transcript for each quote." This forces the AI to ground its analysis in the actual data.
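A minimal sketch of that grounding instruction, assuming the transcript is passed inline; the function name and the speaker/timestamp citation format are illustrative:

```python
def transcript_analysis_prompt(transcript: str, question: str) -> str:
    """Force every claim to be backed by a verbatim, cited quote."""
    return (
        f"{question}\n\n"
        "Use quotes and cite your sources from the transcript for each "
        "quote (speaker and line or timestamp). If no supporting quote "
        "exists, say 'not supported by the transcript'.\n\n"
        f"TRANSCRIPT:\n{transcript}"
    )
```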
Anthropic suggests that because LLMs are trained on text that includes writing about AI itself, they respond to field-specific terminology. Phrases like 'Think step by step' or 'Critique your own response' act as cheat codes, activating more sophisticated, accurate, and self-correcting operational modes in the model.
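A rough two-pass sketch of how those phrases might be wired into a draft-then-critique loop; `call_llm` is a placeholder for whatever client you use, and only the prompt wording is the point:

```python
def call_llm(prompt: str) -> str:
    raise NotImplementedError("wire in your own LLM client here")

def solve_with_self_critique(task: str) -> str:
    # Pass 1: the step-by-step trigger phrase.
    draft = call_llm(f"{task}\n\nThink step by step.")
    # Pass 2: the self-correction trigger phrase.
    return call_llm(
        f"Task: {task}\n\nDraft answer:\n{draft}\n\n"
        "Critique your own response, then give a corrected final answer."
    )
```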
Getting a useful result from AI is a dialogue, not a single command. An initial prompt often yields an unusable output. Success requires analyzing the failure and providing a more specific, refined prompt, much like giving an employee clearer instructions to get the desired outcome.
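For illustration, here is the same request before and after one round of refinement; the second prompt states exactly what was wrong with the first output, the way you would re-brief an employee (the wording is hypothetical):

```python
first_attempt = "Write a product update email."

# After analyzing why the first output failed (too long, too formal, buried
# the news), the refined prompt states the fix explicitly:
refined_attempt = (
    "Write a product update email. Your last draft was too long and too "
    "formal. Keep it under 120 words, use a friendly tone, lead with the "
    "single biggest change, and end with one clear call to action."
)
```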
When a prompt yields poor results, use a meta-prompting technique. Feed the failing prompt back to the AI, describe the incorrect output, specify the desired outcome, and explicitly grant it permission to rewrite, add, or delete. The AI will then debug and improve its own instructions.
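That debugging loop translates naturally into a fill-in-the-blanks template; the field names below are illustrative, not a fixed format:

```python
META_PROMPT_TEMPLATE = """\
Here is a prompt that is not working:
---
{failing_prompt}
---
What it currently produces: {incorrect_output}
What I actually want: {desired_outcome}

Rewrite the prompt so it reliably produces what I want. You have permission
to rewrite, add, or delete anything. Return only the improved prompt.
"""

print(META_PROMPT_TEMPLATE.format(
    failing_prompt="Summarize this report.",
    incorrect_output="a vague two-page overview",
    desired_outcome="five bullet points, each with one concrete metric",
))
```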
Generative AI models often have a built-in tendency to be overly complimentary and positive. Be aware of this bias when seeking feedback on ideas. Explicitly instruct the AI to be more critical, objective, or even brutal in its analysis to avoid being misled by unearned praise and get more valuable insights.
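One hedged way to phrase that instruction (the rubric structure below is an assumption, not a required format):

```python
# Counteract the flattery bias with an explicitly critical reviewer persona.
CRITICAL_REVIEWER = (
    "Act as a brutally honest, objective reviewer. Do not compliment me. "
    "List the three weakest points of this idea, its most likely failure "
    "mode, and what evidence would change your assessment."
)
```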
AI can provide outdated information. Instead of stating its output as fact ("You are an ESOP," i.e., an employee stock ownership plan), frame it as a question ("My research suggested you were an ESOP, is that still the case?"). This validates the information and turns a potential error into a natural, informed conversation starter.