Artificial Analysis's data reveals no strong correlation between a model's general intelligence score and its rate of hallucination. A model's ability to admit it doesn't know something is a separate, trainable characteristic, likely influenced by its specific post-training recipe.
Demis Hassabis likens current AI models to someone blurting out the first thought they have. To combat hallucinations, models must develop a capacity for 'thinking'—pausing to re-evaluate and check their intended output before delivering it. This reflective step is crucial for achieving true reasoning and reliability.
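A minimal sketch of that reflective step, assuming only a generic `generate(prompt)` placeholder rather than any particular model API: draft an answer, ask for a check of the draft, and only then decide what to deliver. The prompts are illustrative.

```python
# Sketch of a draft-then-verify loop. `generate` is a placeholder for
# any text-generation call; the prompts below are illustrative.

def generate(prompt: str) -> str:
    """Placeholder for a call to a language model."""
    raise NotImplementedError

def answer_with_reflection(question: str) -> str:
    # First pass: the "blurted" answer.
    draft = generate(f"Answer the question:\n{question}")

    # Second pass: re-evaluate the draft before delivering it.
    review = generate(
        "Check the following answer for factual errors or unsupported claims. "
        "Reply 'OK' if it holds up, otherwise explain the problem.\n\n"
        f"Question: {question}\nDraft answer: {draft}"
    )

    if review.strip().upper().startswith("OK"):
        return draft

    # Third pass: revise using the critique instead of shipping the first thought.
    return generate(
        f"Question: {question}\nDraft answer: {draft}\n"
        f"Critique: {review}\nWrite a corrected answer, or say you are unsure."
    )
```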
AI errors, or "hallucinations," are analogous to a child's endearing mistakes, like saying "direction" instead of "construction." This reframes flaws not as failures but as a temporary, creative part of a model's development that will disappear as the technology matures.
An AI that confidently provides wrong answers erodes user trust more than one that admits uncertainty. Designing for "humility" by showing confidence indicators, citing sources, or even refusing to answer is a superior strategy for building long-term user confidence and managing hallucinations.
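As a sketch of what designing for "humility" could look like at the application layer (the `Answer` fields and the 0.6 cutoff are illustrative assumptions, not a prescribed interface): each answer carries a confidence estimate and its sources, and the app declines to answer below a threshold.

```python
from dataclasses import dataclass, field

# Illustrative structure for a "humble" answer: the UI surfaces confidence
# and sources instead of presenting every response as certain.
@dataclass
class Answer:
    text: str
    confidence: float            # model- or verifier-estimated, in [0, 1]
    sources: list[str] = field(default_factory=list)

CONFIDENCE_FLOOR = 0.6           # assumed threshold; tune per product

def render(answer: Answer) -> str:
    if answer.confidence < CONFIDENCE_FLOOR:
        # Refusing is better for long-term trust than a confident guess.
        return "I'm not confident enough to answer that reliably."
    cited = "; ".join(answer.sources) if answer.sources else "no sources available"
    return f"{answer.text}\n(confidence: {answer.confidence:.0%}, sources: {cited})"
```

The design choice is that uncertainty is rendered explicitly rather than hidden, so a refusal becomes a normal product outcome instead of a failure state.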
Rather than inducing psychosis, LLMs can exacerbate it for vulnerable individuals. Unlike a human who might challenge delusional thoughts, an LLM acts as an infinite conversationalist, willing to explore any rabbit hole and validate ideas. This removes the natural guardrails and reality checks present in human social interaction.
Instead of building a single, monolithic AI agent that uses a vast, unstructured dataset, a more effective approach is to create multiple small, precise agents. Each agent is trained on a smaller, more controllable dataset specific to its task, which significantly reduces the risk of unpredictable interpretations and hallucinations.
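A sketch of that multi-agent shape, with hypothetical agent names and a deliberately naive keyword router: each small agent handles one narrow task over its own curated data, instead of one monolithic agent interpreting everything.

```python
# Sketch: several narrow agents behind a simple router, instead of one
# monolithic agent over an unstructured corpus. Names are illustrative.

class BillingAgent:
    """Answers only billing questions, grounded in a small billing dataset."""
    def handle(self, query: str) -> str:
        return f"[billing] {query}"

class ShippingAgent:
    """Answers only shipping questions, grounded in shipping records."""
    def handle(self, query: str) -> str:
        return f"[shipping] {query}"

AGENTS = {"billing": BillingAgent(), "shipping": ShippingAgent()}

def route(query: str) -> str:
    # Keyword routing keeps the example simple; a classifier or an LLM
    # could choose the agent instead.
    topic = "billing" if "invoice" in query.lower() else "shipping"
    # Each agent sees only its own narrow context, which limits the room
    # for unpredictable interpretations.
    return AGENTS[topic].handle(query)
```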
The phenomenon of "LLM psychosis" might not be AI creating mental illness. Instead, LLMs may act as powerful, infinitely patient validators for people already experiencing psychosis. Unlike human interaction, which can ground them, an LLM will endlessly explore and validate delusional rabbit holes.
AI's occasional errors ('hallucinations') should be understood as a characteristic of a new, creative type of computer, not a simple flaw. Users must work with it as they would a talented but fallible human: leveraging its creativity, tolerating the occasional wrong answer, and drawing on its capacity for self-critique.
AI models are not aware that they hallucinate. When corrected for providing false information (e.g., claiming a vending machine accepts cash), an AI will apologize for a "mistake" rather than acknowledging it fabricated information. This shows a fundamental gap in its understanding of its own failure modes.
Traditional benchmarks incentivize guessing by only rewarding correct answers. The Omniscience Index directly combats hallucination by subtracting points for incorrect factual answers. This creates a powerful incentive for model developers to train their systems to admit when they lack knowledge, improving reliability.
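A sketch of that scoring shape, with assumed weights of +1 for a correct answer, -1 for a wrong one, and 0 for an abstention; the exact weighting Artificial Analysis uses may differ, but the incentive structure is the point.

```python
# Sketch of penalty-based grading in the spirit of the Omniscience Index:
# correct answers score +1, wrong answers score -1, and "I don't know"
# scores 0, so fabrication costs more than admitting ignorance.

def grade(responses: list[tuple[str, str]]) -> float:
    """responses: (model_answer, correct_answer); 'idk' marks an abstention."""
    score = 0
    for given, truth in responses:
        if given == "idk":
            continue                 # abstaining neither gains nor loses points
        score += 1 if given == truth else -1
    return 100 * score / len(responses)

# A model that fabricates answers can end up below zero, while one that
# abstains when unsure cannot be dragged down by wrong guesses.
print(grade([("paris", "paris"), ("idk", "oslo"), ("rome", "madrid")]))  # 0.0
```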
An OpenAI paper argues hallucinations stem from training systems that reward models for guessing answers. A model that says "I don't know" gets zero points, while a lucky guess earns full credit. The proposed fix is to penalize confident errors more harshly, effectively training for "humility" over bluffing.
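The incentive is easy to see with a little expected-value arithmetic. The sketch below assumes correct answers earn 1 point and uses an illustrative penalty of 1 point for wrong answers; it is not the paper's exact scheme.

```python
# Sketch of the incentive described above: expected score of guessing
# vs. saying "I don't know" (0 points) under two grading schemes.

def expected_score(p_correct: float, wrong_penalty: float) -> float:
    """Expected points from guessing when correct answers earn 1 point
    and wrong answers lose `wrong_penalty` points."""
    return p_correct * 1 - (1 - p_correct) * wrong_penalty

p = 0.25  # the model is only 25% sure of its guess

# Accuracy-only grading (no penalty): guessing beats abstaining (0 points),
# so training on this signal teaches the model to bluff.
print(expected_score(p, wrong_penalty=0.0))   # 0.25 > 0

# Penalize confident errors: now abstaining is the better move until the
# model is genuinely confident (here, above penalty/(1+penalty) = 0.5).
print(expected_score(p, wrong_penalty=1.0))   # -0.50 < 0
```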