When discussing AI risks like hallucinations, former Chief Justice McCormack argues the proper comparison isn't a perfect system but the existing human one. Humans get tired, carry biases, and make mistakes. The question isn't whether AI is flawless, but whether it improves on that error-prone reality.

Related Insights

Demis Hassabis likens current AI models to someone blurting out the first thought they have. To combat hallucinations, models must develop a capacity for 'thinking'—pausing to re-evaluate and check their intended output before delivering it. This reflective step is crucial for achieving true reasoning and reliability.
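
A minimal sketch of what such a reflective step might look like in application code, assuming a hypothetical generate() stand-in for a real model call (the prompts and function names here are illustrative assumptions, not Hassabis's or DeepMind's actual method):

```python
# Illustrative draft -> critique -> revise loop (a sketch, not any vendor's API).
# `generate` stands in for a call to whatever language model you use.

def generate(prompt: str) -> str:
    """Placeholder for a real model call; returns canned text so the sketch runs."""
    return f"[model response to: {prompt[:40]}...]"

def answer_with_reflection(question: str, max_rounds: int = 2) -> str:
    draft = generate(f"Answer the question: {question}")
    for _ in range(max_rounds):
        # Ask the model to check its own intended output before delivering it.
        critique = generate(
            f"Question: {question}\nDraft answer: {draft}\n"
            "List any factual errors or unsupported claims, or reply 'OK'."
        )
        if critique.strip() == "OK":
            break
        # Revise the draft in light of the critique.
        draft = generate(
            f"Question: {question}\nDraft: {draft}\nCritique: {critique}\n"
            "Rewrite the answer, fixing the issues above."
        )
    return draft

if __name__ == "__main__":
    print(answer_with_reflection("When was the first transatlantic telegraph cable laid?"))
```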

AI is engineered to eliminate errors, which is precisely its limitation. True human creativity stems from our "bugs"—our quirks, emotions, misinterpretations, and mistakes. This ability to be imperfect is what will continue to separate human ingenuity from artificial intelligence.

A key challenge in AI adoption is not technological limitation but human over-reliance. 'Automation bias' occurs when people accept AI outputs without critical evaluation. This failure to scrutinize AI suggestions can lead to significant errors that a human check would have caught, making user training and verification processes essential.

While AI can inherit biases from training data, those datasets can be audited, benchmarked, and corrected. In contrast, uncovering and remedying the complex cognitive biases of a human judge is far more difficult and less systematic, making algorithmic fairness a potentially more solvable problem.
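
As an illustration of the kind of audit the insight alludes to, the sketch below computes favorable-outcome rates per group on toy data and flags large gaps; the records, group labels, and 80% threshold are assumptions made for the example, not a prescribed fairness standard.

```python
# Illustrative bias audit: compare favorable-outcome rates across groups.
from collections import defaultdict

records = [  # (group, favorable_outcome) -- toy data for the sketch
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0),
]

totals, favorable = defaultdict(int), defaultdict(int)
for group, outcome in records:
    totals[group] += 1
    favorable[group] += outcome

rates = {g: favorable[g] / totals[g] for g in totals}
print("favorable-outcome rate by group:", rates)

# Disparate-impact style check: flag any group whose rate falls far below the best.
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"group {group} flagged: {rate:.2f} vs best {best:.2f}")
```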

The benchmark for AI performance shouldn't be perfection, but the existing human alternative. In many contexts, like medical reporting or driving, imperfect AI can still be vastly superior to error-prone humans. The choice is often between a flawed AI and an even more flawed human system, or no system at all.

AI's occasional errors ('hallucinations') should be understood as a characteristic of a new, creative type of computer, not a simple flaw. Users must work with it as they would a talented but fallible human: leveraging its creativity while tolerating its occasional incorrectness and using its capacity for self-critique.

Don't blindly trust AI. The correct mental model is to view it as a super-smart intern fresh out of school. It has vast knowledge but no real-world experience, so its work requires constant verification, code reviews, and a human-in-the-loop process to catch errors.

OpenAI's Chairman advises against waiting for perfect AI. Instead, companies should treat AI like human staff—fallible but manageable. The key is implementing robust technical and procedural controls to detect and remediate inevitable errors, turning an unsolvable "science problem" into a solvable "engineering problem."
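
One hedged sketch of such a control, assuming a hypothetical model_call() and a task-specific validity check (none of this is OpenAI's actual tooling): validate each output, retry a bounded number of times, and escalate to a human reviewer when validation keeps failing.

```python
# Illustrative "engineering problem" control loop: validate an AI output against
# explicit checks, retry on failure, and escalate to a human if it still fails.
import json

def model_call(prompt: str) -> str:
    """Placeholder for a real model call; returns canned JSON so the sketch runs."""
    return '{"invoice_total": 1200, "currency": "USD"}'

def validate(output: str) -> bool:
    """Example procedural control: output must be JSON with the fields we expect."""
    try:
        data = json.loads(output)
    except json.JSONDecodeError:
        return False
    return isinstance(data.get("invoice_total"), (int, float)) and "currency" in data

def run_with_controls(prompt: str, max_retries: int = 2) -> str:
    for _ in range(max_retries + 1):
        output = model_call(prompt)
        if validate(output):
            return output  # passed the checks, safe to hand downstream
    raise RuntimeError("Output failed validation; route to a human reviewer.")

if __name__ == "__main__":
    print(run_with_controls("Extract the invoice total from the attached document."))
```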

Citing high rates of appellate court reversals and a 3-5% error rate in criminal convictions revealed by DNA evidence, former Chief Justice McCormack argues the human-led justice system is not as reliable as perceived. This fallibility creates a clear opening for AI to improve accuracy and consistency.

The benchmark for AI reliability isn't 100% perfection. It's simply being better than the inconsistent, error-prone humans it augments. Since human error is the root cause of most critical failures (like cyber breaches), this is an achievable and highly valuable standard.