AI Failures Will Be Judged Against Perfection, Not The Flawed Human Alternative

Dr. Wachter warns that public perception will unfairly judge AI errors against an impossible standard of perfection, not against the flawed human alternative. A single AI mistake will be magnified, overshadowing its superior overall safety record and risking a backlash that stalls progress in healthcare.

Related Insights

When deploying AI tools, especially in sales, users exhibit no patience for mistakes. While a human making an error receives coaching and a second chance, an AI's single failure can cause users to abandon the tool permanently due to a complete loss of trust.

When discussing AI risks like hallucinations, former Chief Justice McCormack argues the proper comparison isn't a perfect system, but the existing human one. Humans get tired, carry biases, and make mistakes. The question isn't whether AI is flawless, but whether it's an improvement over that error-prone reality.

The benchmark for AI performance shouldn't be perfection, but the existing human alternative. In many contexts, like medical reporting or driving, imperfect AI can still be vastly superior to error-prone humans. The choice is often between a flawed AI and an even more flawed human system, or no system at all.

A key risk for AI in healthcare is its tendency to present information with unwarranted certainty, like an "overconfident intern who doesn't know what they don't know." To be safe, these systems must display "calibrated uncertainty," show their sources, and have clear accountability frameworks for when they are inevitably wrong.

Once AI surpasses human capability in critical domains, social and competitive pressures will frame human involvement as a dangerous liability. A hospital using a human surgeon over a superior AI will be seen as irresponsible, accelerating human removal from all important decision loops.

OpenAI's Chairman advises against waiting for perfect AI. Instead, companies should treat AI like human staff—fallible but manageable. The key is implementing robust technical and procedural controls to detect and remediate inevitable errors, turning an unsolvable "science problem" into a solvable "engineering problem."

Society holds AI in healthcare to a much higher standard than human practitioners, similar to the scrutiny faced by driverless cars. We demand AI be 10x better, not just marginally better, which slows adoption. This means AI will first roll out in controlled use cases or as a human-assisting tool, not for full autonomy.

The benchmark for AI reliability isn't 100% perfection. It's simply being better than the inconsistent, error-prone humans it augments. Since human error is the root cause of most critical failures (like cyber breaches), this is an achievable and highly valuable standard.

Both humans and AI make mistakes. Instead of claiming AI is perfect, a more effective argument in regulated fields is that AI makes fewer mistakes and helps humans catch their own errors more quickly. This shifts the focus from perfection to improved safety and efficiency.

Dr. Jordan Schlain frames AI in healthcare as fundamentally different from typical tech development. The guiding principle must shift from Silicon Valley's "move fast and break things" to "move fast and not harm people." This is because healthcare is a "land of small errors and big consequences," requiring robust failure plans and accountability.
