Society holds AI in healthcare to a much higher standard than human practitioners, similar to the scrutiny faced by driverless cars. We demand that AI be 10x better, not just marginally better, and that demand slows adoption. This means AI will first roll out in controlled use cases or as a human-assisting tool, not with full autonomy.

Related Insights

Consumers can easily re-prompt a chatbot, but enterprises cannot afford mistakes like shutting down the wrong server. This high-stakes environment means AI agents won't be given autonomy for critical tasks until they can guarantee near-perfect precision and accuracy, creating a major barrier to adoption.

Avoid deploying AI directly into a fully autonomous role for critical applications. Instead, begin with a human-in-the-loop, advisory function. Only after the system has proven its reliability in a real-world environment should its autonomy be gradually increased, moving from supervised to unsupervised operation.
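The staged rollout described above can be sketched as a simple gating pattern. Everything here (the `Autonomy` levels, the `Action` type, the `human_approves` callback) is an illustrative assumption, not a real API; the point is that critical actions stay behind a human approval step until the system has earned unsupervised operation.

```python
"""Minimal sketch of a human-in-the-loop deployment gate.

All names (Autonomy, Action, human_approves) are hypothetical,
chosen only to illustrate the advisory -> supervised -> unsupervised
progression described in the text.
"""
from dataclasses import dataclass
from enum import Enum


class Autonomy(Enum):
    ADVISORY = 1      # AI suggests; a human performs the action
    SUPERVISED = 2    # AI acts, but every action needs human approval
    UNSUPERVISED = 3  # AI acts alone on routine work; actions are audited


@dataclass
class Action:
    description: str
    critical: bool  # e.g. "shut down server X" would be critical


def execute(action: Action, level: Autonomy, human_approves) -> str:
    """Route an AI-proposed action based on the current autonomy level.

    `human_approves` stands in for a review step (e.g. a ticket queue)
    and returns True/False. Critical actions require human approval
    even at the unsupervised level.
    """
    if level is Autonomy.ADVISORY:
        return f"SUGGESTED: {action.description}"
    if level is Autonomy.SUPERVISED or action.critical:
        if human_approves(action):
            return f"EXECUTED (approved): {action.description}"
        return f"BLOCKED: {action.description}"
    return f"EXECUTED (autonomous): {action.description}"
```

A deployment would start every workload at `ADVISORY` and only promote it after its real-world track record justifies the next level.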

Hospitals are adopting a phased approach to AI. They start with commercially ready, low-risk, non-clinical applications like revenue cycle management (RCM). This allows them to build an internal 'AI muscle'—developing frameworks and expertise—before expanding into more sensitive, higher-stakes areas like patient engagement and clinical decision support.

Despite hype in areas like self-driving cars and medical diagnosis, AI has not replaced expert human judgment. Its most successful application is as a powerful assistant that augments human experts, who still make the final, critical decisions. This is a key distinction for scoping AI products.

An effective AI strategy in healthcare is not limited to consumer-facing assistants. A critical focus is building tools to augment the clinicians themselves. An AI 'assistant' for doctors to surface information and guide decisions scales expertise and improves care quality from the inside out.

A technology like Waymo's self-driving cars could be statistically safer than human drivers yet still be rejected by the public. Society is unwilling to accept thousands of deaths directly caused by a single corporate algorithm, even if it represents a net improvement over the chaotic, decentralized risk of human drivers.

The evolution of Tesla's Full Self-Driving offers a clear parallel for enterprise AI adoption. Initially, human oversight and frequent "disengagements" (interventions) will be necessary. As AI agents learn, the rate of disengagement will drop, signaling a shift from a co-pilot tool to a fully autonomous worker in specific professional domains.
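The disengagement-rate signal above can be made concrete with a small tracker: measure interventions over a rolling window and promote the agent only once the rate stays below a threshold. The window size and threshold here are made-up numbers for illustration; real deployments would tune both per domain.

```python
"""Sketch of disengagement-rate tracking, by analogy with FSD telemetry.

The window (1000 tasks) and threshold (1% interventions) are
illustrative assumptions, not benchmarks from any real system.
"""
from collections import deque


class DisengagementTracker:
    def __init__(self, window: int = 1000, threshold: float = 0.01):
        # True = a human had to intervene on that task ("disengagement")
        self.outcomes = deque(maxlen=window)
        self.threshold = threshold

    def record(self, intervened: bool) -> None:
        self.outcomes.append(intervened)

    def rate(self) -> float:
        if not self.outcomes:
            return 1.0  # no evidence yet: assume constant oversight needed
        return sum(self.outcomes) / len(self.outcomes)

    def ready_for_autonomy(self) -> bool:
        # Require a full window of evidence before promoting the agent,
        # so a short lucky streak cannot trigger unsupervised operation.
        return (len(self.outcomes) == self.outcomes.maxlen
                and self.rate() < self.threshold)
```

The key design choice is demanding a complete window of recent evidence, mirroring how driver-assist systems are judged on sustained miles-per-disengagement rather than isolated good runs.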

Despite rapid software advances like deep learning, the deployment of self-driving cars was a 20-year process because it had to integrate with the mature automotive industry's supply chains, infrastructure, and business models. This serves as a reminder that AI's real-world impact is often constrained by the readiness of the sectors it aims to disrupt.

The public holds new technologies to a much higher safety standard than human performance. Waymo could deploy cars that are statistically safer than human drivers, but society would not accept them killing tens of thousands of people annually, even if that represented a net improvement. This demonstrates the need for near-perfection in high-stakes tech launches.

Dr. Jordan Schlain frames AI in healthcare as fundamentally different from typical tech development. The guiding principle must shift from Silicon Valley's "move fast and break things" to "move fast and not harm people." This is because healthcare is a "land of small errors and big consequences," requiring robust failure plans and accountability.

Healthcare AI Adoption Mirrors Driverless Cars, Facing a 10x Higher Safety Standard | RiffOn