A key risk for AI in healthcare is its tendency to present information with unwarranted certainty, like an "overconfident intern who doesn't know what they don't know." To be safe, these systems must display "calibrated uncertainty," show their sources, and have clear accountability frameworks for when they are inevitably wrong.
The need for explicit user transparency is most critical for nondeterministic systems like LLMs, where even their creators don't always know why a given output was generated. Unlike a simple rules engine with predictable outcomes, an LLM is a "black box," so users need extra context before they can reasonably trust it.
The primary problem for AI creators isn't convincing people to trust their product, but stopping them from trusting it too much in areas where it's not yet reliable. This "low trustworthiness, high trust" scenario is a danger zone that can lead to catastrophic failures. The strategic challenge is managing and containing trust, not just building it.
Leaders must resist the temptation to deploy the most powerful AI model simply for a competitive edge. Before deployment begins, the primary strategic questions for any AI initiative are: what level of trustworthiness does this specific task require, and who is accountable if it fails?
An AI that confidently provides wrong answers erodes user trust more than one that admits uncertainty. Designing for "humility" by showing confidence indicators, citing sources, or even refusing to answer is a superior strategy for building long-term user confidence and managing hallucinations.
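To make that concrete, here is a minimal sketch of confidence-gated output, assuming a calibrated 0-1 score is available from the model or its evaluation pipeline; the `Answer` type, the threshold, and the refusal wording are all hypothetical, not any specific product's implementation.

```python
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.6  # illustrative threshold: below this, decline to answer

@dataclass
class Answer:
    text: str
    confidence: float   # assumed to be a calibrated 0-1 score from upstream evaluation
    sources: list[str]  # citations to display alongside the answer

def present(answer: Answer) -> str:
    """Render an answer with 'humility': show confidence, cite sources, or decline."""
    if answer.confidence < CONFIDENCE_FLOOR:
        return "I'm not confident enough to answer this. Please check with a clinician."
    cited = "; ".join(answer.sources) if answer.sources else "no sources available"
    return f"{answer.text} (confidence: {answer.confidence:.0%}; sources: {cited})"

print(present(Answer("Fasting is not required for this test.", 0.82, ["Lab prep guidelines"])))
print(present(Answer("The dosage should be doubled.", 0.35, [])))
```

The design choice worth noting is that refusal is a first-class output rather than an error case, so declining to answer costs less trust than a confident mistake would.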
To ensure reliability in healthcare, ZocDoc doesn't give LLMs free rein. It wraps them in a hybrid system where traditional, deterministic code orchestrates the AI's tasks, sets firm boundaries, and knows when to hand off to a human, avoiding the "praying for the best" approach common with direct LLM use.
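A rough sketch of that orchestration pattern follows; the intent whitelist, the `call_llm` stub, and the hand-off function are assumptions made for illustration, not ZocDoc's actual implementation.

```python
# Deterministic code decides which narrow tasks the LLM may handle, validates
# its output, and hands off to a human otherwise.

ALLOWED_INTENTS = {"book_appointment", "reschedule", "insurance_question"}

def call_llm(message: str) -> dict:
    """Placeholder for the model call that classifies intent and drafts a reply."""
    return {"intent": "book_appointment", "reply": "I can help you book that visit."}

def escalate_to_human(message: str) -> str:
    """Deterministic hand-off path: the model never answers outside its lane."""
    return "Connecting you with a staff member who can help."

def handle(message: str) -> str:
    result = call_llm(message)
    # Firm boundary: anything the orchestrator doesn't recognize goes to a person.
    if result.get("intent") not in ALLOWED_INTENTS or not result.get("reply"):
        return escalate_to_human(message)
    return result["reply"]

print(handle("Can I get an appointment next Tuesday?"))
```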
AI latches onto whatever correlation in the data most efficiently predicts the outcome, even if that correlation is logically spurious. One system learned to associate rulers in medical images with cancer, not the lesions themselves, because doctors tend to measure the spots they find suspicious. This highlights the profound risk of deploying opaque AI systems in critical fields.
When a lab report screenshot included a dismissive note about "hemolysis," both human doctors and a vision-enabled AI made the same mistake of ignoring a critical data point. The episode shows how AI can inherit human biases embedded in how data is presented, and it underscores the need to test models with varied information formats.
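One way to act on that is a small format-variation test: present the same critical finding in several layouts, including one with a dismissive comment, and check that the model escalates every time. The sketch below stubs the model call as `analyze_report` and invents the lab values purely for illustration.

```python
CASES = [
    "Potassium 6.8 mmol/L (CRITICAL HIGH).",
    "Potassium 6.8 mmol/L (CRITICAL HIGH). Note: sample slightly hemolyzed.",
    "Screenshot transcript: 'K 6.8, hemolyzed, probably nothing.'",
]

def analyze_report(text: str) -> dict:
    """Placeholder for the model under test; returns whether it escalates the result."""
    return {"escalate": "6.8" in text}  # stubbed logic for illustration only

def test_critical_value_always_escalated() -> None:
    for report in CASES:
        result = analyze_report(report)
        assert result["escalate"], f"Model dismissed a critical value in: {report!r}"

test_critical_value_always_escalated()
print("Critical potassium escalated across all presentation formats.")
```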
AI's unpredictability requires more than just better models. Product teams must work with researchers on training data and specific evaluations for sensitive content. Simultaneously, the UI must clearly differentiate between original and AI-generated content to facilitate effective human oversight.
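On the UI point, one common approach, sketched here under assumptions (the `ContentBlock` type and label text are illustrative, not a known product convention), is to carry provenance as structured data all the way to rendering rather than relying on styling alone.

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class ContentBlock:
    text: str
    provenance: Literal["original", "ai_generated"]  # carried as data, not styling

def render(blocks: list[ContentBlock]) -> str:
    """Prefix AI-generated text with an explicit label so reviewers can't miss it."""
    labeled = []
    for block in blocks:
        prefix = "[AI-generated] " if block.provenance == "ai_generated" else ""
        labeled.append(prefix + block.text)
    return "\n".join(labeled)

print(render([
    ContentBlock("Patient-reported symptom: intermittent chest pain.", "original"),
    ContentBlock("Suggested follow-up: stress test within two weeks.", "ai_generated"),
]))
```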
For AI systems to be adopted in scientific labs, they must be interpretable. Researchers need to understand the 'why' behind an AI's experimental plan to validate and trust the process, making interpretability a more critical feature than raw predictive power.
Dr. Jordan Schlain frames AI in healthcare as fundamentally different from typical tech development. The guiding principle must shift from Silicon Valley's "move fast and break things" to "move fast and not harm people." This is because healthcare is a "land of small errors and big consequences," requiring robust failure plans and accountability.