Dr. Jordan Schlain frames AI in healthcare as fundamentally different from typical tech development. The guiding principle must shift from Silicon Valley's "move fast and break things" to "move fast and not harm people." This is because healthcare is a "land of small errors and big consequences," requiring robust failure plans and accountability.

Related Insights

Leaders must resist the temptation to deploy the most powerful AI model simply for a competitive edge. Before deployment begins, the primary strategic questions for any AI initiative are: what level of trustworthiness does this specific task require, and who is accountable if it fails?

The most effective AI strategy focuses on "micro workflows": small, discrete tasks like summarizing patient data. By optimizing these countless small steps, AI can make decision-makers "a hundred-fold more productive," delivering massive cumulative value without relying on a single, high-risk autonomous solution.
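
To make the micro-workflow idea concrete, here is a minimal Python sketch (not from the source) in which each task is a small, named step with an accountable human owner. The `summarize_labs` stub stands in for whatever validated summarization service an organization would actually call.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative micro-workflow registry: each task is small, discrete,
# and has an accountable human owner, per the insight above.
@dataclass
class MicroWorkflow:
    name: str                  # e.g. "summarize-latest-labs"
    owner: str                 # the human accountable if it fails
    run: Callable[[str], str]  # the small AI-assisted step itself

def summarize_labs(chart_text: str) -> str:
    # Stub: a real step would call a validated summarization model.
    return chart_text[:120] + "..."

steps = [MicroWorkflow("summarize-latest-labs", "attending physician", summarize_labs)]

for step in steps:
    draft = step.run("Na 138, K 4.1, Cr 0.9 ... (full chart text)")
    print(f"[{step.name}] draft for review by {step.owner}: {draft}")
```

The design choice worth noting: each step produces a draft for human review, so productivity gains compound without any single step becoming an unaccountable autonomous decision.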

Hospitals are adopting a phased approach to AI. They start with commercially ready, low-risk, non-clinical applications like revenue cycle management (RCM). This allows them to build an internal "AI muscle," developing frameworks and expertise, before expanding into more sensitive, higher-stakes areas like patient engagement and clinical decision support.

Software engineering is a prime target for AI because code provides instant feedback (it works or it doesn't). In contrast, fields like medicine have slow, expensive feedback loops (e.g., clinical trials), which throttles the pace of AI-driven iteration and adoption. This heuristic predicts where AI will make the fastest inroads.

An effective AI strategy in healthcare is not limited to consumer-facing assistants. A critical focus is building tools to augment clinicians themselves: an AI "assistant" that surfaces information and guides decisions for doctors scales expertise and improves care quality from the inside out.

A key risk for AI in healthcare is its tendency to present information with unwarranted certainty, like an "overconfident intern who doesn't know what they don't know." To be safe, these systems must display "calibrated uncertainty," show their sources, and have clear accountability frameworks for when they are inevitably wrong.
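
"Calibrated uncertainty" is measurable. The sketch below computes a standard expected calibration error: the gap between a system's stated confidence and its observed accuracy, averaged over confidence bins. The numbers are made up purely to illustrate the "overconfident intern" pattern.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Gap between stated confidence and observed accuracy,
    weighted by how many predictions fall in each confidence bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(confidences[in_bin].mean() - correct[in_bin].mean())
            ece += in_bin.mean() * gap  # weight by bin occupancy
    return ece

# An "overconfident intern": claims 95% confidence, right only 60% of the time.
conf = [0.95, 0.95, 0.95, 0.95, 0.95]
hits = [1, 1, 1, 0, 0]
print(expected_calibration_error(conf, hits))  # 0.35 -> poorly calibrated
```

A well-calibrated system would score near zero; a large gap like this is exactly the failure mode the insight warns about.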

An "AI arms race" is underway where stakeholders apply AI to broken, adversarial processes. The true transformation comes from reinventing these workflows entirely, such as moving to real-time payment adjudication where trust is pre-established, thus eliminating the core conflict that AI is currently used to fight over.

Chronic disease patients face a cascade of interconnected problems: pre-authorizations, pharmacy stockouts, and incomprehensible insurance rules. AI's potential lies in acting as an intelligent agent to navigate this complex, fragmented system on behalf of the patient, reducing waste and improving outcomes.
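
A hedged sketch of that agent pattern, with stubbed checks standing in for payer and pharmacy APIs: the agent's job is to detect which link in the cascade is broken and queue the corrective action. All names here are illustrative, and a real system would escalate every failure to a human.

```python
# Hypothetical navigation agent for a chronic-disease patient.
def prior_auth_approved(patient_id: str) -> bool:
    return False  # stub: pretend the prior authorization is still pending

def pharmacy_has_stock(drug: str) -> bool:
    return True   # stub: pretend the drug is available

def navigate(patient_id: str, drug: str) -> list[str]:
    actions = []
    if not prior_auth_approved(patient_id):
        actions.append("resubmit prior-authorization paperwork")
    if not pharmacy_has_stock(drug):
        actions.append("find an alternate pharmacy with stock")
    return actions or ["no action needed"]

print(navigate("patient-123", "metformin"))
# -> ['resubmit prior-authorization paperwork']
```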

Unlike the top-down, regulated rollout of EHRs, the rapid uptake of AI in healthcare is an organic, bottom-up movement. It's driven by frontline workers like pharmacists who face critical staffing shortages and need tools to manage overwhelming workloads, pulling technology in out of necessity.

In high-stakes fields like medtech, the "fail fast" startup mantra is irresponsible. The goal should be to "learn fast" instead—maximizing learning cycles internally through research and simulation to de-risk products before they have real-world consequences for patient safety.
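
One cheap "learn fast" cycle is simulation: estimate a failure rate computationally before any patient is exposed. The toy Monte Carlo below assumes a made-up 2% per-use fault probability purely to show the loop; real de-risking would use validated failure models.

```python
import random

# Toy "learn fast" cycle: measure a fault rate in simulation, iterate on
# the design, and repeat -- all before real-world exposure.
def simulated_use(fault_prob: float = 0.02) -> bool:
    return random.random() < fault_prob  # True = a simulated fault

random.seed(0)
trials = 100_000
faults = sum(simulated_use() for _ in range(trials))
print(f"simulated fault rate: {faults / trials:.4%}")  # iterate cheaply here
```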