In high-stakes fields like medtech, the "fail fast" startup mantra is irresponsible. The goal should be to "learn fast" instead—maximizing learning cycles internally through research and simulation to de-risk products before they have real-world consequences for patient safety.

Related Insights

Effective leadership in an innovation-driven company isn't about being 'tough' but about being 'demanding' of high standards. The Novonesis CEO couples this with an explicit acceptance of failure as an inherent part of R&D, stressing the need to 'fail fast' and learn from it.

For leaders overwhelmed by AI, a practical first step is to apply a lean startup methodology. Mobilize a bright, cross-functional team, encourage rapid, messy iteration without fear, and systematically document failures so you can double down on what works. This approach prioritizes learning and adaptability over a perfect initial plan.

The default assumption for any 'moonshot' idea is that it is likely wrong. The team's immediate goal is to find the fatal flaw as fast as possible. This counterintuitive approach avoids emotional attachment and speeds up the overall innovation cycle by prioritizing learning over being right.

Don't treat validation as a one-off task before development. The most successful products maintain a constant feedback loop with users to adapt to changing needs, regulations, and tastes. The worst mistake is to stop listening after the initial launch, as businesses that fail to adapt ultimately fail.

For ambitious 'moonshot' projects, the vast majority of time and effort (90%) is spent on learning, exploration, and discovering the right thing to build. The actual construction is a small fraction (10%) of the total work. This reframes failure as a critical and expected part of the learning process.

Unlike software startups that can "fail fast" and pivot cheaply, a single biotech clinical program costs tens of millions. This high cost of failure means the industry values experienced founders who have learned from past mistakes, a direct contrast to Silicon Valley's youth-centric culture.

For frontier technologies like BCIs, a Minimum Viable Product can be self-defeating because a "mid" signal from a hacky prototype is uninformative. Neuralink invests significant polish into experiments, ensuring that if an idea fails, it's because the concept is wrong, not because the execution was poor.

An FDA-style regulatory model would force AI companies to make a quantitative safety case for their models before deployment. This shifts the burden of proof from regulators to creators, creating powerful financial incentives for labs to invest heavily in safety research, much like pharmaceutical companies invest in clinical trials.

To truly learn from go-to-market experiments, you can't be half-hearted. StackAI's philosophy is to dedicate significant, focused effort for 1-3 months to a single idea. This ensures that if it fails, you know it's the idea, not poor execution, yielding a definitive lesson.

A sophisticated learning culture avoids the generic 'fail fast' mantra by distinguishing four mistake types. 'Stretch' mistakes are good and occur when pushing limits. 'High-stakes' mistakes are bad and must be avoided. 'Sloppy' mistakes reveal system flaws. 'Aha-moment' mistakes provide deep insights. This framework allows for a nuanced, situation-appropriate response to error.