Unlike weak-link problems (e.g., food safety) where you fix the worst part, science is a strong-link problem where progress depends entirely on the best outcomes. The optimal strategy is therefore to increase variance by funding more weird, high-risk ideas.
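The strong-link claim above is ultimately statistical: if only the best outcome matters, raising variance raises the expected best result even when the average quality of ideas stays the same. A minimal simulation sketch (the function name and parameters are illustrative, not from the source):

```python
import random

random.seed(0)

def best_outcome(n_projects, spread, trials=2000):
    """Average best payoff across a funded portfolio.

    Each project's payoff is drawn from a normal distribution with the
    same mean (1.0); only `spread` (the standard deviation) differs.
    """
    total = 0.0
    for _ in range(trials):
        total += max(random.gauss(1.0, spread) for _ in range(n_projects))
    return total / trials

safe = best_outcome(100, spread=0.1)   # conventional, low-variance bets
weird = best_outcome(100, spread=1.0)  # high-risk, high-variance bets
print(safe < weird)  # the best result improves with variance
```

Note that the mean payoff is identical in both portfolios; only the tail changes, which is exactly what a strong-link problem rewards.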

Related Insights

Contrary to conventional wisdom, pursuing massive, hard-to-solve ideas makes it easier to attract capital and top talent. Investors prefer the binary risk-reward of huge outcomes, and the best employees want to work on world-changing problems, not incremental improvements like a new calendar app.

An innovation arm's performance shouldn't be judged by its "batting average." If a team pursues truly ambitious, "exotic" opportunities, a high failure rate is an expected and even positive signal. An overly high success rate suggests the team is only taking safe, incremental bets, defeating its purpose.

The AI safety community acknowledges it lacks all the ideas needed to ensure a safe transition to AGI. This creates an imperative to fund "neglected approaches"—unconventional, creative, and sometimes "weird" research that falls outside the current mainstream paradigms but may hold the key to novel solutions.

Top-down mandates from authorities have a history of being flawed, from the food pyramid to the FDA's stance on opioids. True progress emerges not from command-and-control edicts but from a decentralized system that allows for thousands of experiments. Protecting the freedom for most to fail is what allows a few breakthrough ideas to succeed and benefit everyone.

Wet lab experiments are slow and expensive, and limited grant funding compounds the pressure, so scientists tend to avoid risky but groundbreaking hypotheses in favor of safer, incremental ones. AI models can computationally generate and test those riskier, "home run" ideas before any lab resources are committed. This de-risking makes scientists more willing to pursue the ambitious, breakthrough concepts that could transform their fields.

In venture capital, the potential return from a single massive winner (1000x) is so asymmetric that it dwarfs the cost of multiple failures (1x loss). This reality dictates that the primary focus should be on identifying and capturing huge winners, making the failure to invest in one a far greater error than investing in a company that goes to zero.
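The asymmetry is easy to see with back-of-the-envelope arithmetic. Assuming a hypothetical fund of 20 equal-sized checks (the portfolio size and the 2x consolation outcomes below are illustrative; only the 1000x/1x figures come from the text above):

```python
checks = 20  # equal-sized investments in a hypothetical fund

# One 1000x winner; every other company goes to zero.
with_winner = (1 * 1000 + 19 * 0) / checks

# Miss the winner, but land five respectable 2x outcomes.
without_winner = (5 * 2 + 15 * 0) / checks

print(with_winner)     # 50.0 — the fund returns 50x overall
print(without_winner)  # 0.5 — the fund loses half its capital
```

A single miss on the 1000x company costs roughly 50x the fund, while any individual zero costs at most 1/20 of it, which is why the error of omission dominates.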

A successful early-stage strategy involves actively maximizing specific risks—product, market, and timing—to pursue transformative ideas. Conversely, risks related to capital efficiency and team quality should be minimized. This framework pushes a firm to take big, non-obvious swings instead of settling for safer, incremental bets.

Government funders like the NIH are inherently risk-averse. The ideal model is for philanthropists to provide initial capital for high-risk, transformative studies. Once a concept is proven and "de-risked," government bodies can then fund the larger-scale, long-term research.

Professionalizing science creates competent specialists but stifles genius. It enforces a narrow, risk-averse culture that raises average quality (the floor) but prevents the polymathic, weird explorations that lead to breakthroughs (the ceiling).