Don't dismiss a model because its output is a wide, uncertain distribution. This is often the correct answer, as it accurately reflects the state of knowledge and prevents acting on a false sense of certainty from intuition. The model's value is in defining the bounds of what's possible.
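A minimal sketch of this idea: instead of a single point forecast, propagate uncertain inputs through a toy projection and report a range. The growth band, horizon, and base value below are all illustrative assumptions, not numbers from any real model.

```python
import random

def project_outcome(n_sims=100_000, seed=0):
    """Monte Carlo sketch: propagate an uncertain input and report an
    interval, not a point estimate. All parameters are illustrative."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_sims):
        growth = rng.uniform(-0.05, 0.25)  # assumed plausible annual growth band
        results.append(100 * (1 + growth) ** 5)  # 5-year projection from base 100
    results.sort()
    lo = results[int(0.05 * n_sims)]  # 5th percentile
    hi = results[int(0.95 * n_sims)]  # 95th percentile
    return lo, hi
```

The wide interval this returns is not a failure of the model; it is the model accurately saying "here is the span of outcomes consistent with what we know."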
An AI that confidently provides wrong answers erodes user trust more than one that admits uncertainty. Designing for "humility" by showing confidence indicators, citing sources, or even refusing to answer is a superior strategy for building long-term user confidence and managing hallucinations.
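One way to design for humility is a thin wrapper that refuses below a confidence threshold. This is a sketch under assumptions: `model` is any callable returning an answer plus a self-reported confidence in [0, 1], and the 0.75 threshold is an illustrative tuning knob, not a standard value.

```python
def answer_with_humility(question, model, threshold=0.75):
    """Gate a model's answer on its confidence: refuse rather than
    present a low-confidence answer as fact."""
    answer, confidence = model(question)
    if confidence < threshold:
        return {"answer": None,
                "note": f"Not confident enough to answer "
                        f"({confidence:.2f} < {threshold:.2f})."}
    return {"answer": answer, "confidence": confidence}
```

An explicit refusal with a stated confidence costs one interaction; a confident hallucination can cost the user's trust permanently.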
It's a fool's errand to predict specific trial results. A robust quantitative approach to biotech focuses on underlying drivers and base rates. It positions a portfolio so the random, unpredictable nature of trial events plays out favorably over time, guided by factors like valuation and specialist ownership.
A complex spreadsheet model is often brittle: a single questionable assumption can lead stakeholders to reject the entire analysis. To counter this, make key assumptions transparent and easily adjustable (e.g., via a slider), so skeptics can run their own sensitivity analysis instead of dismissing the model outright.
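The "slider" can be expressed in code as a sweep over an assumption's plausible range. The NPV model, cash flows, and discount rates below are all hypothetical placeholders for whatever assumption a stakeholder might contest.

```python
def npv(cash_flow, years, discount_rate):
    """Net present value of a flat annual cash flow (toy model)."""
    return sum(cash_flow / (1 + discount_rate) ** t for t in range(1, years + 1))

def sweep(assumption_values, model):
    """Show the output across an assumption's plausible range --
    the 'slider' in code form -- rather than defending one value."""
    return {v: model(v) for v in assumption_values}

results = sweep([0.06, 0.08, 0.10, 0.12],
                lambda r: npv(cash_flow=50, years=10, discount_rate=r))
```

A skeptic who disputes the 8% discount rate can now see the conclusion at 12% too; if the decision holds across the whole range, the questionable assumption no longer sinks the analysis.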
The researchers' failure-case analysis is itself a key contribution. Understanding why the model fails (ambiguous data, unusual inputs) defines a realistic scope of application and a clear roadmap for improvement, which practitioners find more useful than headline scores alone.
A key risk for AI in healthcare is its tendency to present information with unwarranted certainty, like an "overconfident intern who doesn't know what they don't know." To be safe, these systems must display "calibrated uncertainty," show their sources, and have clear accountability frameworks for when they are inevitably wrong.
When selecting foundational models, engineering teams often prioritize "taste" and predictable failure patterns over raw performance. A model that fails slightly more often but in a consistent, understandable way is more valuable and easier to build robust systems around than a top-performer with erratic, hard-to-debug errors.
Critics claim explicit models are too flawed for big decisions. But relying on intuition is just using an opaque, implicit model you can't scrutinize. An explicit model, even an imperfect one, makes its assumptions transparent and challengeable, which is superior to a "gut feeling" that cannot be dissected or debated.
In an experiment testing AI-generated hypotheses for macular degeneration, the hypothesis that succeeded in lab tests was not the one ranked highest by ophthalmologists. This suggests expert intuition is an unreliable predictor of success compared to systematic, AI-driven exploration and verification.
Moving from science to investing requires a critical mindset shift. Science seeks objective, repeatable truths, while investing involves making judgments about an unknowable future. Successful investors must use quantitative models as guides for judgment, not as sources of definitive answers.
It's tempting to think you can intuit the few factors a decision hinges on. This is often wrong. Complex systems have non-obvious leverage points. The process of building an explicit model reveals which variables have the most impact—a discovery you can't reliably make with intuition alone.
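One simple way an explicit model reveals leverage points is one-at-a-time sensitivity: perturb each input by the same relative amount and rank inputs by how much the output moves. The profit model and baseline numbers below are invented for illustration.

```python
def one_at_a_time_sensitivity(model, baseline, deltas):
    """Perturb each input by a relative delta, holding the others fixed,
    and rank inputs by the resulting change in output."""
    base_out = model(**baseline)
    impacts = {}
    for name, delta in deltas.items():
        perturbed = dict(baseline)
        perturbed[name] = baseline[name] * (1 + delta)
        impacts[name] = abs(model(**perturbed) - base_out)
    return sorted(impacts.items(), key=lambda kv: kv[1], reverse=True)

def profit(price, volume, unit_cost, overhead):
    return price * volume - unit_cost * volume - overhead

ranking = one_at_a_time_sensitivity(
    profit,
    baseline={"price": 10, "volume": 1000, "unit_cost": 6, "overhead": 2000},
    deltas={k: 0.10 for k in ("price", "volume", "unit_cost", "overhead")},
)
```

In this toy example a 10% price change moves profit far more than a 10% overhead change, a ranking that is easy to get wrong by intuition alone; real models surface the same kind of non-obvious ordering among their actual variables.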