
A complex spreadsheet model is often brittle: a single questionable assumption can lead stakeholders to reject the entire analysis. To counter this, a model should make its key assumptions transparent and easy to adjust (for example, via a slider), so stakeholders can run their own sensitivity analysis instead of dismissing the model outright.
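A minimal sketch of the idea in Python: the model and all its numbers below are invented for illustration, but the pattern is the point. The questionable assumption (here, a churn rate) is exposed as a named parameter and swept across a range, rather than buried in a cell, so a skeptic can see how much it actually matters.

```python
# Hypothetical one-line revenue model; every name and number is illustrative.
def annual_profit(customers, churn_rate, price, unit_cost):
    """Profit from the customer base that survives churn."""
    retained = customers * (1 - churn_rate)
    return retained * (price - unit_cost)

# Expose the contested assumption and sweep it, slider-style:
baseline = dict(customers=10_000, price=30.0, unit_cost=12.0)
for churn_rate in (0.05, 0.10, 0.20, 0.30):
    profit = annual_profit(churn_rate=churn_rate, **baseline)
    print(f"churn={churn_rate:.0%}  profit=${profit:,.0f}")
```

A stakeholder who doubts the 10% churn figure can now see whether 30% churn changes the conclusion, instead of rejecting the whole model.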

Related Insights

Don't dismiss a model because its output is a wide, uncertain distribution. This is often the correct answer, as it accurately reflects the state of knowledge and prevents acting on a false sense of certainty from intuition. The model's value is in defining the bounds of what's possible.
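One way a wide distribution arises honestly is from a Monte Carlo sweep over uncertain inputs. The toy cost model below is invented, but it shows how a model with two genuinely uncertain inputs correctly reports a wide range of outcomes rather than a falsely precise point.

```python
import random

random.seed(0)  # deterministic for the example

# Hypothetical cost model with two uncertain inputs; names are illustrative.
def simulate_cost():
    units = random.uniform(800, 1200)        # demand is genuinely uncertain
    cost_per_unit = random.uniform(40, 90)   # so is the unit cost
    return units * cost_per_unit

samples = sorted(simulate_cost() for _ in range(10_000))
lo, hi = samples[500], samples[-501]  # central 90% of simulated outcomes
print(f"90% of simulated costs fall between {lo:,.0f} and {hi:,.0f}")
```

The resulting interval is wide because the inputs are wide; that is the model telling the truth about what is knowable, and it bounds the decision usefully.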

Analytical leaders often try to create one all-encompassing model for every scenario, resulting in a complex monstrosity. A better approach is a simple model for most cases, handling exceptions as one-offs. This avoids wasting months on a framework to solve a six-minute problem.

For data-heavy queries like financial projections, AI responses should transcend static text. The ideal output is an interactive visualization, such as a chart or graph, that the user can directly manipulate. This empowers them to explore scenarios and gain a deeper understanding of the data.

When modeling a complex issue like malaria bed nets, don't start with every variable. Begin with a simple model of the 5-6 core drivers. This makes the model easier to understand, hold in your head, and debug. Add complexity later, once the basic dynamics are established and validated.
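A core-drivers-only version of such a model fits in a few lines. The structure below follows the "5-6 drivers" advice; every parameter name and number is a placeholder, not real malaria data.

```python
# Minimal sketch of a bed-net impact model using only a handful of core
# drivers; all figures here are placeholders, not real estimates.
def lives_saved(nets_distributed, usage_rate, people_per_net,
                baseline_mortality, mortality_reduction):
    protected = nets_distributed * usage_rate * people_per_net
    return protected * baseline_mortality * mortality_reduction

estimate = lives_saved(
    nets_distributed=100_000,
    usage_rate=0.7,            # fraction of nets actually used
    people_per_net=1.8,        # people sleeping under each net
    baseline_mortality=0.003,  # annual mortality among the unprotected
    mortality_reduction=0.5,   # relative risk reduction from net use
)
print(f"Rough estimate: {estimate:,.0f} lives saved per year")
```

Because there are only five drivers, the whole causal chain fits in your head, and any disagreement maps to exactly one parameter. Seasonality, distribution logistics, and net decay can be layered on later, once this skeleton is validated.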

Critics claim explicit models for big decisions are flawed. However, relying on intuition is just using an opaque, implicit model you can't scrutinize. An explicit model, even if imperfect, makes assumptions transparent and challengeable, which is superior to a 'gut feeling' that cannot be dissected or debated.

To maintain objectivity in acquisitions, Bending Spoons separates assumption-setting from model output. The team rigorously debates and locks in all inputs without seeing the projected P&L or IRR. This prevents the common bias of tweaking assumptions to justify a desired outcome. The final model output is then treated as unchangeable.
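The discipline of locking inputs before seeing outputs can even be enforced mechanically. The sketch below is my own illustration of that idea, not Bending Spoons' actual tooling; the deal figures are invented, and the cash-yield ratio stands in for a real IRR calculation.

```python
from types import MappingProxyType

# Debate and freeze the assumptions first; the numbers are invented.
assumptions = MappingProxyType({
    "purchase_price": 50_000_000,
    "annual_cash_flow": 9_000_000,
    "growth_rate": 0.03,
})

# Once wrapped, the assumptions cannot be tweaked after the fact:
try:
    assumptions["growth_rate"] = 0.08  # tempting, once you've seen the output
except TypeError:
    print("assumptions are locked")

# Only now is the output computed, and it is treated as final.
crude_multiple = assumptions["annual_cash_flow"] / assumptions["purchase_price"]
print(f"cash yield: {crude_multiple:.1%}")
```

The read-only wrapper makes the sequencing explicit: inputs are settled while the output is still unknown, so there is no channel for working backward from a desired answer.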

For AI systems to be adopted in scientific labs, they must be interpretable. Researchers need to understand the 'why' behind an AI's experimental plan to validate and trust the process, making interpretability a more critical feature than raw predictive power.

It's tempting to think you can intuit the few factors a decision hinges on. This is often wrong. Complex systems have non-obvious leverage points. The process of building an explicit model reveals which variables have the most impact—a discovery you can't reliably make with intuition alone.
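A one-at-a-time sensitivity sweep is the simplest way an explicit model surfaces its leverage points. The profit model below is a made-up example; the technique is to bump each input by the same relative amount and compare how far the output moves.

```python
# Hypothetical profit model; the point is the sweep, not the numbers.
def profit(price, volume, unit_cost, fixed_cost):
    return price * volume - unit_cost * volume - fixed_cost

baseline = dict(price=20.0, volume=5_000, unit_cost=12.0, fixed_cost=25_000)
base = profit(**baseline)

# Perturb each input by +10% in turn and see which moves the output most.
for name in baseline:
    bumped = dict(baseline, **{name: baseline[name] * 1.1})
    delta = profit(**bumped) - base
    print(f"{name:>10}: {delta:+,.0f}")
```

In this toy example a 10% price change moves profit far more than a 10% volume change, because volume raises costs as well as revenue. That asymmetry is exactly the kind of non-obvious leverage point intuition tends to miss.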

Instead of giving a single point estimate, provide a forecast with a lower and upper bound. This approach communicates both what you know and what you don't. It reduces the risk of being perceived as "wrong" and invites others to share information that can help narrow the range.

Incumbent FP&A software like Anaplan solves data integration pains but introduces a fatal flaw: extreme rigidity. After a lengthy implementation, changing the business model becomes nearly impossible, a task that takes an hour in a spreadsheet but can take months with these tools.

Complex Models Fail When Users Cannot Challenge Their Brittle Assumptions | RiffOn