Don't dismiss high-leverage but hard-to-measure interventions like government capacity building. Use "cost-effectiveness thinking": create back-of-the-envelope calculations and estimate success probabilities. This imposes quantitative discipline on qualitative decisions and avoids the streetlight effect of focusing only on what's easily measured.
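A minimal sketch of that back-of-the-envelope discipline, with all numbers hypothetical: discount each option's impact by its estimated probability of success, then compare expected impact per dollar.

```python
# Back-of-the-envelope expected cost-effectiveness (all figures hypothetical).
def expected_cost_effectiveness(cost, impact_if_success, p_success):
    """Expected impact units per dollar, discounting impact by success probability."""
    return impact_if_success * p_success / cost

# A measurable direct-delivery program vs. a hard-to-measure capacity-building
# effort with a larger but riskier upside.
direct = expected_cost_effectiveness(cost=1_000_000, impact_if_success=50_000, p_success=0.9)
capacity = expected_cost_effectiveness(cost=1_000_000, impact_if_success=500_000, p_success=0.2)
print(f"direct: {direct:.3f} units/$, capacity building: {capacity:.3f} units/$")
```

Even with an 80% chance of failure, the capacity-building option can dominate on expected value; the point is to make that comparison explicit rather than defaulting to what is easiest to measure.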
Before pursuing moonshots, assess execution fundamentals. A key indicator of readiness is the ability to reliably forecast a launch's impact and then see that impact materialize. If predictions are consistently wrong, the underlying measurement capabilities are not mature enough for bigger risks.
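One way to operationalize that readiness check, under assumed numbers and an assumed (not standard) error threshold: track forecast vs. measured impact per launch and gate bigger bets on forecast accuracy.

```python
# Hypothetical launch forecasts vs. measured outcomes (e.g., % lift in a key metric).
forecasts = [5.0, 2.0, 8.0, 3.0]
actuals = [4.5, 2.2, 1.0, 3.3]

def mean_abs_pct_error(forecasts, actuals):
    """Average of |forecast - actual| / |actual| across launches."""
    return sum(abs(f - a) / abs(a) for f, a in zip(forecasts, actuals)) / len(forecasts)

mape = mean_abs_pct_error(forecasts, actuals)
# Readiness gate: the 25% threshold is an illustrative assumption.
ready_for_moonshots = mape < 0.25
print(f"MAPE: {mape:.0%}, ready for moonshots: {ready_for_moonshots}")
```

Here one badly missed forecast (8.0 predicted vs. 1.0 measured) blows up the average, which is the signal the advice describes: if predictions are this far off, the measurement machinery isn't mature enough for bigger risks.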
In ROI-focused cultures like financial services, protect innovation by dedicating a formal budget (e.g., 20% of team bandwidth) to experiments. These initiatives are explicitly exempt from the rigorous ROI calculations applied to the rest of the roadmap, which fosters necessary risk-taking.
Treat government programs as experiments. Define success metrics upfront and set a firm deadline. If the program fails to achieve its stated goals by that date, it should be automatically disbanded rather than being given more funding. This enforces accountability.
Prioritize projects that promise significant impact but face minimal resistance. High-friction projects, even if impactful, drain energy on battles rather than building. The sweet spot is in areas most people don't see yet, thus avoiding pre-emptive opposition.
Instead of running pilots ad hoc, structure them to quantify value across three pillars: incremental revenue (e.g., reduced churn), tangible cost savings (e.g., FTE reduction), and opportunity costs (e.g., freed-up productivity). This builds a solid, co-created business case for monetization.
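The three pillars can be rolled up into a simple business-case total; the figures below are hypothetical placeholders for what a structured pilot would actually measure.

```python
# Hypothetical annualized pilot value across the three pillars (USD).
pillars = {
    "incremental_revenue": 120_000,        # e.g., churn reduction on the served book
    "cost_savings": 80_000,                # e.g., one FTE of manual work automated
    "opportunity_cost_recovered": 40_000,  # e.g., analyst hours freed for higher-value work
}
total_value = sum(pillars.values())
pilot_cost = 60_000  # assumed cost of running the structured pilot
print(f"total annual value: ${total_value:,}, multiple on pilot cost: {total_value / pilot_cost:.1f}x")
```

Separating the pillars matters because each is validated with a different stakeholder (sales, operations, the pilot customer), which is what makes the resulting case co-created rather than asserted.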
Instead of complex prioritization frameworks like RICE, designers can use a more intuitive model based on Value, Cost, and Risk. This mirrors the mental calculation humans use for everyday decisions, allowing for a more holistic and natural conversation about project trade-offs.
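A minimal sketch of that Value/Cost/Risk model, with invented projects and 1-5 team ratings; the scoring formula (value divided by cost times risk) is one plausible choice, not a prescribed one.

```python
# Hypothetical Value/Cost/Risk prioritization: each dimension rated 1-5 by the team.
def vcr_score(value, cost, risk):
    """Higher value is better; higher cost and risk both drag the score down."""
    return value / (cost * risk)

projects = {
    "redesign onboarding": vcr_score(value=5, cost=3, risk=2),
    "new analytics dashboard": vcr_score(value=4, cost=4, risk=1),
    "platform migration": vcr_score(value=5, cost=5, risk=4),
}
for name, score in sorted(projects.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.2f}")
```

Unlike RICE's reach/impact/confidence/effort inputs, these three dimensions map directly onto the everyday question "is this worth it, what does it cost me, and what could go wrong?", which keeps the trade-off conversation natural.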
An aid agency's budget is dwarfed by a host country's ministry spending. Therefore, instead of running parallel programs, the most impactful approach is "system strengthening": working directly with local government to integrate evidence and optimize how they allocate their own, much larger, budgets.
A traditional IT investment ROI model misses the true value of AI in pharma. A proper methodology must account for operational efficiencies (e.g., time saved in clinical trials, where each day costs millions) and intangible benefits like improved data quality, competitive advantage, and institutional learning.
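A toy comparison of the two ROI lenses, with every figure assumed for illustration (including the per-day trial cost, which the source only pegs at "millions"); intangibles like data quality and institutional learning are noted but left unpriced.

```python
# Hypothetical AI investment in pharma: traditional IT ROI vs. a model that
# prices operational efficiencies. All figures are illustrative assumptions.
ai_cost = 5_000_000
it_savings = 1_000_000            # what a traditional IT model would count
trial_days_saved = 30
cost_per_trial_day = 1_000_000    # assumed; source says each day "costs millions"
operational_value = trial_days_saved * cost_per_trial_day

traditional_roi = (it_savings - ai_cost) / ai_cost
full_roi = (it_savings + operational_value - ai_cost) / ai_cost
print(f"traditional lens: {traditional_roi:.0%}, with operational value: {full_roi:.0%}")
# Unpriced here: improved data quality, competitive advantage, institutional learning.
```

The same investment flips from a clear loss to a strong return once trial-time savings are counted, which is exactly why the traditional IT model understates AI's value.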
When a public health intervention successfully prevents a crisis, the lack of a negative outcome makes the initial action seem like an unnecessary overreaction. This paradox makes it difficult to justify and maintain funding for preventative measures whose success is invisible.
For any development problem, a program should either be based on strong existing evidence ("use it") or, if such evidence is absent, be designed as an experiment to generate new findings ("produce it"). This simple mantra avoids redundant research and ensures all spending either helps or learns.