For any development problem, a program should either be based on strong existing evidence ("use it") or, if such evidence is absent, be designed as an experiment to generate new findings ("produce it"). This simple mantra avoids redundant research and ensures that every dollar spent either helps people directly or produces new learning.

Related Insights

Establishing causation for a complex societal issue requires more than a single data set. The best approach is to build a "collage of evidence." This involves finding natural experiments—like states that enacted a policy before a national ruling—to test the hypothesis under different conditions and strengthen the causal claim.
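As a rough illustration of how such a natural experiment is typically analyzed, the sketch below uses a difference-in-differences comparison (a standard technique, not named in the insight itself) with entirely hypothetical numbers: the outcome change among states that adopted the policy early, net of the change among states that had not yet adopted it.

```python
# Hypothetical difference-in-differences sketch: compare outcome changes in
# states that adopted a policy early (the "natural experiment") vs. states
# that had not yet adopted it. All numbers are made up for illustration.

outcomes = {
    # group: (average outcome before, average outcome after)
    "early_adopters": (52.0, 61.0),  # adopted the policy before the national ruling
    "non_adopters":   (50.0, 53.0),  # no policy change over the same period
}

def change(before_after):
    before, after = before_after
    return after - before

# The treated group's change, net of the trend seen in the comparison group,
# is the difference-in-differences estimate of the policy's effect.
effect = change(outcomes["early_adopters"]) - change(outcomes["non_adopters"])
print(f"Estimated policy effect: {effect:+.1f} outcome units")
```

Repeating this comparison across different states and time periods is what builds the "collage": each natural experiment is one more piece of evidence under different conditions.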

Policymakers struggle to apply academic findings because research doesn't specify how to translate evidence into procurement documents. An intermediary is needed to bridge this gap, acting as an in-house consultant to map research to actionable implementation plans for those writing contracts.

In ROI-focused cultures like financial services, protect innovation by dedicating a formal budget (e.g., 20% of team bandwidth) to experiments. These initiatives are explicitly exempt from the rigorous ROI calculations applied to the rest of the roadmap, an exemption that fosters the necessary risk-taking.

Data's role is to reveal reality and identify problems or opportunities (the "what" and "where"). It cannot prescribe the solution. The creative, inventive process of design is still required to determine "how" to solve the problem effectively.

After an intervention like cash transfers has been validated by over 100 randomized trials, spending more money on another study is unethical. That funding diverts resources from potential beneficiaries in order to measure something already known, so fewer lives are improved.

An ideal procurement process identifies the most cost-effective known solution but also allows bidders to propose an innovative alternative. This alternative must be accompanied by a rigorous impact evaluation, turning procurement into a mechanism for continuous improvement rather than a static decision.

Don't dismiss high-leverage but hard-to-measure interventions like government capacity building. Use "cost-effectiveness thinking": create back-of-the-envelope calculations and estimate success probabilities. This imposes quantitative discipline on qualitative decisions, avoiding the streetlight effect of focusing only on what's easily measured.
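A minimal sketch of that back-of-the-envelope discipline is below. The numbers, the expected_cost_effectiveness helper, and both scenarios are hypothetical; the point is the arithmetic of expected impact per dollar, not the specific values.

```python
# Back-of-the-envelope cost-effectiveness comparison (hypothetical numbers).
# Expected impact = probability of success x people reached x effect per person.

def expected_cost_effectiveness(p_success, people_reached, effect_per_person, cost):
    """Return expected impact per dollar under rough, stated assumptions."""
    expected_impact = p_success * people_reached * effect_per_person
    return expected_impact / cost

# A well-measured direct intervention: near-certain success, modest scale.
direct = expected_cost_effectiveness(
    p_success=0.95, people_reached=10_000, effect_per_person=1.0, cost=500_000
)

# A hard-to-measure capacity-building effort: only a 10% chance it changes how
# a ministry spends, but the upside touches millions of beneficiaries.
capacity = expected_cost_effectiveness(
    p_success=0.10, people_reached=5_000_000, effect_per_person=0.5, cost=500_000
)

print(f"Direct delivery:   {direct:.4f} impact units per dollar")
print(f"Capacity building: {capacity:.4f} impact units per dollar")
```

Even with a 10% success probability, the high-leverage option can dominate in expectation, which is exactly why forcing a rough estimate beats defaulting to whatever is easiest to measure.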

Treat government programs as experiments. Define success metrics upfront and set a firm deadline. If the program fails to achieve its stated goals by that date, it should be automatically disbanded rather than being given more funding. This enforces accountability.

An aid agency's budget is dwarfed by a host country's ministry spending. Therefore, instead of running parallel programs, the most impactful approach is "system strengthening": working directly with local governments to integrate evidence and optimize how they allocate their own, much larger, budgets.

Frame philanthropic efforts not just by direct impact but as a "real-world MBA." Prioritize projects where, even if they fail, you acquire valuable skills and relationships. This heuristic, borrowed from for-profit investing, ensures a personal return on investment and sustained engagement regardless of the outcome.