To combat the bias of wanting to continue a program even when results are disappointing, Karen Levy advocates for "pre-policy plans." This involves getting all stakeholders (e.g., government, researchers) to agree in advance on the specific actions they will take based on different potential study outcomes, ensuring evidence-based decisions are made.
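A pre-policy plan can be as simple as a written table mapping each possible study outcome to a committed action. A minimal sketch, with hypothetical effect-size thresholds and actions standing in for whatever the stakeholders actually agree on:

```python
# Hypothetical pre-policy plan: each (condition, action) pair is agreed
# by all stakeholders BEFORE the study runs, so the action is automatic
# once the measured effect is known.
PRE_POLICY_PLAN = [
    (lambda effect: effect >= 0.10, "scale the program nationally"),
    (lambda effect: 0.0 < effect < 0.10, "continue the pilot, re-evaluate in one year"),
    (lambda effect: effect <= 0.0, "wind down the program"),
]

def committed_action(effect):
    """Return the pre-agreed action for a measured effect size."""
    for condition, action in PRE_POLICY_PLAN:
        if condition(effect):
            return action
```

Writing the plan as explicit conditions leaves no room to re-litigate the decision after disappointing results arrive.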
Effective review boards don't just say yes or no. They ask, "What is the next experiment needed to secure the next round of funding?" This approach relies on micro-budgeting for specific tests and regularly rotating board members to prevent political capture and groupthink.
Before a major initiative, run a simple thought experiment: what are the best and worst possible news headlines? If the worst-case headline is indefensible from a process, intent, or PR perspective, the risk may be too high. This forces teams to confront potential negative outcomes early.
A pre-mortem asks a team to imagine their project has already failed spectacularly. By explaining how that hypothetical failure came about, they surface risks early and can build mitigation strategies, effectively harnessing prospective hindsight before the fact.
To avoid stakeholders undermining research results later ('you only talked to 38 people'), proactively collaborate with them before the study to define the minimum standard of rigor they will accept. This alignment shifts the conversation from a post-mortem critique to a pre-launch agreement, disarming future objections.
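One concrete way to pin down "minimum standard of rigor" up front is a pre-agreed sample-size calculation. A sketch using the standard normal-approximation formula for detecting a difference between two proportions (the proportions, significance level, and power here are illustrative assumptions, not values from the source):

```python
import math
from statistics import NormalDist

def min_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-group sample size to detect a difference between
    two proportions with a two-sided z-test (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for alpha
    z_b = NormalDist().inv_cdf(power)          # value for desired power
    pbar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * pbar * (1 - pbar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)
```

Agreeing on these inputs before the study means nobody can later dismiss the result as "you only talked to 38 people" — the sample size was their number too.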
The most valuable lessons in clinical trial design come from understanding what went wrong. By analyzing the protocols of failed studies, researchers can identify hidden biases, flawed methodologies, and uncontrolled variables, learning precisely what to avoid in their own work.
When launching a new strategy, define the specific go/no-go decision criteria on paper from day one. This prevents "revisionist history," where success metrics are quietly redefined later to fit new fact patterns or biases. The practice forces discipline and creates clear accountability for future reviews.
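Those day-one criteria can literally be checked into the repo as code. A minimal sketch with hypothetical metric names and thresholds; anything that lands between the pre-registered bounds triggers a pre-agreed review rather than an ad-hoc debate:

```python
# Hypothetical go/no-go thresholds, pre-registered on day one.
CRITERIA = {
    "go":    lambda m: m["retention"] >= 0.40 and m["cost_per_user"] <= 5.0,
    "no_go": lambda m: m["retention"] < 0.30,
}

def decide(metrics):
    """Apply the criteria exactly as written; never rewrite them mid-flight."""
    if CRITERIA["no_go"](metrics):
        return "no-go"
    if CRITERIA["go"](metrics):
        return "go"
    return "review"  # between thresholds: escalate per the pre-agreed process
```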
To combat confirmation bias, withhold the final results of an experiment or analysis until the entire team agrees the methodology is sound. This prevents people from subconsciously accepting expected outcomes while overly scrutinizing unexpected ones, leading to more objective conclusions.
Treat government programs as experiments. Define success metrics upfront and set a firm deadline. If the program fails to achieve its stated goals by that date, it should be automatically disbanded rather than being given more funding. This enforces accountability.
Counteract the tendency for the highest-paid person's opinion (HIPPO) to dominate decisions. Position all stakeholder ideas, regardless of seniority, as valid hypotheses to be tested. This makes objective data, not job titles, the ultimate arbiter for website changes, fostering a more effective culture.
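Treating every opinion as a testable hypothesis usually means running an A/B test and letting a significance test, not seniority, settle the question. A sketch of the standard two-proportion z-test (normal approximation; the conversion counts are made-up inputs, not data from the source):

```python
import math
from statistics import NormalDist

def ab_test_pvalue(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in conversion rates between
    variant A and variant B, using the pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))
```

If the p-value clears the team's pre-agreed threshold, the data decides; the HIPPO's hypothesis wins or loses on the same terms as everyone else's.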
For any development problem, a program should either be based on strong existing evidence ("use it") or, if such evidence is absent, be designed as an experiment to generate new findings ("produce it"). This simple mantra avoids redundant research and ensures every dollar spent either helps people or produces learning.