In ROI-focused cultures like financial services, protect innovation by dedicating a formal budget (e.g., 20% of team bandwidth) to experiments. These initiatives are explicitly exempt from the rigorous ROI calculations applied to the rest of the roadmap, which fosters necessary risk-taking.

Related Insights

To avoid constant battles over unproven ideas, proactively allocate 5-10% of the marketing budget to a line item officially called "Marketing Experiments." Frame it to the CFO as a necessary fund for exploring new channels before existing ones are exhausted and for seizing unforeseen opportunities.

Instead of fearing failure, Ridge institutionalizes it by allocating a $1M annual budget specifically for testing new product expansions. This removes pressure from any single launch, encourages aggressive experimentation, and has produced eight-figure successes alongside predictable flops, such as watches.

AI initiatives often require significant learning and iteration, which can derail a roadmap. To combat this, PMs should dedicate a fixed percentage of development bandwidth (e.g., 5-10%) specifically for iteration on high-priority AI projects. This creates a structured buffer for discovery without compromising the entire plan.

Foster a culture of experimentation by reframing failure. A test that disproves its hypothesis is just as valuable as a "win" because it yields crucial user insights. Measure the program's success by the number of high-quality tests run, not the percentage of hypotheses confirmed.

The rapid pace of AI makes traditional, static marketing playbooks obsolete. Leaders should instead foster a culture of agile testing and iteration. This requires shifting budget from a 70-20-10 model (core-emerging-experimental) to something like 60-20-20 to fund a higher velocity of experimentation.

Organizations fail when they push teams directly into using AI for business outcomes ("architect mode"). Instead, they must first provide dedicated time and resources for unstructured play ("sandbox mode"). This experimentation phase is essential for building the skills and comfort needed to apply AI effectively to strategic goals.

To avoid distracting from its core business, Bolt tests new ventures like scooters and food delivery using a standardized playbook. A small team of 5-10 people is given a modest budget and a six-month timeline to build an MVP and show traction. If successful, they get more funding; if not, the project is shut down.

Chess.com's goal of 1,000 experiments isn't about the number. It’s a forcing function to expose systemic blockers and drive conversations about what's truly needed to increase velocity, like no-code tools and empowering non-product teams to test ideas.

To ensure continuous experimentation, Coastline's marketing head allocates a specific "failure budget" for high-risk initiatives. The philosophy is that most experiments won't work, but the few that do will generate enough value to cover all losses and open up crucial new marketing channels.

To balance execution with innovation, allocate 70% of resources to high-confidence initiatives, 20% to medium-confidence bets with significant upside, and 10% to low-confidence, "game-changing" experiments. This ensures delivery on core goals while pursuing high-growth opportunities.
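As a back-of-the-envelope illustration, the 70/20/10 split above can be sketched in a few lines of Python. The function name, bucket labels, and point totals are hypothetical, not from the source:

```python
# Hypothetical sketch: splitting a team's planning capacity across the
# 70/20/10 confidence tiers described above. Weights and labels are
# illustrative assumptions, not a prescribed implementation.

def allocate_capacity(total_points, weights=(0.70, 0.20, 0.10)):
    """Split capacity into high-, medium-, and low-confidence buckets."""
    high, medium, low = (round(total_points * w) for w in weights)
    return {"high_confidence": high, "medium_bets": medium, "experiments": low}

print(allocate_capacity(100))
# {'high_confidence': 70, 'medium_bets': 20, 'experiments': 10}
```

In practice the exact ratios matter less than making the experimental bucket an explicit, protected line item rather than leftover capacity.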