
Setting operational KPIs for AI usage is risky. The technology is volatile, and incentives can backfire, a classic instance of the 'cobra effect,' where a well-intentioned metric produces the opposite of the desired behavior. Instead of measuring AI usage directly, leaders should keep focusing on core business goals and treat AI as a means to achieve them, not an end in itself.

Related Insights

When reporting on AI experiments to the board, avoid using "learning" as a primary KPI, as it can sound like an excuse for failure. Instead, translate those learnings into tangible outcomes and demonstrable progress toward goals, showing the impact the learning has delivered and promises to deliver.

Effective AI adoption isn't about force-fitting a new technology into a workflow. Leaders should start by identifying a significant business challenge, then assemble an agile team of business experts and technologists to apply AI as a targeted solution, ensuring the effort is driven by real-world value.

Technical metrics like "accuracy" are often the wrong measure for ML projects and can set the wrong expectations. To achieve success, projects must be evaluated using business KPIs like profit, savings, or ROI. This aligns data science with business goals and reveals the true value of imperfect predictions.

Unlike traditional software, AI adoption is not about RFPs and licenses but a fundamental mindset shift. It requires leaders to champion curiosity and experimentation. Treating AI like a standard IT project ignores the necessary changes in workflow and thinking, guaranteeing failure.

Adopting AI hasn't changed core business metrics like growth or retention. Its true value is in operational efficiency, allowing teams to analyze data more deeply. AI provides the ability to explore 'second and third level questions' and investigate previously inaccessible KPIs, improving the *how* without altering the *what*.

Demanding a direct, line-item ROI for foundational AI initiatives is like asking for the ROI on Wi-Fi: it's the wrong question. Instead of getting bogged down in impossible calculations, leaders should focus on measuring the business outcomes enabled by the technology, such as innovation speed or new product creation. Obsess over outcomes, not direct financial return.

Creating an "AI initiative" can be a mistake, as it encourages tool usage for its own sake. A better approach is to set the expectation that team members will deliver the best possible outcome, knowing AI exists, shifting the focus from process to high-quality results.

Previously, leaders carefully weighed the ROI of pursuing new features. With AI, building and testing ideas is so rapid that the strategic focus must shift. The greater risk is not a failed experiment, but failing to experiment at all. Organizations should measure the opportunity cost of not embracing AI-driven speed.

Don't rely on traditional project milestones to gauge AI progress. Instead, measure success through granular unit economics and operational metrics. Metrics like 'cost per release' or 'cycle time per feature' provide immediate feedback on whether your strategic hypothesis is valid, enabling rapid iteration.

Even with AI at the core of his business, Andrew Sachs urges product leaders to be cautious. He highlights that pressure to use AI leads to misapplication and failure. True value comes from applying it strategically where it makes business sense, not from chasing buzzwords.