Don't rely on traditional project milestones to gauge AI progress. Instead, measure success through granular unit economics and operational metrics. Metrics like "cost per release" or "cycle time per feature" provide immediate feedback on whether your strategic hypothesis is valid, enabling rapid iteration.
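A unit-economics metric like this is just period spend over units shipped, tracked period over period. A minimal sketch (the dollar figures and function names here are illustrative assumptions, not from the source):

```python
def cost_per_release(period_spend: float, releases_shipped: int) -> float:
    """Unit economics: total spend for the period divided by releases shipped."""
    if releases_shipped == 0:
        raise ValueError("no releases shipped this period")
    return period_spend / releases_shipped

def trend(previous: float, current: float) -> float:
    """Fractional change between periods; a falling cost per release
    is the fast feedback signal that the AI investment hypothesis holds."""
    return (current - previous) / previous

# Hypothetical quarter: $120k spend, 40 releases -> $3,000 per release
q1 = cost_per_release(120_000, 40)
# Next quarter the same spend ships 50 releases -> cost drops 20%
q2 = cost_per_release(120_000, 50)
```

The point of the trend line, not the absolute number, is what validates or invalidates the hypothesis quickly.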
Beyond improving traditional marketing metrics, a crucial new shared KPI for the CMO-CIO partnership is "Time to Value." This measures the efficiency of AI pilot selection, execution, and scaling, ensuring the collaboration delivers on AI's promise of speed without getting bogged down by process or governance hurdles.
To quantify the real-world impact of its AI tools, Block tracks a simple but powerful metric: "manual hours saved." This KPI combines qualitative and quantitative signals to provide a clear measure of ROI, with a target to save 25% of manual hours across the company.
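The arithmetic behind a "manual hours saved" KPI is simple: compare manual hours in a baseline period against the current period, and express the savings as a fraction of baseline. A minimal sketch (the function names and example figures are assumptions for illustration; Block's actual methodology isn't described in detail):

```python
def manual_hours_saved(baseline_hours: float, current_hours: float) -> float:
    """Hours of manual work eliminated relative to the baseline period."""
    return baseline_hours - current_hours

def savings_rate(baseline_hours: float, current_hours: float) -> float:
    """Fraction of baseline manual hours saved; the stated target is 0.25."""
    if baseline_hours <= 0:
        raise ValueError("baseline hours must be positive")
    return manual_hours_saved(baseline_hours, current_hours) / baseline_hours

# Hypothetical team: 1,000 baseline hours/month, now 750 -> 25% saved
rate = savings_rate(1000, 750)
```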
For early-stage AI companies, performance should be measured by the speed of iteration, shipping, and learning, not just traditional metrics like revenue. In a rapidly evolving landscape, the ability to quickly get signals from the market and adapt is the primary indicator of future success.
Unlike traditional software that optimizes for time-in-app, the most successful AI products will be measured by their ability to save users time. The new benchmark for value will be how much cognitive load or manual work is automated "behind the scenes," fundamentally changing the definition of a successful product.
Snowflake's CEO advises against seeking a huge ROI on the first AI project. Instead, companies should run many small, inexpensive experiments—taking multiple "shots on goal"—to learn the landscape and build momentum. This approach proves value incrementally rather than relying on one big bet.
Standardized benchmarks for AI models are largely irrelevant for business applications. Companies need to create their own evaluation systems tailored to their specific industry, workflows, and use cases to accurately assess which new model provides a tangible benefit and ROI.
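A company-specific eval system can be as small as a list of domain prompts, each paired with a pass/fail check, scored as a pass rate. This is a minimal sketch under assumptions of my own (the `EvalCase` structure, the example prompts, and the checks are all hypothetical):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    prompt: str
    passes: Callable[[str], bool]  # domain-specific check, not a generic benchmark

def run_evals(model: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Return the pass rate of a model against company-specific eval cases."""
    passed = sum(1 for case in cases if case.passes(model(case.prompt)))
    return passed / len(cases)

# Hypothetical support-workflow evals
cases = [
    EvalCase("Summarize the refund policy", lambda out: "refund" in out.lower()),
    EvalCase("Draft an outage notice", lambda out: "outage" in out.lower()),
]
```

Swapping in a new model means re-running the same cases and comparing pass rates, which answers "does this model help *our* workflows?" directly.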
Traditional product metrics like DAU are meaningless for autonomous AI agents that operate without user interaction. Product teams must redefine success by focusing on tangible business outcomes. Instead of tracking agent usage, measure "support tickets automatically closed" or "workflows completed."
Instead of abstract productivity metrics, define your AI goal in terms of concrete headcount avoidance. Sensei's objective is to achieve the output of a 700-person company with half the staff by using AI to bridge the gap. This makes the ROI tangible and aligns AI investment with scalable, capital-efficient growth.
Instead of traditional product requirements documents, AI PMs should define success through a set of specific evaluation metrics. Engineers then work to improve the system's performance against these evals in a "hill climbing" process, making the evals the functional specification for the product.
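The hill-climbing loop described above amounts to: score each candidate system variant against the eval suite and accept it only if it beats the current best. A minimal sketch, with hypothetical names (`variants` could be alternative prompts, models, or pipeline configurations):

```python
from typing import Callable, Optional

def hill_climb(variants: list, score: Callable[[object], float],
               baseline: float) -> tuple[Optional[object], float]:
    """Keep the variant that most improves the eval score (the 'spec').

    `score` runs the eval suite and returns a pass rate in [0, 1].
    Returns (None, baseline) if no variant beats the current system.
    """
    best, best_score = None, baseline
    for variant in variants:
        s = score(variant)
        if s > best_score:
            best, best_score = variant, s
    return best, best_score
```

In this framing, engineers ship whatever moves `best_score` upward, so the eval suite, not a prose PRD, is the functional spec.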
Vanity metrics like "AI lines of code" are misleading. Coinbase measures AI success by its impact on the end-to-end development cycle: the total time from a ticket's creation to the change landing with a user. This metric holistically captures gains and focuses the team on true velocity.
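An end-to-end cycle-time metric like this spans ticket creation to the change landing with users, and is usually summarized with a median to resist outlier tickets. A minimal sketch (timestamp format and function names are assumptions; Coinbase's actual instrumentation isn't described):

```python
from datetime import datetime
from statistics import median

def ticket_to_landing_hours(created: str, landed: str) -> float:
    """End-to-end hours from ticket creation until the change reaches a user
    (ISO-8601 timestamps assumed)."""
    delta = datetime.fromisoformat(landed) - datetime.fromisoformat(created)
    return delta.total_seconds() / 3600

def median_cycle_time(tickets: list[tuple[str, str]]) -> float:
    """Median end-to-end cycle time across tickets; the number to drive down."""
    return median(ticket_to_landing_hours(c, l) for c, l in tickets)

tickets = [
    ("2024-03-01T00:00:00", "2024-03-02T12:00:00"),  # 36 hours
    ("2024-03-05T00:00:00", "2024-03-05T12:00:00"),  # 12 hours
]
```

Because the window includes review, CI, and deploy, AI gains anywhere in the pipeline show up in this one number, while "lines of code generated" would not.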