
Vanity metrics like "AI lines of code" are misleading. Coinbase measures AI success by its impact on the end-to-end development cycle: the total time from when a ticket is created to when the change lands with a user. This metric captures gains holistically and focuses the team on true velocity.
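A minimal sketch of how such a cycle-time metric could be computed, assuming hypothetical ticket records with `created_at` and `shipped_at` timestamps (field names are illustrative, not Coinbase's actual schema):

```python
from datetime import datetime
from statistics import median

# Hypothetical ticket records: creation time and the time the change
# landed with a user.
tickets = [
    {"created_at": datetime(2024, 5, 1, 9, 0), "shipped_at": datetime(2024, 5, 3, 17, 0)},
    {"created_at": datetime(2024, 5, 2, 10, 0), "shipped_at": datetime(2024, 5, 2, 16, 0)},
    {"created_at": datetime(2024, 5, 4, 8, 0), "shipped_at": datetime(2024, 5, 9, 12, 0)},
]

def cycle_time_hours(ticket):
    """Elapsed hours from ticket creation to the change reaching a user."""
    return (ticket["shipped_at"] - ticket["created_at"]).total_seconds() / 3600

# Median is more robust than mean to a few unusually long-running tickets.
median_hours = median(cycle_time_hours(t) for t in tickets)
print(f"Median end-to-end cycle time: {median_hours:.1f} hours")
```

Tracking the median (or a percentile) over time, rather than a single average, makes it easier to see whether AI tooling is shifting the whole distribution.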

Related Insights

Beyond improving traditional marketing metrics, a crucial new shared KPI for the CMO-CIO partnership is "Time to Value." This measures the efficiency of AI pilot selection, execution, and scaling, ensuring the collaboration delivers on AI's promise of speed without getting bogged down by process or governance hurdles.

To quantify the real-world impact of its AI tools, Block tracks a simple but powerful metric: "manual hours saved." This KPI combines qualitative and quantitative signals to provide a clear measure of ROI, with a target to save 25% of manual hours across the company.

For early-stage AI companies, performance should be measured by the speed of iteration, shipping, and learning, not just traditional metrics like revenue. In a rapidly evolving landscape, the ability to quickly get signals from the market and adapt is the primary indicator of future success.

Unlike traditional software that optimizes for time-in-app, the most successful AI products will be measured by their ability to save users time. The new benchmark for value will be how much cognitive load or manual work is automated "behind the scenes," fundamentally changing the definition of a successful product.

With infinitely scalable AI agents, cost and time per interaction are no longer primary constraints. Companies should abandon classic efficiency metrics like Average Handle Time and instead measure success by outcomes, such as percentage of tasks completed and improvements in Customer Satisfaction (CSAT).

The primary ROI of sales AI isn't just saved time, but the reallocation of that time. Evaluate and justify AI tools based on their ability to maximize Customer Facing Time (CFT), as this directly increases both the quantity and quality of customer interactions, leading to better performance.

Traditional product metrics like DAU are meaningless for autonomous AI agents that operate without user interaction. Product teams must redefine success by focusing on tangible business outcomes. Instead of tracking agent usage, measure "support tickets automatically closed" or "workflows completed."

Open and click rates are ineffective for measuring AI-driven, two-way conversations. Instead, leaders should adopt new KPIs: outcome metrics (e.g., meetings booked), conversational quality (tracking an agent's 'I don't know' rate to measure trust), and, ultimately, customer lifetime value.
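One of those conversational-quality KPIs, the "I don't know" rate, could be computed roughly as follows; the log format and flag name here are illustrative assumptions, not a specific vendor's API:

```python
# Hypothetical conversation log: one entry per agent reply, with a flag
# marking replies where the agent admitted uncertainty.
replies = [
    {"text": "Your demo is booked for Tuesday.", "dont_know": False},
    {"text": "I don't know - let me loop in a human.", "dont_know": True},
    {"text": "Pricing starts at $49/month.", "dont_know": False},
    {"text": "Here's the rescheduling link.", "dont_know": False},
]

# Share of replies where the agent declined to answer rather than guess.
# A healthy non-zero rate can signal trustworthiness; a rising rate can
# flag knowledge gaps worth closing.
dont_know_rate = sum(r["dont_know"] for r in replies) / len(replies)
print(f"'I don't know' rate: {dont_know_rate:.0%}")
```

In practice the flag would come from intent classification or a keyword heuristic over the agent's replies, reviewed alongside outcome metrics like meetings booked.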

AI tools can generate vast amounts of verbose code on command, making metrics like "lines of code" easily gameable and meaningless for measuring true engineering productivity. This practice introduces complexity and technical debt rather than indicating progress.

Shift the team's language and metrics away from output and toward outcomes. Instead of celebrating a deployed API, measure and report on what that API enabled for other teams and the business. This directly connects platform work to tangible results and impact.