We scan new podcasts and send you the top 5 insights daily.
Deep, intense usage can be an anti-metric for productivity tools, suggesting user friction. The key is establishing a daily or weekly habit (frequency), as monthly usage falls into the "forgettable zone." The action tracked for frequency should be meaningful, not a vanity metric like logins.
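A minimal sketch of measuring that habit, assuming a hypothetical event log keyed by user, where each recorded date represents a meaningful action (a completed task, say) rather than a login:

```python
from datetime import date

# Hypothetical event log: user_id -> dates of a *meaningful* action
# (e.g. completed a task), not mere logins. Sample data is illustrative.
events = {
    "u1": {date(2024, 1, d) for d in (1, 3, 8, 10, 16, 22)},  # roughly weekly
    "u2": {date(2024, 1, 2)},                                  # monthly at best
    "u3": {date(2024, 1, d) for d in range(1, 29)},            # daily
}

def weekly_habit_rate(events, start, weeks):
    """Share of users with a meaningful action in every week of the window."""
    habitual = 0
    for days in events.values():
        habitual += all(
            any((d - start).days // 7 == w for d in days if d >= start)
            for w in range(weeks)
        )
    return habitual / len(events)

print(weekly_habit_rate(events, date(2024, 1, 1), 4))  # 2 of 3 users are habitual
```

Users active only monthly (like `u2` above) fall out of the "habitual" bucket, which is exactly the forgettable-zone signal the insight warns about.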
Vanity metrics like total revenue can be misleading. A startup might acquire many low-priced, low-usage customers without solving a core problem. Deep, consistent user engagement statistics are a much stronger indicator of genuine, "found" demand than top-line numbers alone.
Instead of focusing solely on conversion rates, measure 'engagement quality'—metrics that signal user confidence, like dwell time, scroll depth, and journey progression. The philosophy is that if you successfully help users understand the content and feel confident, conversions will naturally follow as a positive side effect.
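One way to operationalize this is a composite score. The three signals mirror the ones named above, but the weights and caps here are illustrative assumptions, not a standard formula:

```python
# Hedged sketch of an "engagement quality" score. Weights and the 120s
# dwell cap are illustrative assumptions, not an industry standard.

def engagement_quality(dwell_seconds, scroll_depth, steps_completed, total_steps):
    """Blend confidence signals into a 0-1 score.

    dwell_seconds is capped at 120s so one long session can't dominate;
    scroll_depth is already 0-1; journey progression is steps / total.
    """
    dwell = min(dwell_seconds, 120) / 120
    journey = steps_completed / total_steps
    return round(0.3 * dwell + 0.3 * scroll_depth + 0.4 * journey, 3)

# A confident, progressing user scores higher than a fast bouncer:
print(engagement_quality(90, 0.8, 3, 4))  # deep read, most of the journey
print(engagement_quality(5, 0.1, 0, 4))   # bounced almost immediately
```

Tracking this score alongside conversion rate lets you see whether "confidence" is rising even before conversions move.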
Today's AI companies scale so quickly that multi-year renewal data doesn't yet exist, so investors must adapt their diligence. The focus shifts from long-term retention to short-cycle retention and, crucially, deep product engagement: high usage is the best leading indicator of future stickiness and value.
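Short-cycle retention can be as simple as month-over-month overlap of active users. A sketch with hypothetical monthly active sets:

```python
# Hypothetical month-over-month active user sets for a young AI product:
# without multi-year renewals, look at short-cycle retention instead.
active = {
    "2024-01": {"a", "b", "c", "d"},
    "2024-02": {"a", "b", "c", "e"},
    "2024-03": {"a", "b", "e", "f"},
}

def m1_retention(active, prev, cur):
    """Share of the previous month's users still active this month."""
    return len(active[prev] & active[cur]) / len(active[prev])

print(m1_retention(active, "2024-01", "2024-02"))  # 3/4 retained
print(m1_retention(active, "2024-02", "2024-03"))  # 3/4 retained
```

A stable or rising M1 number over several cycles is the short-horizon proxy for the renewal data that doesn't exist yet.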
The biggest initial hurdle for a new product isn't getting the first dollar of revenue; it's crossing the chasm from a user trying the product once to becoming a truly engaged, repeat user. This "penny gap of engagement" is the most critical early milestone to overcome for long-term success.
Unlike passive consumption apps, where getting many users to try a feature once is key, high-intent products like Google Search measure success by user intensity. The critical question is not "how many people used it?" but "are individual users using it more intensely over time?"
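The intensity question can be asked of data directly: for each user, compare usage now against usage when they started. The per-user weekly counts below are made up for illustration:

```python
# Hypothetical per-user weekly usage counts (e.g. searches per week).
# The question is not headcount but whether the *same* users intensify.
usage = {
    "alice": [3, 5, 8, 12],   # deepening engagement
    "bob":   [10, 7, 4, 2],   # fading
    "cara":  [1, 2, 2, 5],    # deepening
}

def intensifying_share(usage):
    """Fraction of users whose latest-week usage exceeds their first week."""
    up = sum(1 for weeks in usage.values() if weeks[-1] > weeks[0])
    return up / len(usage)

print(intensifying_share(usage))  # 2 of 3 users are using it more intensely
```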
Don't jump directly to optimizing for high-level business outcomes like retention. Instead, sequence your North Star metric. First, focus the team on driving foundational user engagement. Only after establishing that behavior should you shift the primary metric to a direct business impact like revenue or retention.
Product performance isn't one metric; it's the sum of all touchpoints, from support tickets to app reviews. These disparate inputs all roll up into the ultimate North Star metric: user engagement.
Because AI products improve so rapidly, it's crucial to proactively bring lapsed users back. A user who tried the product a year ago has no idea how much better it is today. Marketing pushes around major version launches (e.g., v3.0) can create a step-change in weekly active users.
Instead of focusing solely on CSAT or transaction completion, a more powerful KPI for AI effectiveness is repeat usage. When customers voluntarily return to the same AI-powered channel (e.g., a chatbot) to solve a problem, it signals the experience was so effective it became their preferred method.
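The repeat-usage KPI falls out of an ordinary support log. Assuming a hypothetical log of `(user, channel)` pairs, one per resolved issue:

```python
# Hypothetical support interaction log: one (user, channel) pair per
# resolved issue. Repeat usage = users who came back to the AI channel.
interactions = [
    ("u1", "ai_chat"), ("u1", "ai_chat"),   # returned voluntarily
    ("u2", "ai_chat"), ("u2", "phone"),     # defected to a human channel
    ("u3", "ai_chat"), ("u3", "ai_chat"),
    ("u4", "phone"),                        # never tried the AI channel
]

def repeat_rate(interactions, channel="ai_chat"):
    """Of users who tried the channel at least once, how many used it again?"""
    counts = {}
    for user, ch in interactions:
        if ch == channel:
            counts[user] = counts.get(user, 0) + 1
    repeated = sum(1 for n in counts.values() if n >= 2)
    return repeated / len(counts)

print(repeat_rate(interactions))  # 2 of 3 triers came back
```

A rising repeat rate says the channel became the preferred method, which a one-off CSAT survey can't show.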
Instead of focusing on a slowly declining retention curve, look for the curve to flatten or even tick upwards over 30-90 days. This "J-curve" indicates that a core group of users is forming a stable habit, a stronger signal of PMF than initial user numbers.
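Detecting that flattening is a small calculation once you have day-N retention for a cohort. The retention numbers and the 2-point tolerance below are made-up assumptions:

```python
# Hedged sketch: given day-N retention for one signup cohort, check
# whether the curve flattens over days 30-90. Numbers are illustrative.
retention = {1: 0.60, 7: 0.35, 14: 0.28, 30: 0.22, 60: 0.21, 90: 0.21}

def is_flattening(retention, window=(30, 90), tolerance=0.02):
    """True if retention loses at most `tolerance` across the window."""
    start, end = window
    return retention[start] - retention[end] <= tolerance

print(is_flattening(retention))  # the habit-forming core has stabilized
```

The cohort above sheds casual users early, then holds a stable ~21% core past day 30: the habit signal the insight describes.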