At Lovable, the growth team barely touches activation, typically a core growth lever. Instead, the core product and AI agent teams own it obsessively. Because the initial AI-generated output *is* the activation moment, its quality is a fundamental product challenge, not a surface-level optimization problem for growth.

Related Insights

For AI-native products where the primary interface is just a prompt box, the traditional role of a growth team in optimizing activation diminishes. The entire activation experience happens via conversation with an AI agent, making it an inseparable part of the core product's responsibility, not a separate optimization layer.

Before launch, product leaders must ask whether their AI offering is a true product or just a feature. Slapping an AI label on a tool that automates a minor part of a larger workflow is a gimmick. Such an offering will fail unless it solves a core, high-friction problem for the customer end to end.

Many teams wrongly focus on the latest models and frameworks. True improvement comes from classic product development: talking to users, preparing better data, optimizing workflows, and writing better prompts.

Unlike traditional software that optimizes for time-in-app, the most successful AI products will be measured by their ability to save users time. The new benchmark for value will be how much cognitive load or manual work is automated "behind the scenes," fundamentally changing the definition of a successful product.

Don't just sprinkle AI features onto your existing product ('AI at the edge'). Transformative companies rethink workflows and shrink their old codebase, making the LLM a core part of the solution. This is about re-architecting the solution from the ground up, not just enhancing it.

In the fast-moving AI space, optimizing existing user journeys yields minimal returns. Lovable's growth team inverts the typical model, spending 95% of its effort creating new growth loops and product features rather than on incremental optimization.

The current AI hype cycle can create misleading top-of-funnel metrics. The only companies that will survive are those demonstrating strong, above-benchmark user and revenue retention. Retention has become the ultimate litmus test for whether a product provides real, lasting value beyond the initial curiosity.

Traditional product metrics like DAU are meaningless for autonomous AI agents that operate without user interaction. Product teams must redefine success by focusing on tangible business outcomes. Instead of tracking agent usage, measure "support tickets automatically closed" or "workflows completed."

The most durable growth comes from seeing your job as connecting users to the product's value. This reframes the work away from short-term, transactional metric hacking toward holistically improving the user journey, which builds a healthier business.

Because AI products improve so rapidly, it's crucial to proactively bring lapsed users back. A user who tried the product a year ago has no idea how much better it is today. Marketing pushes around major version launches (e.g., v3.0) can create a step-change in weekly active users.