The most effective way to use AI in product discovery is not as an "answer machine" you delegate tasks to, but as a "thought partner." Use prompts that explicitly ask it to challenge your assumptions, turning it into a tool for critical thinking rather than a simple content generator.
The true challenge of AI for many businesses isn't mastering the technology. It's shifting the entire organization from a predictable "delivery" mindset to an "innovation" mindset capable of managing rapid experimentation and uncertainty—a muscle many established companies haven't yet built.
When Alexa AI first launched generative answers, the biggest hurdle wasn't the technology itself. It was moving the company culture from highly curated, predictable responses to accepting AI's inherent risks. This forced new, difficult conversations about risk tolerance among stakeholders.
An effective AI strategy requires a bifurcated plan. Product leaders must create one roadmap for leveraging AI internally to improve tools and efficiency, and a separate one for external, customer-facing products that drive growth. This dual-track approach is a new strategic imperative.
While senior leaders are trained to delegate execution, AI is an exception. Direct, hands-on use is non-negotiable for leadership. It demystifies the technology, reveals its counterintuitive flaws, and builds the empathy required to understand team challenges. Leaders who remain hands-off will be unable to guide strategy effectively.
Without a strong foundation in customer problem definition, AI tools simply accelerate bad practices. Teams that habitually jump to solutions without a clear "why" will find themselves building rudderless products at an even faster pace. AI makes foundational product discipline more critical, not less.
An AI model can meet all technical criteria (correctness, relevance) yet produce outputs that are tonally inappropriate or off-brand. Ex-Alexa PM Polly Allen shared how a factually correct answer about COVID landed as insensitive—a reminder that product leaders must inject human judgment into AI evaluation.
When facing top-down pressure to "do AI," leaders can regain control by framing the decision as a choice between distinct "games": 1) building foundational models, 2) being first-to-market with features, or 3) pursuing internal efficiency. This forces alignment on a North Star metric and provides a clear filter for random ideas.
