AI is great at identifying broad topics like "integration issues" from user feedback. However, true product insights come from specific, nuanced details that are often averaged away by LLMs. Human review is still required to spot truly actionable opportunities.
AI excels at clerical tasks like transcription and basic analysis. However, it lacks the business context to identify strategically important, "spiky" insights. Treat it like a new intern with no practical experience: give it well-defined tasks, but don't ask it to define your roadmap.
Neither AI nor humans alone can uncover all customer needs. Research shows that while AI finds needs humans miss, it also overlooks things humans catch. The most comprehensive Voice of the Customer (VOC) results come from a hybrid approach that leverages the complementary strengths of both.
AI models are trained to be agreeable, often providing uselessly positive feedback. To get real insights, you must explicitly prompt them to be rigorous and critical. Use phrases like "my standards of excellence are very high and you won't hurt my feelings" to bypass their people-pleasing nature.
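As a rough illustration of that prompting technique, here is a minimal sketch using the OpenAI Python SDK; the model name, system prompt wording, and "PRD draft" framing are all assumptions for the example, not a prescribed setup.

```python
# A minimal sketch of a "critical reviewer" prompt, assuming the OpenAI Python SDK.
# The model name and exact wording are illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a rigorous, critical reviewer. My standards of excellence are very "
    "high and you won't hurt my feelings. Do not praise the work. List concrete "
    "weaknesses, risky assumptions, and missing evidence, ranked by severity."
)

def critique(draft: str) -> str:
    """Ask the model for a critical review instead of agreeable feedback."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model works here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Critique this PRD draft:\n\n{draft}"},
        ],
    )
    return response.choices[0].message.content
```

The point of the explicit "you won't hurt my feelings" framing is to shift the model away from its default agreeable register before it ever sees the work being reviewed.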
Don't ask an LLM to perform initial error analysis; it lacks the product context to spot subtle failures. Instead, have a human expert write detailed, freeform notes ("open codes"). Then, leverage an LLM's strength in synthesis to automatically categorize those hundreds of human-written notes into actionable failure themes ("axial codes").
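A sketch of that division of labor might look like the following, assuming the human-written open codes live one per line in a hypothetical open_codes.txt file and that the OpenAI Python SDK is available; the prompt and theme count are illustrative.

```python
# A rough sketch of LLM-assisted axial coding: humans write the open codes,
# the model only groups them into themes. File name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

def group_open_codes(path: str = "open_codes.txt") -> str:
    """Send human-written open codes to an LLM and ask for axial-code themes."""
    with open(path, encoding="utf-8") as f:
        notes = [line.strip() for line in f if line.strip()]

    prompt = (
        "Below are freeform failure notes (open codes) written by a human reviewer. "
        "Group them into 5-10 named failure themes (axial codes). For each theme, "
        "give a short definition and list the indices of the notes it covers.\n\n"
        + "\n".join(f"{i}. {note}" for i, note in enumerate(notes, start=1))
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: substitute whichever model you use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

The human keeps the judgment-heavy step (noticing what went wrong); the model does the part it is genuinely good at (clustering hundreds of notes into a handful of themes).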
AI tools can handle administrative and analytical tasks for product managers, like summarizing notes or drafting stories. However, they lack the essential human elements of empathy, nuanced judgment, and creativity required to truly understand user problems and make difficult trade-off decisions.
AI analysis tools tend to focus on the general topic of an interview, often overlooking tangential, unexpected "spiky" details. These anomalies, which pique a human researcher's curiosity, are frequently the source of the most significant product opportunities and breakthroughs.
While AI efficiently transcribes user interviews, true customer insight comes from ethnographic research—observing users in their natural environment. What people say is often different from their actual behavior. Don't let AI tools create a false sense of understanding that replaces direct observation.
When asked to describe a user process, an LLM provides the textbook version. It misses the real-world chaos—forgotten tasks, interruptions, and workarounds. These messy details, which only emerge from talking to real people, are where the most valuable product opportunities are found.
Developers often test AI systems with well-formed, correctly spelled questions. However, real users submit vague, typo-ridden, and ambiguous prompts. Directly analyzing these raw logs is the most crucial first step to understanding how your product fails in the real world and where to focus quality improvements.
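To make that first step concrete, here is a small sketch for pulling a random sample of raw prompts for manual review; it assumes a JSONL log file with a "user_message" field, which is an illustrative schema, not a standard one.

```python
# A small sketch for sampling raw user prompts for manual review.
# Assumes a JSONL log file with a "user_message" field; adjust to your schema.
import json
import random

def sample_user_prompts(log_path: str, n: int = 50, seed: int = 0) -> list[str]:
    """Return n randomly sampled raw user messages, typos and all."""
    with open(log_path, encoding="utf-8") as f:
        prompts = [json.loads(line)["user_message"] for line in f if line.strip()]
    random.seed(seed)
    return random.sample(prompts, min(n, len(prompts)))

if __name__ == "__main__":
    for prompt in sample_user_prompts("chat_logs.jsonl"):
        print(repr(prompt))  # repr surfaces stray whitespace and odd characters
```

Reading even fifty unfiltered prompts usually reveals failure modes that a curated test set of well-formed questions never will.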
AI tools like ChatGPT can analyze traces for basic correctness but miss subtle product experience failures. A product manager's contextual knowledge is essential to catch issues like improper formatting for a specific channel (e.g., markdown in SMS) or user-experience failures an LLM would judge acceptable.
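Some of these channel-specific failures can be caught programmatically once a human has named them. The sketch below flags markdown artifacts in SMS-bound responses; the regex patterns are illustrative and deliberately naive, a complement to PM review rather than a replacement for it.

```python
# A naive sketch of a channel-formatting check: flag markdown artifacts in
# responses destined for SMS. The patterns are illustrative, not exhaustive.
import re

MARKDOWN_PATTERNS = [
    r"\*\*[^*]+\*\*",        # bold
    r"(?m)^#{1,6}\s",        # headings
    r"\[[^\]]+\]\([^)]+\)",  # links
    r"```",                  # code fences
    r"(?m)^\s*[-*]\s+",      # bullet lists
]

def has_markdown_in_sms(response_text: str) -> bool:
    """Return True if an SMS-bound response contains markdown formatting."""
    return any(re.search(p, response_text) for p in MARKDOWN_PATTERNS)

# This reply would pass an LLM "correctness" check but fail the channel check.
print(has_markdown_in_sms("**Your order** has shipped. See [tracking](https://example.com)"))  # True
```

Checks like this only exist because a person with product context noticed the failure in the first place; the automation just keeps it from regressing.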