When using prioritization frameworks like RICE for AI-generated ideas, human oversight is crucial. The 'Confidence' score for a feature ideated by AI should be intentionally set low. This forces the team to conduct real user testing before the score can rise, preventing unverified AI suggestions from being fast-tracked.
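For concreteness, RICE scores a feature as (Reach × Impact × Confidence) / Effort, so Confidence scales priority directly. A minimal sketch with hypothetical numbers (none of the figures below come from the source):

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE priority: (Reach * Impact * Confidence) / Effort."""
    return reach * impact * confidence / effort

# Hypothetical inputs, for illustration only.
validated = rice_score(reach=2000, impact=2, confidence=0.8, effort=4)   # idea backed by user testing
ai_ideated = rice_score(reach=5000, impact=2, confidence=0.3, effort=4)  # AI suggestion, not yet verified

print(validated)   # 800.0
print(ai_ideated)  # 750.0: bigger reach, but the deliberately low Confidence keeps it from jumping the queue
```

Only once user testing justifies raising Confidence does the AI-ideated feature earn a higher slot.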

Related Insights

Use a two-axis framework to determine if a human-in-the-loop is needed. If the AI is highly competent and the task is low-stakes (e.g., internal competitor tracking), full autonomy is fine. For high-stakes tasks (e.g., customer emails), human review is essential, even if the AI is good.
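A sketch of that two-axis rule; the names and example tasks are illustrative, since the source defines only the axes:

```python
from enum import Enum

class Oversight(Enum):
    FULL_AUTONOMY = "no review needed"
    SPOT_CHECK = "sample outputs periodically"
    HUMAN_REVIEW = "human sign-off required"

def required_oversight(ai_competent: bool, high_stakes: bool) -> Oversight:
    """Two-axis rule of thumb: stakes override competence."""
    if high_stakes:
        return Oversight.HUMAN_REVIEW    # e.g., customer emails, even with a strong model
    if ai_competent:
        return Oversight.FULL_AUTONOMY   # e.g., internal competitor tracking
    return Oversight.SPOT_CHECK          # low stakes, weak model: keep an eye on it

print(required_oversight(ai_competent=True, high_stakes=True))  # Oversight.HUMAN_REVIEW
```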

For AI products, the quality of the model's response is paramount. Before building a full feature (MVP), first validate that you can achieve a 'Minimum Viable Output' (MVO). If the core AI output isn't reliable and desirable, don't waste time productizing the feature around it.
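One way to make that check concrete is a small evaluation gate run before any feature work; the threshold and sample below are assumptions for illustration:

```python
def mvo_gate(ratings: list[bool], threshold: float = 0.85) -> bool:
    """Green-light the MVP only if the raw model output already clears the bar.

    `ratings` are human judgments ("reliable and desirable?") on a sample of
    real inputs; the 0.85 threshold is an illustrative choice, not a standard.
    """
    return sum(ratings) / len(ratings) >= threshold

sample = [True, True, False, True, True, True, False, True, True, True]
print(mvo_gate(sample))  # False at 80%: improve the core output before productizing around it
```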

Product managers should leverage AI to get 80% of the way on tasks like competitive analysis, but must apply their own intellect for the final 20%. Fully abdicating responsibility to AI can lead to factual errors and hallucinations that, if used to build a product, result in costly rework and strategic missteps.

Instead of waiting for AI models to be perfect, design your application from the start to allow for human correction. This pragmatic approach acknowledges AI's inherent uncertainty and allows you to deliver value sooner by leveraging human oversight to handle edge cases.
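One common shape for this is to make correction part of the data model itself, so a reviewer's fix always wins over the AI draft. A minimal sketch, with all names invented for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AssistedField:
    """A value the AI drafts but a human can always overwrite."""
    ai_value: str
    human_value: Optional[str] = None  # set when a reviewer corrects the draft

    @property
    def value(self) -> str:
        # The human correction, when present, takes precedence over the AI draft.
        return self.human_value if self.human_value is not None else self.ai_value

summary = AssistedField(ai_value="Customer asked for a refund.")
summary.human_value = "Customer asked for an exchange, not a refund."
print(summary.value)  # the corrected text ships; the model never had to be perfect
```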

Don't wait for AI to be perfect. The correct strategy is to apply current AI models—which are roughly 60-80% accurate—to business processes where that level of performance is sufficient for a human to then review and bring to 100%. Chasing perfection in-house is a waste of resources given the pace of model improvement.
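A back-of-envelope version of that argument, where every number is an assumption chosen only to show the shape of the math:

```python
items = 100
accurate_share = 0.7                        # within the 60-80% band above
write_min, review_min, fix_min = 10, 2, 6   # minutes per item; illustrative guesses

from_scratch = items * write_min  # 1000 minutes, all human
with_ai = round(items * review_min + items * (1 - accurate_share) * fix_min)  # 200 review + 180 rework

print(from_scratch, with_ai)  # 1000 vs 380: review still delivers 100% quality at ~40% of the effort
```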

AI tools can handle administrative and analytical tasks for product managers, like summarizing notes or drafting stories. However, they lack the essential human elements of empathy, nuanced judgment, and creativity required to truly understand user problems and make difficult trade-off decisions.

AI's unpredictability requires more than just better models. Product teams must work with researchers on training data and specific evaluations for sensitive content. Simultaneously, the UI must clearly differentiate between original and AI-generated content to facilitate effective human oversight.
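On the UI side, the simplest mechanism is to carry provenance on every piece of content so the interface can badge AI-generated spans; the types below are illustrative:

```python
from dataclasses import dataclass
from enum import Enum

class Source(Enum):
    ORIGINAL = "original"
    AI_GENERATED = "ai_generated"

@dataclass
class ContentBlock:
    text: str
    source: Source  # provenance travels with the content rather than being inferred later

def render(block: ContentBlock) -> str:
    """Badge AI-generated content so reviewers can tell it apart at a glance."""
    badge = " [AI-generated]" if block.source is Source.AI_GENERATED else ""
    return block.text + badge

print(render(ContentBlock("Q3 revenue grew 12%.", Source.AI_GENERATED)))
# Q3 revenue grew 12%. [AI-generated]
```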

To effectively leverage AI, treat it as a new team member. Take its suggestions seriously and give it the best opportunity to contribute. However, just like with a human colleague, you must apply a critical filter, question its output, and ultimately remain accountable for the final result.

AI models tend to be overly optimistic. To get a balanced market analysis, explicitly instruct AI research tools like Perplexity to act as a "devil's advocate." This uncovers risks, challenges assumptions, and makes it easier for product managers to say "no" to weak ideas quickly.
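A sketch of that framing; the wording is an illustration, not a prompt taken from the source or from Perplexity's documentation:

```python
idea = "Add an AI-powered onboarding checklist to the mobile app"  # hypothetical idea

# Devil's-advocate framing: explicitly forbid the model's optimistic default.
prompt = f"""Act as a devil's advocate reviewing this product idea:

{idea}

Do NOT sell me on it. Instead:
1. Give the three strongest reasons this will fail.
2. Name the riskiest unvalidated assumption behind it.
3. Cite competitors or market signals that undercut it."""

print(prompt)  # paste into Perplexity or any research assistant as-is
```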

The temptation to use AI to rapidly generate, prioritize, and document features without deep customer validation poses a significant risk. This can scale the "feature factory" problem, allowing teams to build the wrong things faster than ever, making human judgment and product thinking paramount.
