Candidates complete an exhaustive "friction logging" exercise, documenting the pain points and user-experience issues they encounter in a product. This practical test is a primary tool for evaluating a candidate's product sense and problem-identification skills, valued almost as much as the interview itself.
New AI tools often have flawed user experiences. Instead of just getting frustrated, create a detailed product breakdown with recommendations for improvement. Sending this to the company serves as a powerful "warm intro," showcasing your product skills and providing value before you're hired.
With LLMs making remote coding tests unreliable, the new standard is face-to-face interviews focused on practical problems. Instead of abstract algorithm puzzles, candidates are asked to fix failing tests or debug real code, assessing real-world problem-solving skills that are much harder to fake.
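To make the format concrete, an exercise in this style might look like the sketch below; the function, the bug, and the failing test are invented for illustration, not drawn from any real interview.

```python
# An invented example of the "fix the failing test" format: the candidate is
# handed a function with a subtle off-by-one bug and a test that exposes it.
def moving_average(values: list[float], window: int) -> list[float]:
    # Bug: the range stops one element early, so the final window is dropped.
    # The fix is range(len(values) - window + 1).
    return [
        sum(values[i:i + window]) / window
        for i in range(len(values) - window)
    ]

def test_moving_average():
    # Fails as written: the function returns [1.5, 2.5] and drops 3.5.
    assert moving_average([1, 2, 3, 4], 2) == [1.5, 2.5, 3.5]
```

What interviewers watch for is not the one-line fix but the process: reading the test, reproducing the failure, and reasoning aloud about the boundary condition.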
In AI PM interviews, "vibe coding" isn't a technical test. Interviewers evaluate your product thinking through how you structure prompts, the user insights you bring to each iteration, and your ability to define feedback loops, not your ability to write code.
To simulate interview coaching, feed your written answers to case-study questions into an LLM. Prompt it to score you against a specific rubric (structured thinking, user focus, etc.), identify the exact weak phrases, explain why they fall short, and suggest a stronger approach, giving you structured, actionable feedback.
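A minimal sketch of that coaching loop, assuming the OpenAI Python SDK; the model name, rubric dimensions, and prompt wording are illustrative placeholders, not a prescribed setup.

```python
# Hypothetical rubric-scoring loop using the OpenAI Python SDK (openai>=1.0);
# the rubric and model are stand-ins for whatever your target role requires.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = ["structured thinking", "user focus", "metrics fluency", "prioritization"]

def score_answer(question: str, answer: str) -> str:
    prompt = (
        "You are an interview coach for product manager case studies.\n"
        f"Question: {question}\n"
        f"Candidate answer: {answer}\n\n"
        "Score the answer 1-5 on each rubric dimension: "
        + ", ".join(RUBRIC) + ".\n"
        "For each dimension, quote the exact weak phrases, explain why they "
        "fall short, and suggest a stronger way to make the same point."
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any capable chat model works
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(score_answer(
    "How would you improve onboarding for a B2B analytics product?",
    "I would talk to users and then ship some quick wins...",
))
```

The value is less in the scores than in forcing the model to quote your own weak phrasing back to you, which makes the feedback concrete enough to act on.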
To assess a product manager's AI skills, integrate AI into your standard hiring process rather than just asking theoretical questions. Expect candidates to use AI tools in take-home case studies and analytical interviews to test for practical application and raise the quality bar.
A common hiring mistake is prioritizing a conversational "vibe check" over assessing actual skills. A much better approach is to give candidates a project that simulates the job's core responsibilities, providing a direct and clean signal of their capabilities.
A common red flag in AI PM interviews is when candidates, particularly those from a machine learning background, jump directly to technical solutions. They fail by neglecting core PM craft: defining the user ("the who"), the problem ("the why"), and the metrics for success, which must come before any discussion of algorithms.
Upload interview transcripts and the job description into an AI tool. Prompt it to define the top criteria for the role and to rate each candidate's transcript against them. This provides an objective analysis that counteracts personal affinity bias and surfaces details missed during the live conversation.
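A hedged sketch of that workflow, again assuming the OpenAI Python SDK; the file layout, the two-step prompt design, and the model are assumptions for illustration rather than a specific tool's API.

```python
# Hypothetical two-step screen: derive criteria from the job description,
# then rate every transcript against the same criteria for consistency.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

job_description = Path("job_description.txt").read_text()  # assumed file layout

# Step 1: have the model distill the top criteria for the role.
criteria = ask(
    "From this job description, list the top five evaluation criteria for "
    f"the role, one per line:\n\n{job_description}"
)

# Step 2: rate each candidate against those fixed criteria.
for transcript in sorted(Path("transcripts").glob("*.txt")):
    rating = ask(
        f"Evaluation criteria:\n{criteria}\n\n"
        f"Interview transcript:\n{transcript.read_text()}\n\n"
        "Rate the candidate 1-5 on each criterion, cite supporting quotes "
        "from the transcript, and flag anything a live interviewer might miss."
    )
    print(f"--- {transcript.name} ---\n{rating}\n")
```

Fixing the criteria once and reusing them for every transcript is what counteracts affinity bias: each candidate is graded against the role, not against the interviewer's rapport with them.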
Since AI assistants make it easy for candidates to complete take-home coding exercises, simply evaluating the final product is no longer an effective screening method. The new best practice is to require candidates to build with AI and then explain their thought process, revealing their true engineering and problem-solving skills.
To build a truly product-focused company, make the final interview for every role a product-management-style assessment: ask all candidates to suggest improvements to your product. This filters for shared values and weeds out those who aren't user-obsessed, regardless of their function.