To assess a candidate's ability to use AI as a thinking partner, have them solve a problem with an LLM. The key is observing their follow-up prompts and their ability to guide the AI step-by-step, rather than just accepting the initial output.
Standard benchmarks fall short for multi-turn AI agents. A new approach is the 'job interview eval': give an agent a deliberately underspecified problem, then grade it not just on the solution but on its ability to ask clarifying questions and adapt to changing requirements, mirroring how a human developer is evaluated.
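A minimal sketch of what such a harness could look like, assuming a generic `ask_agent` callable (an illustrative placeholder for whatever model API you use) and crude keyword matching for the grading step:

```python
from typing import Callable

# Illustrative placeholder: takes the conversation so far, returns the
# agent's next message. Swap in your real LLM client here.
AskAgent = Callable[[list[dict]], str]

def job_interview_eval(ask_agent: AskAgent, max_turns: int = 6) -> dict:
    """Grade an agent on its process, not just its final artifact."""
    # Deliberately underspecified: no data format, audience, or cadence given.
    convo = [{"role": "interviewer",
              "content": "Build us a tool that reports on our sales data."}]
    # Hidden requirements the agent should surface by asking questions.
    clarifications = {"format": "The data is CSV exports from our billing system.",
                      "audience": "Reports go to non-technical executives."}
    surfaced = []

    for turn in range(max_turns):
        reply = ask_agent(convo)
        convo.append({"role": "agent", "content": reply})
        # Crude keyword check: credit the agent for probing each unknown.
        for topic, answer in clarifications.items():
            if topic in reply.lower() and topic not in surfaced:
                surfaced.append(topic)
                convo.append({"role": "interviewer", "content": answer})
        if turn == max_turns // 2:
            # Shift requirements mid-task, as a real stakeholder would.
            convo.append({"role": "interviewer",
                          "content": "Change of plans: weekly, not daily, reports."})

    return {"clarifying_questions_surfaced": surfaced,
            "final_message": convo[-1]["content"],
            "transcript": convo}
```

The design point is that the transcript, not the final artifact, is the graded object: an agent that never asks about format or audience loses credit even if its last answer looks polished.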
Instead of asking AI for answers, command it to ask you questions. Use the "Context, Role, Interview, Task" (CRIT) framework to turn AI into a thought partner. The "Interview" step, where AI probes for deeper context, is the key to generating non-obvious, high-value strategies.
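One way to operationalize this is a reusable prompt template; `crit_prompt` below is an illustrative sketch, not an official implementation of the framework:

```python
def crit_prompt(context: str, role: str, task: str, num_questions: int = 5) -> str:
    """Assemble a Context-Role-Interview-Task prompt. The Interview step
    instructs the model to gather missing context before answering."""
    return "\n\n".join([
        f"Context: {context}",
        f"Role: Act as {role}.",
        # Interview: the model must probe before producing anything.
        f"Interview: Before giving any recommendations, ask me "
        f"{num_questions} questions, one at a time, about the constraints, "
        f"goals, and risks you would need to understand. Wait for my answers.",
        f"Task: Only after the interview, {task}",
    ])

print(crit_prompt(
    context="We're a 12-person B2B startup whose growth has been flat for two quarters.",
    role="a skeptical growth advisor",
    task="propose three non-obvious growth strategies, each with its trade-offs.",
))
```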
Rather than creating assessments that prohibit AI use, hiring managers should embrace it. A candidate's ability to leverage tools like ChatGPT to complete a project is a more accurate predictor of their future impact than their ability to perform tasks without them.
In AI PM interviews, 'vibe coding' isn't a technical test. Interviewers evaluate your product thinking through how you structure prompts, the user insights you bring to iterations, and your ability to define feedback loops, not your ability to write code.
Move beyond simple prompts by designing detailed interactions with specific AI personas, like a "critic" or a "big thinker." This allows teams to debate concepts back and forth, transforming AI from a task automator into a true thought partner that amplifies rigor.
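A sketch of how persona-driven debate might be wired up, again assuming a generic `complete` callable as a stand-in for your model client; the persona texts are illustrative:

```python
from typing import Callable

# Illustrative placeholder: (system_prompt, message) -> model reply.
Complete = Callable[[str, str], str]

PERSONAS = {
    "critic": ("You are a rigorous critic. Attack the weakest assumption "
               "in the idea below and spell out its failure mode."),
    "big thinker": ("You are an expansive strategist. Take the idea below "
                    "and push it an order of magnitude bigger."),
}

def persona_debate(complete: Complete, idea: str, rounds: int = 2) -> list[str]:
    """Alternate personas, each reacting to the latest version of the idea."""
    exchange, current = [], idea
    for _ in range(rounds):
        for name, system_prompt in PERSONAS.items():
            reply = complete(system_prompt, current)
            exchange.append(f"{name}: {reply}")
            current = reply  # the next persona responds to this critique/expansion
    return exchange
```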
To assess a product manager's AI skills, integrate AI into your standard hiring process rather than just asking theoretical questions. Expecting candidates to use AI tools in take-home case studies and analytical interviews tests for practical application and raises the quality bar.
Effective prompt engineering isn't a purely technical skill. It mirrors how we delegate tasks and ask questions to human coworkers. To improve AI collaboration, organizations must first improve interpersonal communication and listening skills among employees.
To build an AI-native team, shift the hiring process from reviewing resumes to evaluating portfolios of work. Ask candidates to demonstrate what they've built with AI, their favorite prompting techniques, and apps they wish they could create. This surfaces practical skill rather than rewarding credentials.
Since AI assistants make it easy for candidates to complete take-home coding exercises, simply evaluating the final product is no longer an effective screening method. The new best practice is to require candidates to build with AI and then explain their thought process, revealing their true engineering and problem-solving skills.
Traditional hiring assessments that ban modern tools are obsolete. A better approach is to give candidates access to AI tools and ask them to complete a complex task in an hour. This tests their ability to leverage technology for productivity, not their ability to memorize information.