Traditional hiring assessments that ban modern tools are obsolete. A better approach is to give candidates access to AI tools and ask them to complete a complex task in an hour. This tests their ability to leverage technology for productivity, not their ability to memorize information.

Related Insights

Formal AI competency frameworks are still emerging. In their place, innovative companies are assessing employee AI skills with concrete, activity-based targets such as "build three custom GPTs for your role" or "complete a specific certification," directly linking these achievements to performance reviews.

In an era where AI can assist with coding challenges, 10X's solution is to make its take-home assignments exceptionally difficult. The difficulty alone filters out the roughly 50% of candidates who never respond, leaving a much faster and more focused interview process for the elite few who pass.

Don't hire based on today's job description. Proactively run AI impact assessments to project how a role will evolve over the next 12-18 months. This allows you to hire for durable, human-centric skills and plan how to reallocate the 30%+ of the hire's future capacity that AI agents will free up.

Dr. Fei-Fei Li states she won't hire any software engineer who doesn't embrace collaborative AI tools. This isn't about the tools being perfect, but about what their adoption signals: a candidate's open-mindedness, ability to grow with new toolkits, and potential to "superpower" their own work.

In AI PM interviews, "vibe coding" isn't a technical test. Interviewers use it to evaluate your product thinking: how you structure prompts, the user insights you bring to each iteration, and your ability to define feedback loops, not your ability to write code.

Recognizing that providing tools is insufficient, LinkedIn is making "AI agency and fluency" a core part of its performance evaluation and calibration process. This formalizes the expectation that employees must actively use AI tools to succeed, moving adoption from a voluntary choice to a career necessity.

In rapidly evolving fields like AI, pre-existing experience can be a liability. The highest performers often possess high agency, energy, and learning speed, allowing them to adapt without needing to unlearn outdated habits.

For cutting-edge AI problems, innate curiosity and learning speed ("velocity") matter more than existing domain knowledge. As Karpathy has argued, a candidate with a track record of diving deep into complex topics, regardless of field, will outperform a skilled but less-driven specialist.

In a paradigm shift like AI, an experienced hire's knowledge can become obsolete. It's often better to hire a hungry junior employee. Their lack of preconceived notions, combined with a high learning velocity powered by AI tools, allows them to surpass seasoned professionals who must unlearn outdated workflows.

Instead of waiting for external reports, companies should develop their own AI model evaluations. By defining key tasks for specific roles and testing new models against them with standard prompts, businesses can create a relevant, internal benchmark.
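As a rough sketch of what such an internal benchmark might look like, the Python snippet below runs a fixed set of role-specific prompts against any model and scores the answers with a simple rubric. The `call_model` adapter, the example tasks, and the keyword-based scoring are all illustrative assumptions, not a prescribed implementation; a real benchmark would use actual work artifacts and richer grading.

```python
from dataclasses import dataclass

@dataclass
class EvalTask:
    role: str                 # the role this task represents
    prompt: str               # the standard prompt, kept fixed across models
    required_terms: list[str] # terms a useful answer should mention

# Illustrative tasks; replace with real tasks drawn from each role's work.
TASKS = [
    EvalTask(
        role="support",
        prompt="Draft a reply to a customer whose order arrived damaged.",
        required_terms=["apologize", "replacement", "refund"],
    ),
    EvalTask(
        role="analyst",
        prompt="Summarize last quarter's churn drivers in three bullets.",
        required_terms=["churn"],
    ),
]

def call_model(model: str, prompt: str) -> str:
    """Hypothetical adapter: wire in your provider's SDK here."""
    raise NotImplementedError("connect this to your model client")

def score(answer: str, task: EvalTask) -> float:
    """Fraction of required terms present: a deliberately crude rubric."""
    hits = sum(term.lower() in answer.lower() for term in task.required_terms)
    return hits / len(task.required_terms)

def run_benchmark(models: list[str]) -> dict[str, float]:
    """Run every task against every model with identical prompts."""
    results = {}
    for model in models:
        scores = [score(call_model(model, t.prompt), t) for t in TASKS]
        results[model] = sum(scores) / len(scores)
    return results
```

The key design point is holding the prompts constant: when a new model ships, rerunning `run_benchmark` gives a like-for-like comparison on the tasks your business actually cares about, rather than on a public leaderboard's.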