
The VP of Search stated that technical interviews must now assess a candidate's ability to use AI coding assistants effectively. The goal is to measure not only problem-solving skills but also fluency with new tools that change how the job is performed, going beyond simply asking un-googleable questions.

Related Insights

The standard for being "AI fluent" has evolved past being a "prompt engineer." The new hiring benchmark is whether a candidate has recently brought a commercial AI tool into their organization. This demonstrates a practical, results-oriented ability to leverage AI, not just experiment with it.

To assess candidates accurately, interviews must be split in two. One part is a "Zero AI" test, complete with cheat detection, to evaluate raw problem-solving ability and foundational knowledge. The other is an "AI-Max" test to assess how skillfully they leverage AI tools, working as a "roboticist" who directs the machines.

Rather than creating assessments that prohibit AI use, hiring managers should embrace it. A candidate's ability to leverage tools like ChatGPT to complete a project is a more accurate predictor of their future impact than their ability to perform tasks without them.

A top VC's most important interview question is now "How have you used AI in your daily life this week?" The key is identifying individuals who are running towards the new technology and embracing change. This mindset is uncorrelated with age or seniority, making it the most critical hiring signal.

Dreamer's hiring process now evaluates an engineer's ability to work with and through AI coding agents. Beyond a basic coding screen, the main interview involves a project built using tools like Codex, testing the candidate's skill in prompting, reviewing, and orchestrating AI to be productive.

Dr. Fei-Fei Li states she won't hire any software engineer who doesn't embrace AI collaborative tools. This isn't about the tools' perfection, but what their adoption signals: a candidate's open-mindedness, ability to grow with new toolkits, and potential to "superpower" their own work.

Since coding agents can perform like junior engineers, the value of simply writing code quickly and correctly is diminishing. The new critical skill for engineers is the ability to judge AI-generated code, architect systems, and effectively steer agents to implement a high-level design.

Glean has updated its interview process to screen for "AI fluency" across all departments. They don't expect expertise. Instead, they test for curiosity and initiative by asking candidates how they've personally used AI, looking for a mindset that embraces new ways of working.

Since AI assistants make it easy for candidates to complete take-home coding exercises, evaluating the final product alone is no longer an effective screen. The new best practice is to require candidates to build with AI and then explain their thought process, revealing their true engineering and problem-solving skills.

Traditional hiring assessments that ban modern tools are obsolete. A better approach is to give candidates access to AI tools and ask them to complete a complex task in an hour. This tests their ability to leverage technology for productivity, not their ability to memorize information.