For one hiring project, Artemis didn't review code at all. They asked candidates to build a functional website and share the live URL, explicitly not caring how it was built. This shifted the assessment from coding proficiency to the more crucial startup skill: the ability to build and deliver a working result.

Related Insights

When hiring senior engineers, the decisive test is whether they can build. This means assessing their ability to take a real-world business problem—like designing a warehouse system—and translate it into a tangible technical solution. This skill separates true builders from theoretical programmers.

To find talent capable of managing an AI stack, traditional interviews are insufficient. A better test is to provide candidates with platform credits (e.g., Replit) and challenge them to build a functional agent that automates a real business task, proving their practical skills.

In AI PM interviews, 'vibe coding' isn't a technical test. Interviewers evaluate your product thinking through how you structure prompts, the user insights you bring to iterations, and your ability to define feedback loops, not your ability to write code.

A common hiring mistake is prioritizing a conversational 'vibe check' over assessing actual skills. A far better approach is to give candidates a project that simulates the job's core responsibilities, producing a direct, clean signal of their capabilities.

To build an AI-native team, shift the hiring process from reviewing resumes to evaluating portfolios of work. Ask candidates to demonstrate what they've built with AI, their favorite prompt techniques, and apps they wish they could create. This reveals practical skill over credentialism.

The most promising junior candidates are those who demonstrate self-learning by creating things they weren't asked to do, like a weekend app project. This signal of intrinsic motivation is more valuable than perfectly completed assignments.

When hiring, focus on what a person has created, not their stated attributes or background. A great "invention" (a project, a piece of writing, code) is the strongest signal of a great "inventor." This shifts the focus from potential to proven output, as Charlie Munger advised.

Since AI assistants make it easy for candidates to complete take-home coding exercises, simply evaluating the final product is no longer an effective screening method. The new best practice is to require candidates to build with AI and then explain their thought process, revealing their true engineering and problem-solving skills.

To filter for a bias for action, DoorDash gave candidates a work project: acquire 1,000 customers with $20. The impossible goal wasn't the point; the test was designed to see what candidates would *do*. Their creative and scrappy attempts revealed far more about their mindset than a traditional interview could.

Traditional hiring assessments that ban modern tools are obsolete. A better approach is to give candidates access to AI tools and ask them to complete a complex task in an hour. This tests their ability to leverage technology for productivity, not their ability to memorize information.

Test a Candidate's Ability to Ship, Not Just Their Ability to Code | RiffOn