Contrary to the perception that AI safety is dominated by seasoned PhDs, the talent pipeline is diverse in age and credentials. The MATS program's median fellow is 27, and a significant portion (20%) are undergraduates, while only 15% hold PhDs, indicating multiple entry points into the field.
Since modern AI is so new, no one has more than a few years of directly relevant experience, which levels the playing field. The suggested hiring strategy is to prioritize young, AI-native talent who learn quickly over senior engineers whose experience may transfer poorly. Dynamism and adaptability trump tenure.
AI safety organizations struggle to hire despite funding because their bar is exceptionally high. They need candidates who can quickly become research leads or managers, not just possess technical skills. This creates a bottleneck where many interested applicants with moderate experience can't make the cut.
For programs like MATS, a tangible research artifact—a paper, project, or work sample—is the most crucial signal for applicants. This practical demonstration of skill and research taste outweighs formal credentials, age, or breadth of literature knowledge in the highly competitive selection process.
A key to OpenAI's innovation is hiring young talent who grew up thinking natively about AI. These individuals "hold the model weights in their brains," enabling creative breakthroughs. The team behind the video model Sora, for instance, has a median age in the low twenties.
The MATS program demonstrates a high success rate in transitioning participants into the AI safety ecosystem. A remarkable 80% of its 446 alumni have secured ongoing roles in the field, including as independent researchers, highlighting the program's effectiveness as a career launchpad.
Ryan Kidd of MATS, a major AI safety talent pipeline, uses a median AGI timeline of 2033, drawn from prediction markets such as Metaculus, for strategic planning. This provides a concrete, data-driven anchor for how a key organization in the space views timelines, while still preparing for shorter, more dangerous scenarios.
When building core AI technology, prioritize hiring 'AI-native' recent graduates over seasoned veterans. These individuals often possess a fearless execution mindset and a foundational understanding of new paradigms that is critical for building from the ground up, countering the traditional wisdom of hiring for experience.
There's a significant disconnect between interest in AI safety and available roles. Applications to programs like MATS are growing more than 1.5x per year, and introductory courses see 370% annual growth, while the field itself grows at only about 25% per year, creating an increasingly competitive entry funnel.
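The widening of that funnel can be sketched with simple compounding. The starting counts below (1,000 applicants, 100 roles) are hypothetical placeholders; only the growth rates come from the figures above.

```python
# Illustrative projection of the AI safety entry funnel.
# Growth rates from the text: applications ~1.5x/year, field ~25%/year.
# Absolute starting numbers are invented for illustration only.

def project_funnel(applicants: float, roles: float, years: int,
                   app_growth: float = 1.5, field_growth: float = 1.25):
    """Yield (year, applicants, roles, acceptance_rate) per year."""
    for year in range(years + 1):
        yield year, applicants, roles, roles / applicants
        applicants *= app_growth   # applications compound at ~1.5x/year
        roles *= field_growth      # open roles compound at ~25%/year

# Hypothetical starting point: 1,000 applicants for 100 openings.
for year, apps, roles, rate in project_funnel(1000, 100, 5):
    print(f"year {year}: {apps:,.0f} applicants, {roles:,.0f} roles, "
          f"{rate:.1%} acceptance rate")
```

Even with both quantities growing, the acceptance rate falls year over year (here, from 10% to roughly 4% after five years), which is the sense in which the funnel tightens.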
As AI assistants lower the technical barrier for research, the bottleneck for progress is shifting from coding ("iterators") to management and scaling ("amplifiers"). People skills, management ability, and networking are becoming the most critical and in-demand traits for AI safety organizations.
MATS categorizes technical AI safety talent into three roles. "Connectors" create new research paradigms. "Iterators" are the hands-on researchers currently in highest demand. "Amplifiers" are the managers who scale teams, a role with rapidly growing importance.