AI safety organizations struggle to hire despite having funding because their bar is exceptionally high. They need candidates who can quickly step into research-lead or management roles, not merely candidates with solid technical skills. This creates a bottleneck in which many interested applicants with moderate experience can't make the cut.
Since modern AI is so new, no one has more than a few years of directly relevant experience, which levels the playing field. The best hiring strategy is to prioritize young, AI-native talent who learn quickly over senior engineers whose experience may be less relevant. Dynamism and adaptability trump tenure.
For programs like MATS, a tangible research artifact (a paper, project, or work sample) is the most important signal an applicant can provide. This practical demonstration of skill and research taste outweighs formal credentials, age, or breadth of literature knowledge in the highly competitive selection process.
The intense talent war in AI is hyper-concentrated. All major labs are competing for the same cohort of roughly 150-200 globally known, elite researchers who are seen as capable of making fundamental breakthroughs, creating an extremely competitive and visible talent market.
Theoretical knowledge is now just a prerequisite, not the key to getting hired in AI. Companies demand candidates who can demonstrate practical, day-one skills in building, deploying, and maintaining real, scalable AI systems. The ability to build is the new currency.
The primary bottleneck for successful AI implementation in large companies is not access to technology but a critical skills gap. Enterprises are equipping their existing, often unqualified, workforce with sophisticated AI tools—akin to giving a race car to an amateur driver. This mismatch prevents them from realizing AI's full potential.
While compute and capital are often cited as AI bottlenecks, the most significant limiting factor is the lack of human talent. There is a fundamental shortage of AI practitioners and data scientists, a gap that current university output and immigration policies are failing to fill, making expertise the most constrained resource.
There's a significant disconnect between interest in AI safety and available roles. Applications to programs like MATS are growing at more than 1.5x per year, and intro courses are seeing roughly 370% yearly growth, while the field itself grows at a much slower 25% per year, creating an increasingly competitive entry funnel.
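To make the compounding gap concrete, here is a minimal back-of-the-envelope sketch, assuming the stated growth rates hold steady (applications ~1.5x per year, field roles ~25% per year) and normalizing both to 1.0 today; the absolute figures are illustrative, not drawn from the source.

```python
# Illustrative sketch: how the applicant-to-role ratio compounds under the
# stated growth rates. Both series are normalized to 1.0 in year 0; the
# rates (1.5x and 1.25x) are the only figures taken from the text above.
applications, roles = 1.0, 1.0
for year in range(1, 6):
    applications *= 1.5   # applications grow ~1.5x per year
    roles *= 1.25         # the field grows ~25% per year
    print(f"year {year}: applicants per role is about "
          f"{applications / roles:.2f}x today's level")

# After 5 years the ratio is (1.5 / 1.25)**5, roughly 2.5x, i.e. the entry
# funnel is ~2.5x more competitive even before counting intro-course growth.
```

The point of the sketch is simply that a modest gap in annual growth rates (1.5x vs. 1.25x) compounds into a substantially tighter funnel within a few years.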
As AI assistants lower the technical barrier for research, the bottleneck for progress is shifting from coding ("iterators") to management and scaling ("amplifiers"). People skills, management ability, and networking are becoming the most critical and in-demand traits for AI safety organizations.
MATS categorizes technical AI safety talent into three roles. "Connectors" create new research paradigms. "Iterators" are the hands-on researchers currently in highest demand. "Amplifiers" are the managers who scale teams, a role with rapidly growing importance.
In rapidly evolving fields like AI, pre-existing experience can be a liability. The highest performers often possess high agency, energy, and learning speed, allowing them to adapt without needing to unlearn outdated habits.