MATS categorizes technical AI safety talent into three roles. "Connectors" create new research paradigms. "Iterators" are the hands-on researchers currently in highest demand. "Amplifiers" are the managers who scale teams, a role with rapidly growing importance.
As AI assistants lower the technical barrier to research, the bottleneck for progress is shifting from coding ("Iterators") to management and scaling ("Amplifiers"). People skills, management ability, and networking are becoming the most critical and in-demand traits for AI safety organizations.
Technical research is vital for governance because it provides concrete artifacts for policymakers. Demonstrations and evaluations that show dangerous AI behaviors make abstract risks tangible and give policymakers a clear target for regulation, a point that echoes advice from figures like Jake Sullivan.
For programs like MATS, a tangible research artifact—a paper, project, or work sample—is the most important signal an applicant can provide. This practical demonstration of skill and research taste outweighs formal credentials, age, or breadth of literature knowledge in a highly competitive selection process.
Access to frontier models is not a prerequisite for impactful AI safety research, particularly in interpretability. Open-source models like Llama or Qwen are now powerful enough ("above the waterline") to enable world-class research, democratizing the field beyond just the major labs.
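As a rough illustration of why open-weight models are enough for much interpretability work, here is a minimal sketch that pulls per-layer hidden states from a small open model via the Hugging Face transformers library. The specific model name, prompt, and printed statistic are illustrative choices, not details from the source.

```python
# Minimal sketch: extracting per-layer activations from an open-weight model
# for interpretability-style analysis. Model, prompt, and metric are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B"  # any small open-weight model works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# hidden_states is a tuple: (embeddings, layer_1, ..., layer_N),
# each of shape (batch, sequence_length, hidden_size).
for layer_idx, h in enumerate(outputs.hidden_states):
    print(f"layer {layer_idx}: mean activation norm = {h.norm(dim=-1).mean():.2f}")
```

Everything above runs on a laptop with freely downloadable weights, which is the practical sense in which open models are "above the waterline" for this kind of research.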
Contrary to the perception that AI safety is dominated by seasoned PhDs, the talent pipeline is diverse in age and credentials. The MATS program's median fellow is 27, and a significant portion (20%) are undergraduates, while only 15% hold PhDs, indicating multiple entry points into the field.
The MATS program demonstrates a high success rate in transitioning participants into the AI safety ecosystem. A notable 80% of its 446 alumni have secured permanent positions in the field, a count that includes independent researchers, highlighting the program's effectiveness as a career launchpad.
Ryan Kidd argues that AI safety and capabilities work are nearly impossible to separate. Safety techniques like RLHF make models more useful and steerable, which in turn accelerates demand for more powerful "engines," leaving little room for pure "safety-only" research in practice.
Ryan Kidd of MATS, a major AI safety talent pipeline, uses a 2033 median AGI timeline drawn from forecasting platforms like Metaculus for strategic planning. This provides a concrete, data-driven anchor for how a key organization in the space views timelines, while still preparing for shorter, more dangerous scenarios.
Working on AI safety at major labs like Anthropic or OpenAI does not come with a salary penalty. These roles are compensated at the same top-tier rates as capabilities-focused positions, with mid-level and senior researchers likely earning over $1 million, effectively eliminating any financial "alignment tax."
Research with long timelines (e.g., a "2063 scenario") is still worth pursuing, as these technical plans can be compressed into a short period by future AI assistants. Seeding these directions now raises the "waterline of understanding" for future AI-accelerated alignment efforts, making them viable even on shorter timelines.
AI safety organizations struggle to hire despite funding because their bar is exceptionally high. They need candidates who can quickly become research leads or managers, not just possess technical skills. This creates a bottleneck where many interested applicants with moderate experience can't make the cut.
There's a significant disconnect between interest in AI safety and available roles. Applications to programs like MATS are growing at over 1.5x per year and intro courses are seeing 370% yearly growth, while the field itself is expanding at a much slower 25% per year, creating an increasingly competitive entry funnel.
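To make the compounding effect concrete, here is a back-of-the-envelope sketch using the growth rates quoted above; the starting applicant and role counts are invented purely for illustration.

```python
# Back-of-the-envelope: how a gap between applicant growth (1.5x/year) and
# field growth (25%/year) compounds into an ever more selective entry funnel.
# Starting numbers are hypothetical, used only to show the trend.
applicants = 1000.0   # hypothetical applicants in year 0
openings = 100.0      # hypothetical new roles in year 0

for year in range(6):
    ratio = applicants / openings
    print(f"year {year}: ~{applicants:.0f} applicants for ~{openings:.0f} roles "
          f"({ratio:.1f} applicants per role)")
    applicants *= 1.5   # applications growing over 1.5x annually
    openings *= 1.25    # field growing ~25% per year
```

Under these assumed rates, the applicants-per-role ratio grows by roughly 20% each year, which is the sense in which the funnel becomes increasingly competitive.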
