MATS categorizes technical AI safety talent into three roles. "Connectors" create new research paradigms. "Iterators" are the hands-on researchers currently in highest demand. "Amplifiers" are the managers who scale teams, a role with rapidly growing importance.

Related Insights

Beyond traditional engineers using AI and non-technical "vibe coders," a third archetype is emerging: the "agentic engineer." This professional operates at a higher level of abstraction, managing AI agents that perform the programming rather than writing or even reading code themselves, a shift that reinvents the engineering skill set.

AI safety organizations struggle to hire despite ample funding because their bar is exceptionally high. They need candidates who can quickly become research leads or managers, not merely strong technicians. This creates a bottleneck where many interested applicants with moderate experience can't make the cut.

Simply hiring superstar "Galacticos" is an ineffective team-building strategy. A successful AI team requires a deliberate mix of three archetypes: visionaries who set direction, rigorous executors who ship product, and social "glue" who maintain team cohesion and morale.

As AI evolves from single-task tools to autonomous agents, the human role transforms. Instead of simply using AI, professionals will need to manage and oversee multiple AI agents, ensuring their actions are safe, ethical, and aligned with business goals, acting as a critical control layer.

As AI tools become operable via plain English, the key skill shifts from technical implementation to effective management. People managers excel at providing context, defining roles, giving feedback, and reporting on performance—all crucial for orchestrating a "team" of AI agents. Their skills will become more valuable than pure AI expertise.

Top-performing engineering teams are evolving from hands-on coding to a managerial role. Their primary job is to define tasks, kick off multiple AI agents in parallel, review plans, and approve the final output, rather than implementing the details themselves.

The traditional tech team structure of separate product, engineering, and design roles is becoming obsolete. AI startups favor small teams of "polymaths"—T-shaped builders who can contribute across disciplines. This shift values broad, hands-on capability over deep specialization for most early-stage roles.

Contrary to the perception that AI safety is dominated by seasoned PhDs, the talent pipeline is diverse in age and credentials. The MATS program's median fellow is 27, and a significant portion (20%) are undergraduates, while only 15% hold PhDs, indicating multiple entry points into the field.

As AI assistants lower the technical barrier for research, the bottleneck for progress is shifting from coding ("iterators") to management and scaling ("amplifiers"). People skills, management ability, and networking are becoming the most critical and in-demand traits for AI safety organizations.

AI will handle most routine tasks, reducing the number of average "doers." Those who remain will be either the absolute best in their craft or individuals leveraging AI for superhuman productivity. Everyone else must shift to "director" roles, focusing on strategy, orchestration, and interpreting AI output.