For elite AI researchers who are already wealthy, extravagant salaries are less compelling than a company's mission. Many departures are driven by misaligned values or a lack of faith in leadership, not by higher paychecks.
The constant shuffling of key figures between OpenAI, Anthropic, and Google highlights that the most valuable asset in the AI race is a small group of elite researchers. These individuals can easily switch allegiances for better pay or projects, creating immense instability for even the most well-funded companies.
Top AI labs face a difficult talent problem: if they restrict employee equity liquidity, top talent leaves for higher salaries. If they provide too much liquidity, newly wealthy researchers leave to found their own competing startups, creating a constant churn that seeds the ecosystem with new rivals.
As OpenAI and Anthropic gear up to go public, the pressure to generate profit is mounting. The shift from pure research to building ad-driven commercial products creates a culture clash, and engineers who joined for loftier goals are quitting in disillusionment.
Once financial needs are met, top engineers are motivated by meaning and creativity, not incremental pay bumps. To retain them, leaders must create an environment where R&D teams feel they are genuinely innovating, beyond just executing a quarterly roadmap. This sense of mission is the key differentiator.
After reportedly turning down a $1.5B offer from Meta to stay at Thinking Machines, the startup he co-founded, Andrew Tulloch was allegedly lured back to Meta with a $3.5B package. The episode demonstrates the hyper-inflated and rapidly escalating cost of acquiring top-tier AI talent, where even principled "missionaries" have a mercenary price.
The very best engineers optimize for their most precious asset: their time. They are less motivated by competing salary offers and more by the quality of the team, the problem they're solving, and the agency to build something meaningful without becoming a "cog" in a machine.
The "Valinor" metaphor for top AI talent has evolved. It once meant leaving big labs for lucrative startups. Now, as talent returns to incumbents like OpenAI with massive pay packages, "Valinor" represents the safety and resources of the established players.
Working on AI safety at major labs like Anthropic or OpenAI does not come with a salary penalty. These roles are compensated at the same top-tier rates as capabilities-focused positions, with mid-level and senior researchers likely earning over $1 million, effectively eliminating any financial "alignment tax."
Despite Meta offering nine-figure bonuses to retain top AI employees, its chief AI scientist is leaving to launch his own startup. This underscores that in a hyper-competitive field like AI, the potential upside and autonomy of being a founder can be more compelling than even the most extravagant corporate retention packages.
The CEO of ElevenLabs recounts a negotiation in which a research candidate wanted to maximize their cash compensation over three years. Their rationale: they believed AGI would arrive within that timeframe, rendering their own highly specialized job, and potentially all human jobs, obsolete.