When AI safety researchers leave companies like OpenAI over safety concerns, they post deliberately vague departure messages, not for drama but to avoid violating strict non-disparagement agreements. Breaking those agreements could force them to forfeit millions of dollars in vested equity.
Leaked exchanges show OpenAI leadership felt "betrayed" when early investor Reid Hoffman started rival Inflection AI. This prompted them to consider asking new investors for a "soft promise" not to fund competitors, a highly unusual and restrictive term in venture capital.
The constant shuffling of key figures among OpenAI, Anthropic, and Google highlights that the most valuable asset in the AI race is a small pool of elite researchers. These individuals can switch allegiances at any moment for better pay or more interesting projects, creating immense instability even for the best-funded companies.
When Thinking Machines' CTO departed for OpenAI, the company cited "unethical conduct" as the reason. Insiders speculate this was either a "snaky PR move" or a "character assassination leak" designed to control the narrative as talent poaching among AI labs intensifies.
The risk of AI companionship isn't just user behavior; it's also corporate inaction. Companies like OpenAI have developed classifiers to detect when users are spiraling into delusion or emotional distress, but evidence suggests this safety tooling is left "on the shelf" to maximize engagement.
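A minimal sketch of what that failure mode can look like, assuming the classifier exists but is gated off by default. Everything here is hypothetical: `distress_score`, the marker phrases, the 0.7 threshold, and the `safety_enabled` flag are invented for illustration, and nothing here reflects how OpenAI's actual tooling works.

```python
# Hypothetical sketch of "safety tooling left on the shelf": a distress
# classifier that exists but is never invoked in the default serving path.
# All names, phrases, and thresholds are invented for illustration.

def distress_score(conversation: list[str]) -> float:
    """Stand-in for a trained classifier scoring delusion/distress risk in [0, 1]."""
    markers = ("nobody else understands", "you're the only one", "they're watching me")
    hits = sum(any(m in msg.lower() for m in markers) for msg in conversation)
    return min(1.0, hits / 3)

def serve_reply(conversation: list[str], model_reply: str,
                safety_enabled: bool = False) -> str:
    """Return the model's reply, rerouting to a supportive message only if the gate is on."""
    if safety_enabled and distress_score(conversation) > 0.7:
        return ("It sounds like you're going through a lot right now. "
                "It may help to talk to someone you trust or a professional.")
    return model_reply  # default path: the check never fires

chat = ["they're watching me", "you're the only one I can talk to",
        "nobody else understands"]
print(serve_reply(chat, "Tell me more!"))                       # classifier never runs
print(serve_reply(chat, "Tell me more!", safety_enabled=True))  # gate fires
```

The point is the default argument: unless someone flips `safety_enabled`, the classifier is dead code in the serving path, which is what "left on the shelf" amounts to.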
In the hyper-competitive AI talent market, companies like OpenAI are dropping the standard one-year vesting cliff. With equity packages worth millions, top candidates are unwilling to risk getting nothing if they leave before 12 months, forcing a shift in compensation norms.
The drama at Thinking Machines, where co-founders were fired and immediately rejoined OpenAI, shows the extreme volatility of AI startups. Top talent holds immense leverage, and personal disputes can quickly unravel a company because key players have guaranteed soft landings at established labs, which makes retention incredibly difficult.
Many top AI CEOs openly acknowledge that their work carries extinction-level risks, with some estimating a 25% chance of catastrophe. Yet they feel powerless to stop the race: if any one CEO paused for safety, investors would simply replace them with someone willing to push forward. The result is a systemic trap in which everyone sees the danger but no one can afford to hit the brakes.
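That trap has the structure of a classic prisoner's dilemma, and a tiny payoff model makes the logic explicit. This is a sketch with invented numbers; only the structure matters: racing strictly dominates pausing for each lab, yet mutual racing leaves both worse off than mutual pausing.

```python
# Illustrative two-lab race game (payoffs invented; the structure is the point).
# Keys are (Lab A's choice, Lab B's choice); values are (A's payoff, B's payoff).
payoffs = {
    ("pause", "pause"): (3, 3),   # coordinated slowdown: best shared outcome
    ("pause", "race"):  (0, 4),   # the pausing CEO gets replaced, the rival wins
    ("race",  "pause"): (4, 0),
    ("race",  "race"):  (1, 1),   # everyone races despite the shared danger
}

def best_response(their_choice: str, me: int) -> str:
    """Pick the action that maximizes this lab's payoff given the rival's choice."""
    def payoff(mine: str) -> int:
        pair = (mine, their_choice) if me == 0 else (their_choice, mine)
        return payoffs[pair][me]
    return max(("pause", "race"), key=payoff)

for theirs in ("pause", "race"):
    print(f"If the rival plays {theirs!r}, best response: {best_response(theirs, me=0)!r}")
# Prints "race" both times: racing dominates, so (race, race) is the equilibrium
# even though (pause, pause) pays everyone more. No single CEO can afford to brake.
```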
Top AI labs face a retention dilemma: restrict employee equity liquidity, and top talent leaves for higher salaries elsewhere; provide too much liquidity, and newly wealthy researchers cash out to found their own competing startups. The result is constant churn that seeds the ecosystem with new rivals.
OpenAI previously had highly restrictive exit agreements that could claw back an employee's vested equity if they refused to sign a non-disparagement clause. This practice highlights how companies can use financial leverage to silence former employees, a tactic that became particularly significant during the CEO ousting controversy.
The "golden era" of big tech AI labs publishing open research is over. As firms realize the immense value of their proprietary models and talent, they are becoming as secretive as trading firms. The culture is shifting toward protecting IP, with top AI researchers even discussing non-competes, once a hallmark of finance.