When Thinking Machines' CTO departed for OpenAI, the company cited "unethical conduct." Insiders speculate this is a "snaky PR move" or "character assassination leak" to control the narrative as talent poaching intensifies among AI labs.
An influx of Meta alumni, now 20% of staff, is causing internal friction. A "move fast" focus on user growth metrics is clashing with the original research-oriented culture that prioritized product quality over pure engagement, as exemplified by former CTO Mira Murati's reported reaction to growth-focused memos.
Companies like DeepMind, Meta, and SSI are using increasingly futuristic job titles like "Post-AGI Research" and "Safe Superintelligence Researcher." This isn't just semantics; it's a branding strategy to attract elite talent by framing their work as being on the absolute cutting edge, creating distinct sub-genres within the AI research community.
The anti-Altman coup, reportedly planned for a year yet executed without a PR strategy, raises a critical question in its failure: if these leaders could not manage a simple corporate power play, their competence to manage the far greater risks of artificial general intelligence is open to doubt.
The rhetoric around AI's existential risks can also be read as a competitive tactic. Some labs have used these narratives to scare off investors, regulators, and potential competitors, effectively "pulling up the ladder" to cement their market lead under the guise of safety.
OpenAI previously had highly restrictive exit agreements that could claw back an employee's vested equity if they refused to sign a non-disparagement clause. This practice highlights how companies can use financial leverage to silence former employees, a tactic that became particularly significant during the CEO ousting controversy.
The "golden era" of big tech AI labs publishing open research is over. As firms realize the immense value of their proprietary models and talent, they are becoming as secretive as trading firms. The culture is shifting toward protecting IP, with top AI researchers even discussing non-competes, once a hallmark of finance.
OpenAI isn't just hiring talent; it's systematically poaching senior people from nearly every relevant Apple hardware department—camera, silicon, industrial design, manufacturing. This broad talent acquisition signals a serious, comprehensive strategy to build a fully integrated consumer device to rival Apple's own ecosystem.
After reportedly turning down a $1.5B offer from Meta in order to stay at his startup Thinking Machines, Andrew Tulloch was allegedly lured back to Meta with a $3.5B package. This demonstrates the hyper-inflated and rapidly escalating cost of acquiring top-tier AI talent, where even principled "missionaries" have a mercenary price.
The frenzied competition for the few thousand elite AI scientists has created a culture of constant job-hopping for higher pay, akin to a sports transfer season. This instability is slowing down major scientific progress, as significant breakthroughs require dedicated teams working together for extended periods, a rarity in the current environment.
The "Valinor" metaphor for top AI talent has evolved. It once meant leaving big labs for lucrative startups. Now, as talent returns to incumbents like OpenAI with massive pay packages, "Valinor" represents the safety and resources of the established players.