After being hacked in 2012, Google reinvented its internal security to operate under the assumption that some employees are compromised. That decade-old infrastructure is now a significant strategic advantage for Google DeepMind, because it is well suited to managing powerful AI agents, which pose a similar "insider threat" risk.
Anthropic CFO Krishna Rao's role extends far beyond traditional finance, focusing on securing the company's lifeblood: compute. He personally spearheads massive deals with Google, Broadcom, and Microsoft for TPUs and servers. This redefines the CFO role at an AI leader, where strategic compute acquisition is as crucial as financial planning or fundraising.
In contrast to OpenAI's larger deals, Anthropic's M&A strategy is to write smaller checks (under $500 million) for companies with exceptional talent or promising technology. The acquisition of biotech startup Coefficient Bio exemplifies this approach: using targeted M&A to acquire specialized teams that can help it expand into new verticals like drug discovery.
Google's new AI coding "Strike Team," with personal involvement from Sergey Brin, is focused on improving its models for internal Google engineers first. The goal is to create a feedback loop where AI helps build better AI, a concept Brin calls "AI takeoff," treating any friction in this process as a top-priority blocker for achieving AGI.
Despite operating in the same hot prediction market space, Polymarket is raising at a $15B valuation, well below competitor Kalshi's $22B. The key difference is revenue generation: Kalshi has always charged fees and hit $1.5B in annualized revenue, whereas Polymarket only recently began monetizing, demonstrating the steep valuation cost of a delayed revenue model.
The primary driver for major AI labs building out "AI control" teams isn't long-term existential risk, but the immediate commercial threat of AI agents causing accidental harm. Companies are worried about agents deleting production databases or leaking sensitive IP, making AI control a necessary security measure for deploying these powerful but unpredictable products.
The fundamental behavioral differences between models—like OpenAI's talkative GPT versus Anthropic's action-oriented Claude—force entirely different safety approaches. OpenAI's control systems can analyze a model's stated reasoning before it acts, while Anthropic must focus on detecting bad actions after they occur, showing how a model's traits shape the security infrastructure built around it.
