The US government labeled Anthropic a supply chain risk, threatening revenue. While Anthropic will likely win the legal case due to government overreach, the ambiguity and fear created by the designation can be weaponized by competitors and deter B2B customers, causing significant business damage regardless of the legal outcome.
The immense cost of AI compute is being offset by a strategic shift: eliminating junior-level positions across tech, sales, and support. This "death of the junior" trend frees up budget for data centers but risks creating a severe talent gap in the coming years as the pipeline of experienced mid-level professionals dries up.
The market's tolerance for mature SaaS companies managing a slow, predictable decline in growth has ended. Now, credibility and valuation premiums are only awarded to companies that demonstrate re-acceleration. This puts immense pressure on incumbents, where even a successful new AI product might not be enough to outrun a declining core business.
The success of new AI startups is driven by a desire among managers to replace human-led processes with autonomous agents. Customers don't want AI to make their teams slightly better; they want an agent that eliminates the need for the team entirely. This is a demand most incumbent software companies misunderstand and fail to serve.
History shows that social stability is threatened not by the long-suffering poor, but by a disgruntled, overeducated middle class. AI's displacement of junior roles in tech and law creates a cohort of indebted graduates who played by the rules but now face unemployment. This group is far more likely to cause political and social unrest.
Even a design leader like Figma is struggling with AI, releasing a subpar product. This highlights a critical failure point for incumbents: their traditional, planned-out quarterly release cycles are no match for the rapid, continuous deployment model of AI-native startups. A "best effort" approach to shipping AI is now a recipe for failure.
Many high-growth AI B2B companies face a hidden bottleneck: a shortage of Forward Deployed Engineers (FDEs) who can get customer deployments up and running. Despite huge demand, growth is limited by the number of these skilled professionals. This forces such companies to operate like services businesses, where hiring and training FDEs is the primary constraint.
AI's potential for rapid growth is creating a new moral calculus. Practices like tracking every employee keystroke for CRM automation, once controversial, are becoming standard. This trend suggests that as companies chase exponential gains, they will increasingly justify and normalize actions, from mass layoffs to invasive monitoring, that were previously considered unacceptable.
Discussions on AI's future often miss the point by arguing on different planes. Technologists describe an infinite number of problems AI *can* solve. Economists, however, question whether these solutions are worth the cost, pointing out that the current capex spend equates to thousands of dollars per US worker—a questionable ROI for many roles.
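The "thousands of dollars per US worker" figure is easy to sanity-check with a back-of-envelope calculation. The numbers below are illustrative assumptions, not sourced data:

```python
# Back-of-envelope: AI capex spread across the US workforce.
# Both inputs are rough, assumed figures for illustration only.
annual_ai_capex = 400e9   # assumed ~$400B/yr in industry AI capex
us_workforce = 165e6      # assumed ~165M people in the US labor force

capex_per_worker = annual_ai_capex / us_workforce
print(f"${capex_per_worker:,.0f} per worker per year")  # ≈ $2,424
```

Even with generous assumptions, the spend lands in the low thousands per worker per year, which is the economists' point: for many roles, that outlay has to displace or augment a meaningful share of a salary to pencil out.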
Anthropic's new code review feature, priced at $20, sparked backlash for being "too expensive," despite automating work that would take a human developer hours. This reaction demonstrates a fundamental misunderstanding of AI economics. Users have been conditioned by subsidized products to expect powerful, computationally intensive features for free, a model that is unsustainable.
The current AI data center arms race isn't about meeting today's demand for chatbots. It's fueled by companies like Meta betting on a future where personal AI agents run constantly, analyzing every interaction. This vision of persistent, parallel agents requires an exponential increase in compute, explaining why they will buy any available capacity.
