The speaker forecasts that 2026 will be the year public sentiment turns against artificial intelligence. This shift will move beyond policy debates to create social friction, where working in AI could attract negative personal judgment.
As AI assistants learn an individual's preferences, style, and context, their utility becomes deeply personalized. This creates a powerful lock-in effect, making users reluctant to switch to competing platforms, even if those platforms are technically superior.
Shane Legg, a pioneer in the field, maintains his original 2009 prediction that there is a 50/50 probability of achieving "minimal AGI" by 2028. He defines this as an AI agent capable of performing the cognitive tasks of a typical human.
The next phase of AI will involve autonomous agents communicating and transacting with each other online. This requires a strategic shift in marketing, sales, and e-commerce away from purely human-centric interaction models toward agent-to-agent commerce.
Demis Hassabis, CEO of Google DeepMind, warns that the societal transition to AGI will be immensely disruptive, happening at a scale and speed ten times greater than the Industrial Revolution. This suggests that historical parallels are inadequate for planning and preparation.
Startups and major labs are focusing on "world models," which simulate physical reality and its cause-and-effect dynamics. This is seen as the necessary step beyond text-based LLMs to create agents that can genuinely understand and interact with the physical world, and as a prerequisite for AGI.
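To make the concept concrete, here is a minimal sketch of the core idea behind a world model: a learned transition function that predicts the next state of an environment from the current state and an action, and can then "imagine" trajectories by rolling itself forward. The class name, linear dynamics, and toy data are illustrative assumptions, not any lab's actual architecture.

```python
import numpy as np

class ToyWorldModel:
    """Toy illustration of a world model: predict next state from (state, action).

    Real world models are large neural networks trained on video or simulation;
    a linear model fit with least squares stands in for the idea here.
    """

    def __init__(self, state_dim: int, action_dim: int):
        self.W = np.zeros((state_dim, state_dim + action_dim))

    def fit(self, states, actions, next_states):
        # Least-squares fit of next_state ~= W @ [state; action]
        X = np.hstack([states, actions])
        self.W = np.linalg.lstsq(X, next_states, rcond=None)[0].T

    def predict(self, state, action):
        return self.W @ np.concatenate([state, action])

    def rollout(self, state, actions):
        # Apply the model repeatedly to plan "in imagination" without touching the real world.
        trajectory = [state]
        for action in actions:
            state = self.predict(state, action)
            trajectory.append(state)
        return trajectory


# Example: recover simple 2-D dynamics where actions nudge the position.
rng = np.random.default_rng(0)
states = rng.normal(size=(500, 2))
actions = rng.normal(size=(500, 2))
next_states = states + 0.1 * actions          # ground-truth dynamics to learn
model = ToyWorldModel(state_dim=2, action_dim=2)
model.fit(states, actions, next_states)
print(model.rollout(np.zeros(2), [np.ones(2)] * 3))
```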
Standardized benchmarks for AI models are largely irrelevant for business applications. Companies need to create their own evaluation systems tailored to their specific industry, workflows, and use cases to accurately assess which new model provides a tangible benefit and ROI.
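A minimal sketch of what such a bespoke evaluation might look like: a set of workflow-specific cases with pass/fail checks, run against any candidate model behind a simple callable. The `call_model` stub and the invoice-processing cases are hypothetical placeholders for whatever API and tasks a given business actually uses.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EvalCase:
    name: str
    prompt: str
    check: Callable[[str], bool]   # domain-specific pass/fail criterion

def run_eval(call_model: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Score a candidate model on your own workflow cases, not public benchmarks."""
    passed = 0
    for case in cases:
        output = call_model(case.prompt)
        ok = case.check(output)
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}  {case.name}")
    return passed / len(cases)

# Hypothetical cases for an invoice-processing workflow.
cases = [
    EvalCase("extracts total",
             "Extract the total from: 'Invoice #42, total due $1,250.00'",
             check=lambda out: "1,250.00" in out or "1250.00" in out),
    EvalCase("flags missing data",
             "Extract the total from: 'Invoice #43'",
             check=lambda out: "not" in out.lower() or "missing" in out.lower()),
]

# Any model can be plugged in behind this callable; a stub keeps the sketch runnable.
def call_model(prompt: str) -> str:
    return "The total due is $1,250.00" if "1,250.00" in prompt else "The total is not present."

print(f"score: {run_eval(call_model, cases):.0%}")
```

Swapping in a new frontier model then becomes a one-line change to `call_model`, and the score reflects your workflows rather than a public leaderboard.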
As AI agents become reliable for complex, multi-step tasks, the critical human role will shift from execution to verification. New jobs will emerge focused on overseeing agent processes, analyzing their chain-of-thought, and validating their outputs for accuracy and quality.
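A brief sketch of the verification pattern this implies: an agent's multi-step work is not applied directly but routed through a review gate where a person inspects the exposed reasoning trace and approves or rejects each step. The step structure and names are illustrative assumptions, not a specific product's workflow.

```python
from dataclasses import dataclass

@dataclass
class AgentStep:
    description: str   # what the agent claims it did
    reasoning: str     # the chain-of-thought / trace exposed for review
    output: str        # the artifact to be applied only if approved

def human_verify(step: AgentStep) -> bool:
    """Placeholder for the emerging 'verifier' role: inspect the trace, approve or reject."""
    print(f"STEP: {step.description}\nTRACE: {step.reasoning}\nOUTPUT: {step.output}")
    return input("approve? [y/N] ").strip().lower() == "y"

def run_with_verification(steps: list[AgentStep]) -> list[str]:
    applied = []
    for step in steps:
        if human_verify(step):
            applied.append(step.output)   # only verified outputs take effect
        else:
            break                         # halt the pipeline on a rejected step
    return applied
```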
Current AI models exhibit "jagged intelligence," performing at a PhD level on some tasks but failing at simple ones. Google DeepMind's CEO identifies this inconsistency and lack of reliability as a primary barrier to achieving true, general-purpose AGI.
Models like Gemini 3 Flash show a key trend: making frontier intelligence faster, cheaper, and more efficient. The trajectory is for today's state-of-the-art models to become 10x cheaper within a year, enabling widespread, low-latency, and on-device deployment.
Apple's seemingly slow AI progress is likely a strategic bet that today's powerful cloud-based models will become efficient enough to run locally on devices within 12 months. This would allow them to offer powerful AI with superior privacy, potentially leapfrogging competitors.
White House AI czar David Sacks cited a Brookings report to claim that fears of AI-driven job loss are exaggerated. The report's own author publicly clarified that while the short-term impact is low, the long-term disruption is underestimated, revealing a political motivation to downplay coming job losses.
