The adoption of powerful AI agents will fundamentally shift knowledge work. Instead of executing tasks, humans will be responsible for directing agents, providing crucial context, managing escalations, and coordinating between different AI systems. The primary job will evolve from 'doing' to 'managing and guiding'.
After the failure of ambitious devices like the Humane AI Pin, a new generation of AI wearables is finding a foothold by focusing on a single, practical use case: AI-powered audio recording and transcription. By narrowing to a proven need, these devices stand a far better chance of survival and adoption.
A Chinese hospital's AI program is achieving early success not just by detecting cancer, but by opportunistically screening asymptomatic patients' routine CT scans taken for unrelated issues. This unlocks a safe, scalable path to widespread early detection of deadly cancers such as pancreatic cancer, for which population-level screening was previously unfeasible.
The effectiveness of enterprise AI agents is limited not by data access, but by the absence of context for *why* decisions were made. 'Context graphs' aim to solve this by capturing 'decision traces': the exceptions, precedents, and overrides that currently live in Slack threads and employees' heads, creating a true source of truth for automation.
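To make the idea concrete, here is a minimal sketch of what a decision-trace record and a context graph keyed by business entity might look like. The `DecisionTrace` fields, the `record` helper, and the dict-based graph are illustrative assumptions, not an established schema from any of the products discussed.

```python
# Minimal sketch of a 'decision trace' record (field names are assumptions).
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DecisionTrace:
    decision: str                          # what was decided, e.g. "approved refund"
    actor: str                             # who made the call
    rationale: str                         # the *why* that normally lives in Slack
    overrides_policy: str | None = None    # formal rule bypassed, if any
    precedents: list[str] = field(default_factory=list)  # earlier trace IDs cited
    timestamp: datetime = field(default_factory=datetime.now)

# A 'context graph' could then link traces to the entities they touch,
# letting an agent ask "why was this exception granted last time?"
context_graph: dict[str, list[DecisionTrace]] = {}

def record(entity_id: str, trace: DecisionTrace) -> None:
    """Attach a trace to the entity (customer, invoice, ticket) it concerns."""
    context_graph.setdefault(entity_id, []).append(trace)
```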
Rather than programming AI agents with a company's formal policies, a more powerful approach is to let them observe thousands of actual 'decision traces.' This allows the AI to discover the organization's emergent, de facto rules—how work *actually* gets done—creating a more accurate and effective world model for automation.
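As a toy illustration of that observation-first approach, the sketch below counts recurring (condition, action) pairs across observed traces and surfaces the ones frequent enough to treat as de facto rules. The trace format, field names, and `min_support` threshold are assumptions made for the example, not a described implementation.

```python
# Toy illustration: discover emergent rules from observed decision traces.
from collections import Counter

def infer_de_facto_rules(traces: list[dict], min_support: int = 20) -> list[str]:
    """Surface (condition -> action) pairs that recur often enough to be
    treated as the organization's unwritten, de facto rules."""
    pattern_counts = Counter((t["condition"], t["action"]) for t in traces)
    return [
        f"When {cond}, staff usually {action} ({n} observed cases)"
        for (cond, action), n in pattern_counts.most_common()
        if n >= min_support
    ]

# Example: if 'VIP customer past return window' -> 'approve return anyway'
# appears 50 times, the agent learns the real rule, not the written one.
```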
Turing Award winner Yann LeCun's departure from Meta and public criticism of its 'LLM-pilled' strategy is more than corporate drama. It represents a vital, oppositional viewpoint arguing for 'world models' over scaling LLMs. This intellectual friction is crucial for preventing stagnation and advancing the entire field of AI.
