The structured, hierarchical nature of code (functions, libraries) provides a powerful training signal for AI models. This helps them infer structural cues applicable to broader reasoning and planning tasks, far beyond just code generation.
A more likely AI future involves an ecosystem of specialized agents, each mastering a specific domain (e.g., physical vs. digital worlds), rather than a single, monolithic AGI that understands everything. These agents will require protocols to interact.
AI models struggle to plan across levels of abstraction simultaneously. They cannot easily decompose a high-level goal into detailed tasks and then move back up to revise the high-level plan when a detail turns out to be blocked, a key aspect of human reasoning.
AI models are more powerful than their current applications suggest. This 'capability overhang' exists because enterprises often deploy smaller, more efficient models that are 'good enough' and struggle with the impedance mismatch of integrating AI into legacy processes and data silos.
The most powerful current use case for enterprise AI involves the system acting as an intelligent assistant. It synthesizes complex information and suggests actions, but a human remains in the loop to validate the final plan and carry out the action, combining AI speed with human judgment.
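A minimal sketch of this human-in-the-loop pattern, using entirely hypothetical names (`synthesize`, `Suggestion`, `run_with_human_in_the_loop`); in a real deployment the synthesis step would be a model call and the approval step a person reviewing the proposal:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Suggestion:
    summary: str          # the AI's synthesis of the inputs
    proposed_action: str  # what the AI suggests doing next

def synthesize(documents: list[str]) -> Suggestion:
    # Stand-in for a model call that condenses inputs into a proposal.
    joined = " ".join(documents)
    return Suggestion(summary=joined[:80], proposed_action="file_report")

def run_with_human_in_the_loop(
    documents: list[str],
    approve: Callable[[Suggestion], bool],
) -> Optional[str]:
    """The AI drafts and suggests; a human validates before anything executes."""
    suggestion = synthesize(documents)
    if approve(suggestion):   # human judgment gates the action
        return suggestion.proposed_action
    return None               # rejected: no action is carried out

# Usage: an auto-approving reviewer stands in for a real human here.
action = run_with_human_in_the_loop(["Q3 revenue dipped 4%"], approve=lambda s: True)
```

The design point is that the AI never acts directly: its output is a proposal object, and the only path to execution runs through the human approval callback.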
Disruptive AI tools empower junior employees to skip ahead, becoming fully functioning analysts who can 10x their output. This places mid-career professionals who are slower to adopt the new technology at a significant disadvantage, mirroring past tech shifts.
For many companies, 'AI sovereignty' is less about building their own models and more about strategic resilience. It means having multiple model providers to benchmark, avoid vendor lock-in, and ensure continuous access if one service is cut off or becomes too expensive.
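The fallback half of this resilience strategy can be sketched as a simple provider chain. The provider functions below are hypothetical placeholders; a real setup would wrap each vendor's actual SDK client behind the same interface:

```python
class ProviderError(Exception):
    """Raised when a model provider is unavailable, cut off, or over budget."""

def primary_provider(prompt: str) -> str:
    # Simulates a provider whose service has been cut off.
    raise ProviderError("service unavailable")

def backup_provider(prompt: str) -> str:
    # Simulates a second vendor kept warm for exactly this situation.
    return f"backup answer: {prompt}"

def complete(prompt: str, providers: list) -> str:
    """Try each provider in order, falling back when one fails.

    Keeping several interchangeable providers behind one call site is
    what lets a company benchmark vendors and avoid lock-in.
    """
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except ProviderError as exc:
            errors.append(exc)  # record the failure, move to the next vendor
    raise RuntimeError(f"all providers failed: {errors}")

result = complete("summarize the Q3 report", [primary_provider, backup_provider])
```

Because every provider sits behind the same `complete` interface, swapping the order of the list is also how benchmarking and cost-based routing would plug in.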
Cohere's Chief AI Officer, Joelle Pineau, finds the concept of continual learning problematic because the research community lacks a universally agreed-upon problem definition, which makes progress hard to measure; more standardized research areas, such as AI memory, do not face this obstacle.
The constant movement of researchers between top AI labs prevents any single company from maintaining a decisive, long-term advantage. Key insights are carried by people, ensuring new ideas spread quickly throughout the ecosystem, even without open-sourcing code.
