AI's Research Frontier: Memory, World Models, & Planning — With Joelle Pineau

Big Technology Podcast · Feb 4, 2026

Cohere's Joelle Pineau discusses AI's research frontier: enhancing memory, building world models, and enabling hierarchical reasoning.

Training on Code Teaches AI Models Hierarchical Reasoning, Not Just Programming

The structured, hierarchical nature of code (functions, libraries) provides a powerful training signal for AI models. This helps them infer structural cues applicable to broader reasoning and planning tasks, far beyond just code generation.
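
A toy Python sketch of that structure (the example is invented for this summary, not taken from the episode): the top-level function states the goal and the detail lives in named sub-functions it calls, which is the kind of hierarchical signal the takeaway argues carries over from code to planning.

```python
# Invented example: the hierarchical decomposition that code makes explicit.
def plan_trip(city: str, nights: int) -> dict:
    """High-level goal expressed entirely as calls to lower-level steps."""
    return {
        "flights": book_flights(city),
        "hotel": book_hotel(city, nights),
        "budget": estimate_budget(nights),
    }

def book_flights(city: str) -> str:
    return f"round trip to {city}"

def book_hotel(city: str, nights: int) -> str:
    return f"{nights} nights in {city}"

def estimate_budget(nights: int) -> int:
    # Flat per-night figure; the point is the call structure, not the numbers.
    return 300 + 150 * nights

print(plan_trip("Lisbon", 3))
```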

The Future of AI is a Multi-Agent Ecosystem, Not a Single Superintelligent AGI

A more likely AI future involves an ecosystem of specialized agents, each mastering a specific domain (e.g., physical vs. digital worlds), rather than a single, monolithic AGI that understands everything. These agents will require protocols to interact.
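
A minimal sketch of what such interaction protocols could look like (the agent names, Message schema, and routing rule are assumptions made for illustration, not anything described in the episode): specialized agents that share only a typed message format and an addressing convention.

```python
# Invented sketch: specialized agents that interact only through a shared
# Message schema and a routing rule, rather than one model that does it all.
from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    task: str
    payload: dict

class DigitalAgent:
    """Masters digital-world tasks, e.g. looking things up in documents."""
    name = "digital"
    def handle(self, msg: Message) -> Message:
        return Message(self.name, msg.task, {"answer": f"looked up {msg.payload['query']}"})

class RoboticsAgent:
    """Masters physical-world tasks, e.g. moving an object in a warehouse."""
    name = "robotics"
    def handle(self, msg: Message) -> Message:
        return Message(self.name, msg.task, {"status": f"moved {msg.payload['object']}"})

# The 'protocol' here is nothing more than the Message type plus task routing.
REGISTRY = {"lookup": DigitalAgent(), "manipulate": RoboticsAgent()}

def route(msg: Message) -> Message:
    return REGISTRY[msg.task].handle(msg)

print(route(Message("user", "lookup", {"query": "shipment ETA"})))
print(route(Message("user", "manipulate", {"object": "pallet 7"})))
```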

AI Reasoning Fails at Hierarchical Planning, Unlike Human Problem-Solving

AI models struggle to plan at different levels of abstraction simultaneously. They can't easily move from a high-level goal to a detailed task and then back up to adjust the high-level plan if the detail is blocked, a key aspect of human reasoning.
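
A small sketch of the two-level loop the episode describes humans doing naturally (the goal, plans, and blocked step are all invented for illustration): try a high-level plan, execute its detailed steps, and when a detail is blocked, back up and revise the high-level plan rather than retrying the detail.

```python
# Invented example of the two-level loop: choose a high-level plan, execute
# detailed steps, and if a detail is blocked, go back up and pick a new plan.
HIGH_LEVEL_PLANS = {
    "ship feature": [
        ["write code", "test on staging cluster", "deploy"],
        ["write code", "test locally", "deploy"],    # fallback high-level plan
    ],
}
BLOCKED_STEPS = {"test on staging cluster"}          # detail that turns out to fail

def execute(step: str) -> bool:
    return step not in BLOCKED_STEPS

def solve(goal: str) -> list[str]:
    for plan in HIGH_LEVEL_PLANS[goal]:              # reason at the high level
        completed = []
        for step in plan:                            # reason at the detail level
            if not execute(step):
                break                                # blocked: revise the plan above
            completed.append(step)
        else:
            return completed                         # every detail succeeded
    raise RuntimeError("no high-level plan worked")

print(solve("ship feature"))                         # falls back to the second plan
```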

AI's 'Capability Overhang' is Caused by Customer Efficiency Demands and Integration Friction

AI models are more powerful than their current applications suggest. This 'capability overhang' exists because enterprises often deploy smaller, more efficient models that are 'good enough' and struggle with the impedance mismatch of integrating AI into legacy processes and data silos.

Successful Enterprise AI Augments Humans, It Doesn't Replace Them

The most powerful current use case for enterprise AI involves the system acting as an intelligent assistant. It synthesizes complex information and suggests actions, but a human remains in the loop to validate the final plan and carry out the action, combining AI speed with human judgment.
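
One minimal way to picture that augmentation pattern (the function names and the drafting step are placeholders, not any particular product): the model proposes an action, and nothing executes until a person approves it.

```python
# Placeholder names throughout: the model drafts, the human approves, and
# only then does anything execute.
def synthesize_suggestion(ticket: str) -> str:
    # Stand-in for a model call that reads the context and proposes one action.
    return f"Issue a refund for the order described in: {ticket!r}"

def execute_action(action: str) -> None:
    print(f"EXECUTED: {action}")

def handle(ticket: str) -> None:
    suggestion = synthesize_suggestion(ticket)
    answer = input(f"Proposed action: {suggestion}\nApprove? [y/N] ")
    if answer.strip().lower() == "y":
        execute_action(suggestion)          # human judgment gates the final step
    else:
        print("Skipped; escalated to a human agent instead.")

if __name__ == "__main__":
    handle("Customer reports the package arrived damaged")
```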

Junior Employees with AI Skills Can Outperform Mid-Career Professionals

Disruptive AI tools empower junior employees to skip ahead and become fully functioning analysts who can 10x their output. This places mid-career professionals who are slower to adopt the new technology at a significant disadvantage, mirroring past tech shifts.

True AI Sovereignty for Enterprises is Model Optionality, Not Just In-House Development

For many companies, 'AI sovereignty' is less about building their own models and more about strategic resilience. It means having multiple model providers to benchmark, avoid vendor lock-in, and ensure continuous access if one service is cut off or becomes too expensive.
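
A rough sketch of what that optionality can look like in practice (the provider names, prices, and completion stubs are invented, not real SDK calls): a thin routing layer that lets the same workload be benchmarked across vendors and rerouted if one becomes unavailable or too expensive.

```python
# Provider names, prices, and responses are invented; each provider is just
# a function prompt -> (answer, cost), standing in for a real API client.
from typing import Callable

PROVIDERS: dict[str, Callable[[str], tuple[str, float]]] = {
    "vendor_a": lambda p: (f"[A] {p}", 0.50),   # strongest, most expensive
    "vendor_b": lambda p: (f"[B] {p}", 0.20),
    "in_house": lambda p: (f"[local] {p}", 0.05),
}

def complete(prompt: str, preferred: list[str], max_cost: float) -> str:
    """Try providers in preference order, skipping any that are missing or too costly."""
    for name in preferred:
        provider = PROVIDERS.get(name)
        if provider is None:
            continue                             # vendor dropped or unreachable
        answer, cost = provider(prompt)
        if cost <= max_cost:
            return answer
    raise RuntimeError("no provider met the cost constraint")

# vendor_a exceeds the budget, so the request is rerouted to vendor_b.
print(complete("Summarize Q3 churn drivers", ["vendor_a", "vendor_b", "in_house"], max_cost=0.25))
```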

AI Research Progress on 'Continual Learning' Is Hindered by Its Poorly Defined Problem

Cohere's Chief AI Officer, Joelle Pineau, finds the concept of continual learning problematic because the research community lacks a universally agreed-upon definition of the problem. Without one, progress is difficult to measure, unlike in more standardized research areas such as AI memory.

AI Labs Remain Competitive Because Talent Mobility Makes Ideas Impossible to Contain

The constant movement of researchers between top AI labs prevents any single company from maintaining a decisive, long-term advantage. Key insights are carried by people, ensuring new ideas spread quickly throughout the ecosystem, even without open-sourcing code.
