Karpathy identifies a key missing piece for continual learning in AI: an equivalent to sleep. Humans seem to use sleep to distill the day's experiences (their "context window") into the compressed weights of the brain. LLMs lack this distillation phase, forcing them to restart from a fixed state in every new session.

Related Insights

People struggle with AI prompts because the model lacks background on their goals and progress. The solution is "Context Engineering": creating an environment where the AI continuously accumulates user-specific information, materials, and intent, reducing the need for constant prompt tweaking.
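A minimal sketch of what that accumulation can look like in practice, assuming a simple JSON file as the store; the filename, fields, and the `remember`/`build_prompt` helpers are illustrative, not any particular product's API:

```python
import json
from pathlib import Path

PROFILE = Path("user_context.json")  # illustrative store of accumulated user context

def load_context() -> dict:
    if PROFILE.exists():
        return json.loads(PROFILE.read_text())
    return {"goals": [], "progress": [], "materials": []}

def remember(kind: str, item: str) -> None:
    """Accumulate a goal, progress note, or reference material across sessions."""
    ctx = load_context()
    ctx[kind].append(item)
    PROFILE.write_text(json.dumps(ctx, indent=2))

def build_prompt(question: str) -> str:
    """Prepend the accumulated background so the model starts with the user's context."""
    ctx = load_context()
    background = "\n".join(f"- {kind}: {item}" for kind, items in ctx.items() for item in items)
    return f"Background on this user:\n{background}\n\nTask: {question}"

remember("goals", "Ship the Q3 onboarding redesign")
remember("progress", "Wireframes approved; copy still in draft")
print(build_prompt("Draft the welcome email."))
```

Each new prompt then arrives with the user's goals and progress already attached, instead of being re-explained by hand.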

Even with vast training data, current AI models are far less sample-efficient than humans. This limits their ability to adapt and learn new skills on the fly. They resemble a perpetual new hire who can access information but lacks the deep, instinctual learning that comes from experience and weight updates.

The current limitation of LLMs is their stateless nature; they reset with each new chat. The next major advancement will be models that can learn from interactions and accumulate skills over time, evolving from a static tool into a continuously improving digital colleague.

Google's Titans architecture for LLMs mimics human memory by applying Claude Shannon's information theory. It scans vast data streams and identifies "surprise"—statistically unexpected or rare information relative to its training data. This novel data is then prioritized for long-term memory, preventing clutter from irrelevant information.
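Titans' actual memory update is more involved, but the Shannon-style idea the passage describes can be sketched directly: score each incoming item by its self-information (bits of surprise) and keep only the high-surprise items. The probabilities and threshold below are made up for illustration:

```python
import math

def surprisal(p: float) -> float:
    """Shannon self-information: rarer events (lower p) carry more bits."""
    return -math.log2(p)

# Hypothetical stream of observations with their estimated probability
# under the model's prior expectations (values invented for illustration).
stream = [
    ("routine status update", 0.60),
    ("familiar error message", 0.30),
    ("customer cancels a 5-year contract", 0.02),
    ("new regulation announced", 0.01),
]

THRESHOLD_BITS = 4.0  # keep only items carrying at least ~4 bits of surprise

long_term_memory = [
    (event, surprisal(p)) for event, p in stream if surprisal(p) >= THRESHOLD_BITS
]
for event, bits in long_term_memory:
    print(f"store: {event!r} ({bits:.1f} bits)")
```

Routine, high-probability events score under two bits and are dropped; the rare ones clear the threshold and are promoted to long-term memory.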

Long-running AI agent conversations degrade in quality as the context window fills. The best engineers combat this with "intentional compaction": they direct the agent to summarize its progress into a clean markdown file, then start a fresh session using that summary as the new, clean input. This is like rebooting the agent's short-term memory.
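A rough sketch of that compaction loop, assuming `complete` stands in for whatever text-completion call you already use and `progress.md` is an arbitrary filename:

```python
from pathlib import Path
from typing import Callable

def compact_and_restart(history: list[str], complete: Callable[[str], str]) -> list[str]:
    """Intentional compaction: distill a long agent transcript into a clean
    summary file, then seed a fresh session from that summary alone."""
    summary_prompt = (
        "Summarize the progress, open questions, and next steps from this "
        "session as concise markdown:\n\n" + "\n".join(history)
    )
    summary = complete(summary_prompt)        # one LLM call to compress the transcript
    Path("progress.md").write_text(summary)   # durable artifact the team can review
    return [f"Context from the previous session:\n{summary}"]  # new, clean history

# Usage with any text-completion function, e.g. a thin wrapper around your provider's API:
# fresh_history = compact_and_restart(old_history, complete=my_llm_call)
```

The old transcript is discarded; the next session starts from a short, deliberate summary rather than a bloated context window.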

Long conversations degrade LLM performance as attention gets clogged with irrelevant details. An expert workflow is to stop, ask the model to summarize the key points of the discussion, and then start a fresh chat with that summary as the initial prompt. This keeps the context clean and the model on track.

The key to continual learning is not just a longer context window, but a new architecture with a spectrum of memory types. "Nested learning" proposes a model with different layers that update at different frequencies—from transient working memory to persistent core knowledge—mimicking how humans learn without catastrophic forgetting.
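A toy illustration of the frequency idea (not the actual nested-learning algorithm): three parameter groups see the same gradients but update on very different schedules, so the slow "core" layer barely moves while the fast layer tracks the present.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "nested" parameter groups with different update frequencies:
# fast working memory, a slower episodic layer, and nearly-frozen core knowledge.
layers = {
    "working":  {"params": rng.normal(size=8), "every": 1},
    "episodic": {"params": rng.normal(size=8), "every": 10},
    "core":     {"params": rng.normal(size=8), "every": 100},
}

def step(t: int, gradient: np.ndarray, lr: float = 0.01) -> None:
    """Apply the same gradient, but each layer only updates on its own schedule,
    so slow layers retain old knowledge while fast layers adapt to new data."""
    for layer in layers.values():
        if t % layer["every"] == 0:
            layer["params"] -= lr * gradient

for t in range(1, 201):
    step(t, gradient=rng.normal(size=8))
```

Over 200 steps the working-memory layer updates 200 times, the episodic layer 20 times, and the core layer only twice, which is the spectrum-of-memory behavior the passage describes.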

A significant hurdle for AI, especially in replacing tasks like RPA, is that models are trained and then "frozen." They don't continuously learn from new interactions post-deployment. This makes them less adaptable than a true learning system.

To make agents useful over long periods, Tasklet engineers an "illusion" of infinite memory. Instead of feeding a long chat history, they use advanced context engineering: LLM-based compaction, scoping context for sub-agents, and having the LLM manage its own state in a SQL database to recall relevant information efficiently.
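Tasklet's implementation is not public at this level of detail, so the sketch below only illustrates the last idea: an agent keeping its own state in SQL and pulling back just the rows a sub-task needs. The table schema and example rows are invented.

```python
import sqlite3

# Sketch of an agent-managed memory table: instead of replaying the full chat
# history, the agent writes facts as it works and queries only what it needs.
db = sqlite3.connect("agent_memory.db")
db.execute("CREATE TABLE IF NOT EXISTS memory (topic TEXT, fact TEXT)")

def remember(topic: str, fact: str) -> None:
    db.execute("INSERT INTO memory VALUES (?, ?)", (topic, fact))
    db.commit()

def recall(topic: str) -> list[str]:
    """Pull only the rows relevant to the current sub-task into the context window."""
    rows = db.execute("SELECT fact FROM memory WHERE topic = ?", (topic,)).fetchall()
    return [fact for (fact,) in rows]

remember("invoices", "Vendor Acme is net-30; last invoice paid 2024-06-01")
remember("travel", "User prefers aisle seats")
print(recall("invoices"))  # scoped context handed to an invoicing sub-agent
```

The chat history stays short because the durable facts live in the database, and each sub-agent receives only the slice of memory relevant to its job.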

A key gap between AI and human intelligence is the lack of experiential learning. Unlike a human who improves on a job over time, an LLM is stateless. It doesn't truly learn from interactions; it's the same static model for every user, which is a major barrier to AGI.
