AI doesn't store data like a traditional database; it learns patterns and relationships, effectively compressing vast amounts of repetitive information. This is why a model trained on the entire internet can fit on a USB stick—it captures the essence and variations of concepts, not every single instance.
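As a rough sense of scale, here is a back-of-the-envelope sketch; every number below is an assumption chosen for illustration, not a published figure for any real model:

```python
# Back-of-the-envelope arithmetic with assumed round numbers (not real
# published figures): raw training text vs. the weights learned from it.

corpus_tokens = 15e12        # assume ~15 trillion tokens of training text
bytes_per_token = 4          # assume ~4 bytes of raw text per token
corpus_bytes = corpus_tokens * bytes_per_token      # ~60 TB

params = 8e9                 # assume an 8B-parameter model
bytes_per_param = 2          # fp16: 2 bytes per weight
model_bytes = params * bytes_per_param              # ~16 GB

print(f"corpus : {corpus_bytes / 1e12:.0f} TB of raw text")
print(f"weights: {model_bytes / 1e9:.0f} GB (fits on a USB stick)")
print(f"ratio  : ~{corpus_bytes / model_bytes:,.0f}x smaller")
```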

Related Insights

The sudden arrival of powerful AI like GPT-3 was a non-repeatable event: training on the entire internet and all existing books. With this data now fully "eaten," future advancements will feel more incremental, relying on the slower process of generating new, high-quality expert data.

The significance of a massive context window isn't just about processing more data. It enables AI to identify and synthesize relationships across thousands of pages of disparate information, revealing insights and maintaining consistency in a way that's impossible with a piecemeal approach.
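Some quick, hedged arithmetic makes the point concrete; the page density, tokens-per-word ratio, and window size below are all illustrative assumptions:

```python
# Rough arithmetic for holding thousands of pages in one context window.
# All figures here are assumptions, not properties of any specific model.

words_per_page = 500          # assume a dense page of English text
tokens_per_word = 1.3         # common tokenizer rule of thumb
pages = 1500

tokens_needed = int(pages * words_per_page * tokens_per_word)
context_window = 1_000_000    # assume a 1M-token context window

print(f"{pages} pages ≈ {tokens_needed:,} tokens")
print("fits in a single pass" if tokens_needed <= context_window
      else "must be processed piecemeal in chunks")
```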

When designing smaller models, it's inefficient to use limited parameters for memorizing facts that can be looked up. Jeff Dean advocates for focusing a model's capacity on core reasoning abilities and pairing it with a retrieval system. This makes the model more generally useful, as it can access a vast external knowledge base when needed.
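A minimal sketch of that retrieve-then-read pattern: a toy word-overlap scorer stands in for a real vector index, and `generate` is a hypothetical placeholder for the small reasoning model. The point is that the facts live outside the model's parameters.

```python
# Toy retrieval-augmented setup: facts are stored externally and looked
# up at query time, so the model's parameters can be spent on reasoning.

knowledge_base = [
    "The Eiffel Tower is 330 metres tall.",
    "Water boils at 100 degrees Celsius at sea level.",
    "The transformer architecture was introduced in 2017.",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank documents by words shared with the query (toy scoring)."""
    q = set(query.lower().split())
    scored = sorted(knowledge_base,
                    key=lambda doc: len(q & set(doc.lower().split())),
                    reverse=True)
    return scored[:k]

def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a small reasoning-focused LLM."""
    return f"(model output conditioned on)\n{prompt}"

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    return generate(f"Context: {context}\nQuestion: {query}")

print(answer("When was the transformer architecture introduced?"))
```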

The common metaphor of AI as an artificial being is wrong. It's better understood as a 'cultural technology,' like print or libraries. Its function is to aggregate, summarize, and transmit existing human knowledge at scale, not to create new, independent understanding of the world.

Language models work by identifying subtle, implicit patterns in human language that even linguists cannot fully articulate. Their success broadens our definition of "knowledge" to include systems that can embody and use information without the explicit, symbolic understanding that humans traditionally require.

Google's Titans architecture for LLMs mimics human memory by applying Claude Shannon's information theory. It scans vast data streams and flags "surprise": statistically unexpected or rare information relative to its training data. This novel data is then prioritized for long-term memory, which keeps that memory from being cluttered with irrelevant information.
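For the underlying math: Shannon's information content of an event with probability p is -log2(p) bits, so rarer events carry more information. The sketch below only illustrates that idea with made-up probabilities; it is not Titans' actual mechanism.

```python
# Shannon surprisal as a memory filter: rare (low-probability) tokens
# carry more bits of information and get stored. Probabilities and the
# threshold are invented for illustration.

import math

def surprise_bits(p: float) -> float:
    """Shannon information content of an event with probability p."""
    return -math.log2(p)

stream = [
    ("the",          0.05),   # assumed probability under the model
    ("cat",          0.01),
    ("defenestrate", 1e-6),   # rare token: statistically surprising
]

THRESHOLD = 10.0  # bits; arbitrary cutoff for "worth remembering"
long_term_memory = [tok for tok, p in stream if surprise_bits(p) > THRESHOLD]

for tok, p in stream:
    print(f"{tok:>13}: {surprise_bits(p):5.1f} bits")
print("stored:", long_term_memory)   # only the surprising token survives
```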

Models like Stable Diffusion achieve massive compression ratios (e.g., 50,000-to-1) because they aren't just storing data; they are learning the underlying principles and concepts. The resulting model is a compact 'filter' of intelligence that can generate novel outputs based on these learned principles.
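A quick sanity check of that ratio, using rough assumed sizes rather than actual figures for Stable Diffusion or its training set:

```python
# Back-of-the-envelope check of a 50,000-to-1 compression ratio:
# assumed ~100 TB of training images distilled into ~2 GB of weights.

training_data_tb = 100           # assumption: ~100 TB of image data
model_gb = 2                     # assumption: ~2 GB of model weights

ratio = (training_data_tb * 1e12) / (model_gb * 1e9)
print(f"compression ratio ≈ {ratio:,.0f} : 1")   # ≈ 50,000 : 1
```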

The 2017 introduction of "transformers" revolutionized AI. Instead of being trained on the specific meaning of each word, models began learning the contextual relationships between words. This allowed AI to predict the next word in a sequence without needing a formal dictionary, leading to more generalist capabilities.
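A toy illustration of the core mechanism, scaled dot-product attention: each word's representation is updated from its relationships to the other words, with no dictionary of fixed meanings. The embeddings and projection matrices here are random stand-ins for what a real model would learn.

```python
# Minimal self-attention over a three-word sequence. Random vectors
# stand in for learned embeddings and projections.

import numpy as np

rng = np.random.default_rng(0)
tokens = ["the", "cat", "sat"]
d = 8                                        # embedding dimension
X = rng.standard_normal((len(tokens), d))    # stand-in word embeddings

# Learned projections (random here) produce queries, keys, and values.
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv

scores = Q @ K.T / np.sqrt(d)                # pairwise token affinities
scores -= scores.max(axis=1, keepdims=True)  # stabilize the softmax
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
contextualized = weights @ V                 # each token becomes a mix of all

print(np.round(weights, 2))  # each row sums to 1: how a token attends to the rest
```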

The "memory" feature in today's LLMs is a convenience that saves users from re-pasting context. It is far from human memory, which abstracts concepts and builds pattern recognition. The true unlock will be when AI develops intuitive judgment from past "experiences" and data, a much longer-term challenge.

Unlike traditional software, large language models are not programmed with specific instructions. They evolve through a process where different strategies are tried, and those that receive positive rewards are repeated, making their behaviors emergent and sometimes unpredictable.
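A loose analogy in code: an epsilon-greedy bandit tries strategies at random, and the strategies that earn reward come to dominate its behavior. This is a toy illustration of reward-driven learning, not how LLM training actually works at scale.

```python
# Epsilon-greedy bandit: no strategy is programmed in; behavior emerges
# from which choices happened to be rewarded. Reward values are invented.

import random

random.seed(0)
true_reward = {"strategy_a": 0.2, "strategy_b": 0.8}   # hidden from the agent
estimates = {s: 0.0 for s in true_reward}
counts = {s: 0 for s in true_reward}

for step in range(1000):
    # Mostly exploit the best-looking strategy, occasionally explore.
    if random.random() < 0.1:
        choice = random.choice(list(true_reward))
    else:
        choice = max(estimates, key=estimates.get)
    reward = 1.0 if random.random() < true_reward[choice] else 0.0
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print(estimates)   # the rewarded strategy ends up preferred
print(counts)      # ...and gets repeated far more often
```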