An LLM's core function is predicting the next word. So when the word that actually arrives is one the model assigned low probability, that prediction error registers as surprise. This mechanism gives it an innate ability to identify "interesting" or novel concepts within a body of text.
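As a rough illustration, this kind of surprise can be quantified as surprisal: the negative log-probability the model assigned to the word that actually showed up. The toy distribution below is invented for the example, not taken from a real model.

```python
import math

# Toy next-word distribution for the prefix "the cat sat on the ..."
# (probabilities are invented for illustration, not from a real model)
next_word_probs = {"mat": 0.60, "sofa": 0.25, "floor": 0.14, "algorithm": 0.01}

def surprisal(word, probs):
    """Surprise in bits: -log2 of the probability assigned to the observed word."""
    return -math.log2(probs[word])

for word in ("mat", "algorithm"):
    print(f"{word!r}: {surprisal(word, next_word_probs):.2f} bits of surprise")
# A likely continuation ("mat") carries ~0.74 bits; an unlikely one
# ("algorithm") carries ~6.64 bits and would stand out as interesting.
```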

Related Insights

A core debate in AI is whether LLMs, which are text prediction engines, can achieve true intelligence. Critics argue they cannot because they lack a model of the real world. This prevents them from making meaningful, context-aware predictions about future events—a limitation that more data alone may not solve.

LLMs predict the next token in a sequence. The brain's cortex may function as a general prediction engine capable of "omnidirectional inference"—predicting any missing information from any available subset of inputs, not just what comes next. This offers a more flexible and powerful form of reasoning.
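To make the contrast concrete, here is a toy count-based sketch (the mini-corpus and helper functions are invented for illustration): a next-token predictor can only extend a prefix left to right, while an "omnidirectional" predictor can fill a blank anywhere using context on both sides.

```python
from collections import Counter

# Tiny corpus of three-word "sentences" (invented for illustration).
corpus = [
    ("the", "cat", "sleeps"),
    ("the", "cat", "purrs"),
    ("the", "dog", "barks"),
    ("a", "dog", "barks"),
]
counts = Counter(corpus)

def predict_next(prefix):
    """Next-token style: given a left-to-right prefix, rank possible continuations."""
    matches = Counter()
    for sent, n in counts.items():
        if sent[:len(prefix)] == prefix:
            matches[sent[len(prefix)]] += n
    return matches.most_common()

def fill_blank(pattern):
    """'Omnidirectional' style: fill a missing slot (None) using context on BOTH sides."""
    blank = pattern.index(None)
    matches = Counter()
    for sent, n in counts.items():
        if all(p is None or p == w for p, w in zip(pattern, sent)):
            matches[sent[blank]] += n
    return matches.most_common()

print(predict_next(("the", "cat")))        # [('sleeps', 1), ('purrs', 1)]
print(fill_blank(("the", None, "barks")))  # [('dog', 1)], inferred from both sides
```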

In a 2018 interview, OpenAI's Greg Brockman described their foundational training method: ingesting thousands of books with the sole task of predicting the next word. This simple predictive objective was the key that unlocked complex, generalizable language understanding in their models.

When AI pioneers like Geoffrey Hinton see agency in an LLM, they are misinterpreting the output. What they are actually witnessing is a compressed, probabilistic reflection of the immense creativity and knowledge from all the humans who created its training data. It's an echo, not a mind.

The critique that LLMs lack true creativity because they only recombine and predict existing data is challenged by the observation that human creativity, particularly in branding and marketing, often operates on the same principle: combining existing concepts in novel ways so they feel fresh, much like an LLM.

Early Wittgenstein's "logical space of possibilities" mirrors how LLM embeddings map words into a high-dimensional space. Late Wittgenstein's "language games" explain their core function: next-token prediction and learning through interactive feedback (RLHF), where meaning is derived from use and context.
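As a rough picture of that high-dimensional space, here is a toy example with hand-made 3-dimensional vectors (real embeddings are learned and have hundreds or thousands of dimensions); cosine similarity measures how close two words sit in the space.

```python
import math

# Hand-made 3-D "embeddings" (illustrative only; real embeddings are learned).
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine(u, v):
    """Cosine similarity: near 1.0 means nearby in the space, near 0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

print(cosine(embeddings["king"], embeddings["queen"]))  # high: close in the space
print(cosine(embeddings["king"], embeddings["apple"]))  # low: far apart
```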

Google's Titans architecture for LLMs mimics human memory by applying Claude Shannon's information theory. It scans vast data streams and identifies "surprise"—statistically unexpected or rare information relative to its training data. This novel data is then prioritized for long-term memory, preventing clutter from irrelevant information.
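A heavily simplified sketch of the surprise-gated idea, not the actual Titans mechanism (which uses a learned neural memory updated at test time); the probability estimates and threshold below are invented for illustration. Items whose surprise clears the threshold get written to long-term memory; the rest are dropped.

```python
import math

# Toy "expectation" model: how probable each incoming item is, according to
# what the system has already seen. Values are invented for illustration.
expected_prob = {
    "daily stand-up at 9am": 0.50,
    "coffee machine is broken": 0.20,
    "CEO resigned this morning": 0.01,
}

SURPRISE_THRESHOLD_BITS = 4.0  # assumed cutoff, chosen for the example

def surprise_bits(item):
    """Shannon surprisal: rarer items carry more bits of surprise."""
    return -math.log2(expected_prob[item])

long_term_memory = []
for item in expected_prob:
    s = surprise_bits(item)
    if s > SURPRISE_THRESHOLD_BITS:
        long_term_memory.append(item)  # novel enough to keep
    print(f"{item!r}: {s:.1f} bits -> {'stored' if s > SURPRISE_THRESHOLD_BITS else 'dropped'}")

print("memory:", long_term_memory)  # only the rare, surprising item is retained
```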

The 'attention' mechanism in AI has roots in 1990s robotics. Dr. Wallace built a robotic eye with high resolution at its center and lower resolution in the periphery. The system detected 'interesting' data (e.g., movement) in the periphery and rapidly shifted its high-resolution gaze—its 'attention'—to that point, a physical analog to how LLMs weigh words.
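For the LLM side of the analogy, here is a minimal scaled dot-product attention sketch with toy 2-dimensional vectors (invented for illustration): the softmax weights play the role of the gaze, concentrating processing on whichever inputs score as most relevant to the query.

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention over toy vectors."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)  # where the "gaze" lands
    output = [sum(w * v[i] for w, v in zip(weights, values)) for i in range(len(values[0]))]
    return weights, output

# Toy 2-D vectors (illustrative only): the query resembles the second key,
# so most of the attention weight shifts there.
keys   = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
values = [[10.0, 0.0], [0.0, 10.0], [5.0, 5.0]]
weights, out = attention([0.1, 0.9], keys, values)
print([round(w, 2) for w in weights], [round(x, 2) for x in out])
```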

Unlike traditional software, large language models are not programmed with explicit instructions. They are shaped by trial and error: different strategies are tried, and those that earn positive rewards are reinforced, making their behaviors emergent and sometimes unpredictable.
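A toy illustration of "strategies that earn rewards get reinforced": a two-armed bandit with a softmax policy and a policy-gradient-style update. The strategies, rewards, and learning rate are invented for the example, and this is far simpler than how real LLMs are actually trained or fine-tuned.

```python
import math, random

random.seed(0)

strategies = ["helpful answer", "evasive answer"]
reward = {"helpful answer": 1.0, "evasive answer": 0.0}  # invented rewards
prefs = [0.0, 0.0]  # preference scores, start with no bias
LR = 0.1

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

for step in range(500):
    probs = softmax(prefs)
    i = random.choices(range(len(strategies)), weights=probs)[0]  # try a strategy
    r = reward[strategies[i]]
    baseline = sum(p * reward[s] for p, s in zip(probs, strategies))
    # Policy-gradient-style update: raise the preference for choices that beat
    # the expected reward, lower it for choices that fall short.
    for j in range(len(prefs)):
        indicator = 1.0 if j == i else 0.0
        prefs[j] += LR * (r - baseline) * (indicator - probs[j])

print(softmax(prefs))  # probability mass shifts toward the rewarded strategy
```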

Instead of forcing AI to be as deterministic as traditional code, we should embrace its "squishy" nature. Humans have deep-seated biological and social models for dealing with unpredictable, human-like agents, making these systems more intuitive to interact with than rigid software.