Language models work by identifying subtle, implicit patterns in human language that even linguists cannot fully articulate. Their success broadens our definition of "knowledge" to include systems that can embody and use information without the explicit, symbolic understanding that humans traditionally require.
In a 2018 interview, OpenAI's Greg Brockman described their foundational training method: ingesting thousands of books with the sole task of predicting the next word. This simple predictive objective was the key that unlocked complex, generalizable language understanding in their models.
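A minimal sketch of that next-word objective, using a toy vocabulary and a tiny PyTorch model (the vocabulary, sizes, and training loop here are purely illustrative, not OpenAI's actual setup): the model reads each token and is scored on how well it predicts the one that follows.

```python
# Next-word prediction as a training objective: a toy, hypothetical example.
import torch
import torch.nn as nn

vocab = ["the", "cat", "sat", "on", "mat", "."]
stoi = {w: i for i, w in enumerate(vocab)}
text = "the cat sat on the mat .".split()
ids = torch.tensor([stoi[w] for w in text])

# Inputs are every token except the last; targets are the same sequence shifted by one.
x, y = ids[:-1], ids[1:]

model = nn.Sequential(
    nn.Embedding(len(vocab), 16),   # map each token to a 16-dimensional vector
    nn.Linear(16, len(vocab)),      # score every candidate next token
)
opt = torch.optim.Adam(model.parameters(), lr=0.1)

for step in range(200):
    logits = model(x)                                  # (seq_len, vocab_size)
    loss = nn.functional.cross_entropy(logits, y)      # "guess the next word"
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, probability mass after "the" is split between "cat" and "mat",
# the two words that follow it in this tiny corpus.
probs = torch.softmax(model(torch.tensor([stoi["the"]])), dim=-1)
print({w: round(probs[0, i].item(), 2) for i, w in enumerate(vocab)})
```

Everything a large model learns is driven by this same scalar loss; only the data and the network between embedding and output layer get bigger.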
Reinforcement learning incentivizes AIs to find the right answer, not just mimic human text. This leads them to develop their own internal "dialect" for reasoning: a chain of thought that is effective but increasingly incomprehensible and alien to human observers.
When AI pioneers like Geoffrey Hinton see agency in an LLM, they are misinterpreting the output. What they are actually witnessing is a compressed, probabilistic reflection of the immense creativity and knowledge from all the humans who created its training data. It's an echo, not a mind.
Modern LLMs use a simple form of reinforcement learning that directly rewards successful outcomes. This contrasts with more sophisticated methods, like those in AlphaGo or the brain, which use "value functions" to estimate long-term consequences. It's a mystery why the simpler approach is so effective.
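A rough sketch of that contrast, on a hypothetical three-step episode with made-up numbers: the outcome-based rule nudges every step by the same terminal reward, while the value-function rule (AlphaGo-style) nudges each step by a learned critic's temporal-difference error.

```python
# Two credit-assignment schemes for the same episode; all values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
log_prob_grads = [rng.normal(size=4) for _ in range(3)]  # gradient of log-policy at each step
rewards = [0.0, 0.0, 1.0]      # outcome-only signal: success arrives at the end
gamma = 0.99

# (a) Outcome-based update: every step in the episode is pushed by the final reward.
final_reward = rewards[-1]
outcome_update = sum(final_reward * g for g in log_prob_grads)

# (b) Value-function update: a critic V(s) estimates long-term return, and each
#     step is pushed by its temporal-difference error instead of the raw outcome.
V = [0.2, 0.5, 0.8, 0.0]       # hypothetical critic estimates (terminal state = 0)
td_errors = [rewards[t] + gamma * V[t + 1] - V[t] for t in range(3)]
value_update = sum(d * g for d, g in zip(td_errors, log_prob_grads))

print("outcome-based step:", outcome_update)
print("value-based step:  ", value_update)
```

The outcome-based scheme treats every step of a long answer as equally responsible for the result, which is exactly the simplification whose surprising effectiveness the claim above points to.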
The "Omniscience" accuracy benchmark, which measures pure factual knowledge, tracks more closely with a model's total parameters than any other metric. This suggests embedded knowledge is a direct function of model size, distinct from reasoning abilities developed via training techniques.
The early Wittgenstein's "logical space of possibilities" mirrors how LLM embeddings map words into a high-dimensional space. The later Wittgenstein's "language games" explain their core function: next-token prediction and learning through interactive feedback (RLHF), where meaning is derived from use and context.
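To make the "logical space" reading concrete, here is a toy illustration with invented four-dimensional embeddings (real models learn hundreds or thousands of dimensions from data): words become points, and relatedness falls out of geometric closeness.

```python
# Words as points in a geometric space; the vectors below are hypothetical.
import numpy as np

emb = {
    "king":    np.array([0.9, 0.8, 0.1, 0.0]),
    "queen":   np.array([0.9, 0.7, 0.9, 0.0]),
    "cabbage": np.array([0.0, 0.1, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(emb["king"], emb["queen"]))    # high: nearby in the space
print(cosine(emb["king"], emb["cabbage"]))  # low: far apart
```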
The common metaphor of AI as an artificial being is wrong. It's better understood as a 'cultural technology,' like print or libraries. Its function is to aggregate, summarize, and transmit existing human knowledge at scale, not to create new, independent understanding of the world.
The 2017 introduction of "transformers" revolutionized AI. Instead of being trained on the specific meaning of each word, models began learning the contextual relationships between words. This allowed AI to predict the next word in a sequence without needing a formal dictionary, leading to more generalist capabilities.
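A compact sketch of the attention computation behind this shift, with random toy vectors standing in for real token representations: each word's vector is rebuilt as a weighted mixture of the other words in the sequence, so its "meaning" depends on context rather than a fixed dictionary entry. Shapes and values are illustrative, not a faithful reimplementation of any particular model.

```python
# Scaled dot-product attention over a toy 5-token sequence.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d = 5, 8                      # 5 tokens, 8-dimensional vectors
X = rng.normal(size=(seq_len, d))      # stand-in token representations

Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
Q, K, V = X @ Wq, X @ Wk, X @ Wv       # queries, keys, values

scores = Q @ K.T / np.sqrt(d)          # how relevant is token j to token i?
weights = np.exp(scores) / np.exp(scores).sum(-1, keepdims=True)  # row-wise softmax
contextual = weights @ V               # each row mixes information from all tokens

print(weights.round(2))                # row i: attention paid by token i to every token
```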
To improve LLM reasoning, researchers feed them data that inherently contains structured logic. Training on computer code was an early breakthrough, as it teaches patterns of reasoning far beyond coding itself. Textbooks are another key source for building smaller, effective models.
Unlike traditional software, large language models are not programmed with specific instructions. They evolve through a process where different strategies are tried, and those that receive positive rewards are repeated, making their behaviors emergent and sometimes unpredictable.
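A toy illustration of that trial-and-reward loop, using a made-up set of three "strategies" with hidden success rates: choices that happen to earn a reward become more probable on the next attempt, so the final behavior is selected by feedback rather than specified by a programmer.

```python
# "Try strategies, repeat the rewarded ones": a softmax policy over three
# hypothetical strategies, updated by a simple policy-gradient rule.
import numpy as np

rng = np.random.default_rng(0)
prefs = np.zeros(3)                      # learned preference per strategy
true_success_rate = [0.2, 0.5, 0.9]      # unknown to the learner
lr = 0.5

for _ in range(500):
    probs = np.exp(prefs) / np.exp(prefs).sum()
    a = rng.choice(3, p=probs)                            # try a strategy
    reward = float(rng.random() < true_success_rate[a])   # did it work?
    grad = -probs                                         # gradient of log-probability
    grad[a] += 1.0
    prefs += lr * reward * grad          # rewarded choices become more likely

probs = np.exp(prefs) / np.exp(prefs).sum()
print(probs.round(2))                    # most probability mass lands on the best strategy
```

Nothing in the loop names the winning strategy in advance; it emerges from which attempts happened to be rewarded, which is why the resulting behavior can be hard to predict.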