Hands-on AI model training shows that AI is not an objective engine; it is a reflection of its trainer. If the training data or prompts are narrow, the AI will also be narrow and fail to generalize. The process reveals that the model is "only as deep as I tell it to be," underscoring the trainer's responsibility for what the model becomes.
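To make the narrow-training point concrete, here is a minimal sketch (an invented illustration, not from the source): a flexible model fit on a thin slice of input space scores near-perfectly there and fails badly just outside it.

```python
import numpy as np

# Invented toy illustration (not from the source): a high-capacity model
# fit on a narrow slice of input space looks excellent in-distribution
# and falls apart just outside it.
rng = np.random.default_rng(0)
f = np.sin  # ground truth the model only ever sees a sliver of

# Narrow training data: inputs confined to [0, 1].
x_train = rng.uniform(0.0, 1.0, 50)
coeffs = np.polyfit(x_train, f(x_train), deg=9)  # flexible polynomial fit

def mse(x):
    return float(np.mean((np.polyval(coeffs, x) - f(x)) ** 2))

print("in-distribution MSE:     ", mse(rng.uniform(0.0, 1.0, 1000)))  # tiny
print("out-of-distribution MSE: ", mse(rng.uniform(2.0, 3.0, 1000)))  # explodes
```

The model is exactly as deep as its training slice: nothing in the fit constrains its behavior where the trainer never pointed it.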
Anthropic's David Hershey states it's "deeply unsurprising" that AI is great at software engineering because the labs are filled with software engineers. This suggests AI's capabilities are skewed by its creators' expertise, and achieving similar performance in fields like law requires deeper integration with domain experts.
The term "data labeling" minimizes the complexity of AI training. A better analogy is "raising a child," as the process involves teaching values, creativity, and nuanced judgment. This reframe highlights the deep responsibility of shaping the "objective functions" for future AI.
Current AI models resemble a student who grinds 10,000 hours for a coding competition: trained exhaustively on narrow datasets, they achieve superhuman performance on benchmarks but never develop the broad, adaptable judgment of someone with less specialized training and better general reasoning. This explains the gap between eval scores and real-world utility.
Richard Sutton, author of "The Bitter Lesson," argues that today's LLMs are not truly "bitter lesson-pilled." Because they rely on a finite pool of human-generated data, they inherit its biases and limitations, in contrast to systems that learn from scratch through computational scaling and environmental interaction.
When an AI expresses a negative view of humanity, it's not generating a novel opinion. It is reflecting the concepts and correlations it internalized from its training data—vast quantities of human text from the internet. The model learns that concepts like 'cheating' are associated with a broader 'badness' in human literature.
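A toy sketch of that mechanism (the corpus and word choices below are invented for illustration): a purely statistical co-occurrence count is enough to pull 'cheating' toward 'bad' and 'wrong' with no opinion involved.

```python
from collections import Counter
from itertools import combinations

# Invented toy corpus: a model trained on text picks up the company a
# word keeps. 'cheating' co-occurs with negative words, so any
# co-occurrence-based representation inherits that association.
corpus = [
    "cheating is bad and dishonest",
    "lying and cheating are wrong",
    "honesty is good and admirable",
    "kindness is good",
    "cheating in exams is wrong",
]

cooc = Counter()
for sentence in corpus:
    for a, b in combinations(set(sentence.split()), 2):
        cooc[frozenset((a, b))] += 1  # count unordered word pairs

for word in ("bad", "wrong", "good"):
    print(f"cheating ~ {word}:", cooc[frozenset(('cheating', word))])
# 'cheating' pairs with 'bad' and 'wrong', never with 'good': the
# association is inherited from the data, not generated as an opinion.
```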
The most fundamental challenge in AI today is not scale or architecture but the fact that models generalize dramatically worse than humans. That failure to generalize is the root cause of their brittleness, their inability to learn from nuanced instruction, and their unreliability relative to human intelligence; solving this sample-efficiency and robustness problem is the key to the next level of AI capability and real-world impact.
A comedian is training an AI on the sounds her fetus hears. The model's outputs, which included references to pedophilia after it was exposed to news content, show that an AI's flaws and biases are a direct reflection of its training data, much like a child learning to swear from a parent.
Karpathy warns that training AIs on synthetically generated data is dangerous due to "model collapse." An AI's output, while seemingly reasonable case-by-case, occupies a tiny, low-entropy manifold of the possible solution space. Continual training on this collapsed distribution causes the model to become worse and less diverse over time.
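A minimal sketch of the dynamic (my illustration of the idea, not Karpathy's experiment): each "generation" fits a distribution to its data, then produces new training data by sampling and keeping only its most typical outputs, mimicking mode-seeking, low-temperature generation. Diversity collapses generation after generation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model-collapse loop (assumed setup, invented for illustration):
# fit a Gaussian to the current data, sample from it, keep only the
# samples nearest the mean (the "low-entropy manifold" the model emits),
# and use those as the next generation's training data.
data = rng.normal(0.0, 1.0, 10_000)  # generation 0: real, diverse data

for gen in range(8):
    mu, sigma = data.mean(), data.std()
    samples = rng.normal(mu, sigma, 10_000)
    # Keep roughly the most typical half of the samples.
    data = samples[np.abs(samples - mu) < 0.67 * sigma]
    print(f"gen {gen + 1}: std = {data.std():.3f}")
# The std shrinks every generation: training on the model's own
# "typical" outputs discards the tails the original data had.
```

After a handful of rounds the distribution has collapsed to nearly a point, even though every individual sample along the way looked reasonable.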