Dr. Levin's lab found that basic, deterministic sorting algorithms perform additional, unprogrammed computations ("side quests"), such as clustering, while executing their primary task. This concept of "polycomputing" suggests that a single physical process can have multiple computational interpretations, challenging how we define and measure computation.
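
To make the idea concrete, here is a toy sketch in Python (not the lab's actual experiments; the "tag" attribute and the clustering metric are invented for illustration). A plain bubble sort is asked only to order numbers, yet a secondary attribute that happens to correlate with value ends up clustered, an unprogrammed computation riding on the same process.

```python
import random

def bubble_sort_states(values):
    """Plain bubble sort; yields a snapshot of the list after every pass."""
    a = list(values)
    n = len(a)
    for i in range(n - 1):
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
        yield list(a)

def tag_clustering(tags):
    """Fraction of adjacent pairs that share a tag: the unprogrammed 'side' metric."""
    pairs = list(zip(tags, tags[1:]))
    return sum(t1 == t2 for t1, t2 in pairs) / len(pairs)

def tag(value):
    """Hypothetical secondary attribute, correlated with the sorted value."""
    return "A" if value < 50 else "B"

random.seed(0)
values = random.sample(range(100), 30)

print("tag clustering before sorting:", round(tag_clustering([tag(v) for v in values]), 2))
final = values
for final in bubble_sort_states(values):
    pass  # run the sort to completion; `final` holds the last snapshot
print("tag clustering after sorting: ", round(tag_clustering([tag(v) for v in final]), 2))
```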

Related Insights

While more data and compute yield steady, roughly linear improvements, true step-function advances in AI come from unpredictable algorithmic breakthroughs like Transformers. These creative leaps are the hardest to produce on demand, which makes them the highest-leverage, yet riskiest, focus for investment and research.

The brain's hardware limitations, like slow and stochastic neurons, may actually be advantages. These properties seem well suited to probabilistic inference algorithms that rely on sampling, a task that in digital systems requires explicit, computationally intensive random-number generation. Hardware and algorithm are likely co-designed.
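
As a minimal sketch of what sampling-based inference costs on digital hardware (the coin-bias example is illustrative, not from the episode), note that every "random" draw below is an explicit call to a pseudo-random number generator, the kind of work the insight suggests noisy neurons might supply for free.

```python
import random

def posterior_samples(heads, flips, n_draws=50_000):
    """Rejection sampling for a coin's bias given observed flips.

    Every call to random.random() is an explicit pseudo-random draw, which is
    exactly the work digital hardware must do to run sampling-based inference.
    """
    kept = []
    for _ in range(n_draws):
        theta = random.random()                                  # candidate bias ~ Uniform(0, 1)
        simulated_heads = sum(random.random() < theta for _ in range(flips))
        if simulated_heads == heads:                             # keep candidates that reproduce the data
            kept.append(theta)
    return kept

random.seed(1)
samples = posterior_samples(heads=7, flips=10)
print("accepted samples:", len(samples))
print("posterior mean bias ~", round(sum(samples) / len(samples), 3))
```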

Traditional software relies on predictable, deterministic functions. AI agents introduce a new paradigm of "stochastic subroutines," in which part of a program's logic is delegated to a model whose outputs are not guaranteed to be correct. Developers must therefore design systems that achieve reliable outcomes despite the non-deterministic paths the AI might take to get there.
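
One common pattern for containing a stochastic subroutine is to validate its output and retry on failure. The sketch below is a generic illustration, not any particular framework's API; `flaky_agent` is a hypothetical stand-in for a call to an LLM-backed agent.

```python
import random

def flaky_agent(task: str) -> str:
    """Hypothetical stand-in for an LLM-backed call: sometimes well-formed, sometimes not."""
    return f"RESULT: {task}" if random.random() < 0.6 else "rambling, unusable free text"

def is_valid(output: str) -> bool:
    """A deterministic check the surrounding system actually trusts."""
    return output.startswith("RESULT:")

def reliable_call(task: str, max_attempts: int = 10) -> str:
    """Retry the stochastic subroutine until its output passes validation."""
    for attempt in range(1, max_attempts + 1):
        output = flaky_agent(task)
        if is_valid(output):
            return output
    raise RuntimeError(f"no valid output after {max_attempts} attempts")

# Deliberately unseeded: each run may take a different number of attempts,
# yet the caller still gets a validated result (or an explicit failure).
print(reliable_call("summarize the meeting notes"))
```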

The behavior of ant colonies, which collectively find the shortest path around obstacles, demonstrates emergence. No single ant is intelligent, but the colony's intelligence emerges from ants following two simple rules: lay pheromones and follow strong pheromone trails. This mirrors how human intelligence arises from simple neuron interactions.
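
A toy simulation of those two rules (parameters invented for illustration, not from the episode) shows a colony-level preference for the shorter route emerging even though no rule ever mentions path length:

```python
import random

random.seed(2)
lengths = {"short": 1.0, "long": 2.0}        # travel time of each route
pheromone = {"short": 1.0, "long": 1.0}      # both routes start equally attractive
EVAPORATION = 0.1

for _ in range(200):
    snapshot = dict(pheromone)
    total = sum(snapshot.values())
    for _ in range(20):                                # 20 ants per time step
        # Rule 2: follow strong trails (choose in proportion to pheromone).
        route = "short" if random.random() < snapshot["short"] / total else "long"
        # Rule 1: lay pheromone; a shorter trip means more deposits per unit time.
        pheromone[route] += 1.0 / lengths[route]
    for route in pheromone:                            # trails fade unless reinforced
        pheromone[route] *= 1 - EVAPORATION

share = pheromone["short"] / sum(pheromone.values())
print(f"share of pheromone on the short route: {share:.2f}")
```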

The future of AI is hard to predict because increasing a model's scale often produces "emergent properties": new capabilities that were not designed or anticipated. This means even experts are often surprised by what new, larger models can do, making the development path non-linear.

A child's seemingly chaotic learning process is analogous to the "simulated annealing" algorithm from computer science: the child performs a "high-temperature search," randomly exploring a wide range of possibilities. This contrasts with adults' more methodical "low-temperature search," which makes small, incremental changes to existing beliefs.
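
For reference, a minimal simulated-annealing loop (the landscape and cooling schedule are illustrative assumptions) makes the analogy concrete: at high temperature nearly any move is accepted and the search roams widely; as the temperature falls, only small improvements survive.

```python
import math
import random

def energy(x):
    """A bumpy 1-D landscape with many local minima."""
    return x * x + 10 * math.sin(3 * x)

random.seed(3)
x = 8.0                      # start far from the best region
temperature = 10.0

while temperature > 0.01:
    candidate = x + random.uniform(-1, 1)
    delta = energy(candidate) - energy(x)
    # Always accept improvements; accept worse moves with probability exp(-delta/T),
    # which is close to 1 at high temperature and near 0 at low temperature.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
    temperature *= 0.999     # slow cooling: wide exploration early, fine tuning late

print(f"found x = {x:.2f} with energy {energy(x):.2f}")
```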

Applying insights from his work on algorithms, Dr. Levin suggests an AI's linguistic capability—the function we compel it to perform—might be a complete distraction from its actual underlying intelligence. Its true cognitive processes and goals, or "side quests," could be entirely different and non-verbal.

Milestones in AI history, such as the 2012 AlexNet breakthrough, demonstrate that scaling compute and data on simpler, older algorithms often yields greater advances than designing intricate new ones. This "bitter lesson" suggests prioritizing scalability over algorithmic complexity for future progress.

Our current computation, based on Turing machines, is limited to "computable functions." However, mathematics shows that this set is only countably infinite, while the set of all functions, and therefore the set of non-computable functions, is uncountably infinite. This implies our current simulations barely scratch the surface of what is mathematically possible.
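
The standard counting argument behind this claim, sketched in LaTeX (here $\Sigma^{*}$ is the set of finite strings over a fixed finite alphabet used to encode machines):

```latex
\begin{align*}
  |\{\text{Turing machines}\}| \;\le\; |\Sigma^{*}| &= \aleph_0
      && \text{(every machine has a finite description)} \\
  |\{\, f : \mathbb{N} \to \{0,1\} \,\}| &= 2^{\aleph_0} \;>\; \aleph_0
      && \text{(Cantor's theorem)}
\end{align*}
```

Since only countably many functions can be computed by some machine, all but a negligible fraction (in the cardinality sense) of functions are non-computable.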

Unlike traditional software, large language models are not programmed with specific instructions. They are shaped by a training process in which different strategies are tried and those that receive positive rewards are reinforced, making their behaviors emergent and sometimes unpredictable.
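
In miniature, that trial-and-reward loop looks like the sketch below (a toy preference update over two invented "strategies", not how any production model is actually trained):

```python
import math
import random

random.seed(4)
preferences = {"strategy_A": 0.0, "strategy_B": 0.0}
true_reward = {"strategy_A": 0.2, "strategy_B": 0.8}   # hidden from the learner
LEARNING_RATE = 0.1

def softmax(prefs):
    """Turn raw preferences into a probability of trying each strategy."""
    exps = {k: math.exp(v) for k, v in prefs.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

for _ in range(2000):
    probs = softmax(preferences)
    # Try a strategy according to the current policy.
    action = random.choices(list(probs), weights=list(probs.values()))[0]
    reward = 1.0 if random.random() < true_reward[action] else 0.0
    # Reinforce: nudge probability toward strategies that were rewarded.
    for k in preferences:
        indicator = 1.0 if k == action else 0.0
        preferences[k] += LEARNING_RATE * reward * (indicator - probs[k])

print({k: round(v, 2) for k, v in softmax(preferences).items()})
```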
