The behavior of ant colonies, which collectively find the shortest path around obstacles, demonstrates emergence. No single ant is intelligent, but the colony's intelligence emerges from ants following two simple rules: lay pheromones and follow strong pheromone trails. This mirrors how human intelligence arises from simple neuron interactions.
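Those two rules can be sketched as a toy simulation. The model below is a minimal illustration, not a real ant-colony-optimization implementation: it assumes a hypothetical two-route world (one short path, one long) where each ant picks a route in proportion to its pheromone level, shorter trips deposit more pheromone per round trip, and trails evaporate over time. The shortest path emerges even though no individual ant ever compares the routes.

```python
import random

def simulate(short_len=1.0, long_len=2.0, n_ants=50, n_steps=100,
             evaporation=0.1, seed=0):
    """Two rules only: follow strong trails, lay pheromone."""
    rng = random.Random(seed)
    pher = {"short": 1.0, "long": 1.0}  # both trails start equal
    for _ in range(n_steps):
        deposits = {"short": 0.0, "long": 0.0}
        for _ in range(n_ants):
            # Rule 2: follow strong trails (choice weighted by pheromone)
            total = pher["short"] + pher["long"]
            route = "short" if rng.random() < pher["short"] / total else "long"
            # Rule 1: lay pheromone; shorter trips deposit more per unit time
            deposits[route] += 1.0 / (short_len if route == "short" else long_len)
        for r in pher:
            # evaporation decays old trails, deposits reinforce used ones
            pher[r] = (1 - evaporation) * pher[r] + deposits[r]
    return pher

p = simulate()
# The short route's pheromone comes to dominate: collective "intelligence"
# from positive feedback between individually unintelligent agents.
```

Evaporation is what keeps the colony adaptive: without it, an early random bias toward the long route could lock in permanently.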

Related Insights

Multi-agent systems work well for easily parallelizable, "read-only" tasks like research, where sub-agents gather context independently. They are much trickier for "write" tasks like coding, where conflicting decisions between agents create integration problems.

Our sense that we first perceive and then react is an illusion. The brain constantly predicts the next moment from past experience, preparing actions before sensory information fully arrives. This predictive process is far more efficient than reacting to the world from scratch: in effect, we act first, then sense.

Across three billion years and four stages of mind (molecule, neuron, network, community), intelligence has consistently advanced by diversifying its thinking elements. The most powerful minds at each stage are those with the greatest variety of components. This frames diversity as a fundamental, time-tested strategy for improving competence in any system, including organizations.

Traditional corporate structures are too rigid for today's environment. The octopus serves as a better model, with distributed intelligence in its tentacles allowing for autonomous yet coordinated action, sensory awareness of customers, and rapid adaptation.

A useful mental model for AGI is child development. Just as a child can be left unsupervised for progressively longer periods, AI agents are seeing their autonomous runtimes increase. AGI arrives when it becomes economically profitable to let an AI work continuously without supervision, much like an independent adult.

An advanced AI will likely be sentient. Therefore, it may be easier to align it to a general principle of caring for all sentient life—a group to which it belongs—rather than the narrower, more alien concept of caring only for humanity. This leverages a potential for emergent, self-inclusive empathy.

Afeyan proposes that AI's emergence forces us to broaden our definition of intelligence beyond humans. By viewing nature—from cells to ecosystems—as intelligent systems capable of adaptation and anticipation, we can move beyond reductionist biology to unlock profound new understandings of disease.

Human intelligence is multifaceted. While LLMs excel at linguistic intelligence, they lack spatial intelligence—the ability to understand, reason about, and interact within a 3D world. This capability, crucial for tasks from robotics to scientific discovery, is the focus of the next wave of AI models.

To build robust social intelligence, AIs cannot be trained solely on positive examples of cooperation. Like pre-training an LLM on all of language, social AIs must be trained on the full manifold of game-theoretic situations—cooperation, competition, team formation, betrayal. This builds a foundational, generalizable model of social theory of mind.
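One way to picture "the full manifold of game-theoretic situations" is as a curriculum of two-player payoff matrices spanning pure common interest to pure conflict. The sketch below is a hypothetical illustration (the `mix` parameter and `sample_game` helper are inventions for this example, not anything from the source): at `mix=0` both players share identical payoffs (cooperation), and at `mix=1` payoffs are constant-sum (zero-sum competition), with mixed-motive games—where betrayal and team formation become possible—in between.

```python
import random

def sample_game(rng, mix):
    """Sample a 2x2 game. mix=0 -> identical payoffs (pure cooperation);
    mix=1 -> payoffs sum to a constant (pure conflict)."""
    a = [[rng.random() for _ in range(2)] for _ in range(2)]  # player A's payoffs
    b = [[(1 - mix) * a[i][j] + mix * (1 - a[i][j]) for j in range(2)]
         for i in range(2)]  # player B interpolates between aligned and opposed
    return a, b

rng = random.Random(0)
# A training curriculum sweeping across the cooperation-conflict spectrum
curriculum = [sample_game(rng, mix=k / 9) for k in range(10)]
```

An agent trained across such a spectrum must infer, per interaction, where the other player's incentives lie—a crude stand-in for the generalizable theory of mind the passage describes.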

Biological intelligence has no OS or APIs; the physics of the brain *is* the computation. Unconventional AI's CEO Naveen Rao argues that current AI is inefficient because it runs on layers of abstraction. The future is hardware where intelligence is an emergent property of the system's physics.

Complex Group Intelligence Arises from Simple, Individual Rules, Not Central Command | RiffOn