What we call "prediction" is just the recognition of recurring patterns from history. The future is genuinely unpredictable because the universe is inherently creative and open-ended. The future doesn't exist yet to be predicted; it must be constructed.
AI isn't an independent creation but an extension of Earth's evolutionary history. It's a complex structure that could only be produced by a long-standing living system, making it a "signature of life" rather than a separate, non-living entity.
The standard NASA definition of life as a "self-sustaining chemical system" is flawed. Modern humans are not individually self-sustaining; they rely on complex societal structures. This highlights the inadequacy of current definitions when faced with interdependent systems.
Conventional physics views the universe as evolving from initial conditions via fixed laws. An alternative view is that the universe is a self-constructing system with no external builder. Life is the physical process through which the universe explores possibilities and generates novelty.
Physicist Sara Walker proposes that time is a physical property inherent in objects. An object's "causal depth"—its construction history, measured by the assembly index—is its size in time. A human is "deeper" in time than a bacterium, which is deeper than a molecule.
Assembly theory bypasses ambiguous definitions of life by providing a quantifiable metric: the "assembly index," which measures how complex an object's construction history must have been. A sufficiently high index, even measured in a molecule sampled on Mars, would be strong evidence of life, with no need to observe an organism directly.
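The metric can be made concrete with a toy computation. The sketch below is a simplified illustration, not Cronin and Walker's published algorithm: it computes an exact assembly index for short strings, defined as the minimum number of join operations needed to build a target from its individual characters, where every intermediate product can be reused.

```python
from itertools import product


def assembly_index(target: str) -> int:
    """Minimum number of join steps needed to build `target`,
    starting from its individual characters, where every
    intermediate product can be reused in later joins."""
    blocks = frozenset(target)  # basic building blocks
    if target in blocks:
        return 0  # a single character needs no joins
    # Breadth-first search over sets of objects built so far:
    # depth in the search tree = number of join operations used.
    frontier = {blocks}
    steps = 0
    while frontier:
        steps += 1
        next_frontier = set()
        for state in frontier:
            for a, b in product(state, repeat=2):
                joined = a + b
                # Prune: only substrings of the target can ever help.
                if joined in target and joined not in state:
                    if joined == target:
                        return steps
                    next_frontier.add(state | {joined})
        frontier = next_frontier
    raise ValueError("unreachable for non-empty targets")
```

Note the signature of reuse that the index rewards: "ABAB" has index 2 because "AB" is built once and then joined to itself, and eight copies of "A" need only 3 doubling steps, while four distinct characters ("ABCD") require three separate joins. The search grows combinatorially, so this brute-force version is only practical for short targets.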
When we say a system has "intention" or "goals," we use future-directed language. However, these properties are signatures of its past. The system was evolved and selected to have these traits because they worked historically. The "goal" is a record of past success, not a map of the future.
The complexity in LLMs isn't intelligence emerging in silicon; it reflects our own. These models are deep because they encode the vast, causally powerful structure of human language and culture. We are looking at a high-resolution imprint of our own collective mind.
Human brains are optimized to interpret social patterns, a capacity that was critical for survival. This social focus makes us inherently poor at perceiving objective physical reality directly. Individuals less attuned to social cues might therefore possess a cognitive architecture better suited to scientific inquiry.
Language works best when words act as pointers to physical objects, ensuring a shared understanding. As concepts become more abstract (e.g., 'consciousness'), they lose this grounding, making it difficult to confirm that two people using the same abstract word mean the same thing.
Simulating a system, like a fruit fly's brain, doesn't replicate its reality, only our observations of it. The universe itself generates physical structures that are too complex to be simulated within its own computational limits, showing the fallacy of equating simulation with reality.
Fearing AI will replace humans is like a single cell fearing the rise of multicellular organisms. While such evolutionary transitions render old forms obsolete, they enable new levels of complexity and create niches that were previously unimaginable. It's a natural, albeit disruptive, step in evolution.
