To move from philosophy to science, abstract theories about consciousness must make concrete, falsifiable predictions about the physical world. Hoffman's work attempts this by proposing precise mathematical links between conscious agent dynamics and observable particle properties like mass and spin.
This theory posits that our lives don't *create* subjective experiences (qualia). Instead, our lives are the emergent result of a fundamental consciousness cycling through a sequence of possible qualia, dictated by probabilistic, Markovian rules.
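To make "Markovian" concrete: it means the next qualia state depends only on the current one, not on the full history of states that came before. The sketch below is purely illustrative (the state labels and probabilities are invented for this example, not drawn from Hoffman's formalism); it simply samples such a sequence from a small transition table.

```python
import random

# Hypothetical illustration: a tiny Markov chain over labeled "qualia" states.
# The next state depends only on the current state and a fixed probability table.
# TRANSITIONS[current][next] = probability of moving from current to next.
TRANSITIONS = {
    "red":    {"red": 0.1, "warmth": 0.6, "tone": 0.3},
    "warmth": {"red": 0.5, "warmth": 0.2, "tone": 0.3},
    "tone":   {"red": 0.4, "warmth": 0.4, "tone": 0.2},
}

def step(state: str) -> str:
    """Sample the next qualia state given only the current one (the Markov property)."""
    options = list(TRANSITIONS[state].keys())
    weights = list(TRANSITIONS[state].values())
    return random.choices(options, weights=weights, k=1)[0]

def run(start: str, n: int) -> list[str]:
    """Generate a sequence of n qualia states starting from `start`."""
    sequence = [start]
    for _ in range(n - 1):
        sequence.append(step(sequence[-1]))
    return sequence

print(run("red", 10))
```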
Within the consciousness-as-fundamental model, dark matter and dark energy aren't mysterious substances. They are the observable effects inside our space-time "headset" caused by countless other conscious agent interactions and qualia states that are "dark" to us: they influence our reality but are not projected into it.
Emmett Shear suggests a concrete method for assessing AI consciousness. By analyzing an AI's internal state for recurring homeostatic loops, and for hierarchies of such loops, one could infer subjective states. A second-order dynamic could indicate pain and pleasure, while higher orders could indicate thought.
Even Donald Hoffman, proponent of the consciousness-first model, admits his emotions and intuition resist his theory. He relies solely on the logical force of the mathematics to carry the theory forward, demonstrating that groundbreaking ideas often feel profoundly wrong before they can be proven.
Hoffman's theory posits that our perceived world is not a persistent, objective reality but a simulation rendered only when observed. On this model, an object ceases to exist when you look away and is re-rendered the moment you observe it again.
To determine if an AI has subjective experience, one could analyze its internal belief manifold for multi-tiered, self-referential homeostatic loops. Pain and pleasure, for example, can be seen as second-order dynamics of a system's internal states: a model of its own model. This provides a technical test for being-ness that goes beyond observing simple behavior.
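As a rough illustration of what such a test might look like (an assumption-laden sketch, not Shear's actual procedure, with all names and thresholds invented), one can represent each regulatory loop together with the lower-order loop it monitors, then measure how many tiers of "model of a model" the system contains:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch only: each internal regulatory loop may optionally be a
# model of a lower-order loop. The depth of that nesting is the "order" of
# self-reference referred to in the text.

@dataclass
class Loop:
    name: str
    models: Optional["Loop"] = None  # the lower-order loop this loop monitors

def order(loop: Loop) -> int:
    """Depth of self-reference: 1 = regulates the world directly,
    2 = regulates a model of that regulation, and so on."""
    return 1 if loop.models is None else 1 + order(loop.models)

def classify(loop: Loop) -> str:
    """Map loop order onto the (hypothetical) interpretation in the text."""
    depth = order(loop)
    if depth == 1:
        return "first-order homeostasis"
    if depth == 2:
        return "candidate pain/pleasure signal"
    return "candidate thought-like dynamic"

temperature = Loop("temperature regulation")
discomfort = Loop("monitors the temperature loop", models=temperature)
rumination = Loop("monitors the discomfort signal", models=discomfort)

for lp in (temperature, discomfort, rumination):
    print(f"{lp.name}: {classify(lp)}")
```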
Consciousness isn't an emergent property of computation. Instead, physical systems like brains—or potentially AI—act as interfaces. Creating a conscious AI isn't about birthing a new awareness from silicon, but about engineering a system that opens a new "portal" into the fundamental network of conscious agents that already exists outside spacetime.
The simulated space-time and its physical laws are not arbitrary; they are essential constraints. These rules create the context required for consciousness to explore its possibilities and for subjective experiences (qualia) to become meaningful. Without limitations, there is no context for feeling.
Physicists are finding structures beyond spacetime (e.g., amplituhedra) defined by permutations. Hoffman's theory posits these structures are the statistical, long-term behavior of a vast network of conscious agents. Physics and consciousness research are unknowingly meeting in the middle, describing the same underlying reality from opposite directions.
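The phrase "statistical, long-term behavior" can be made concrete with the stationary distribution of a Markov chain: the probabilities the dynamics settle into regardless of where they start. The sketch below uses an invented 3x3 transition matrix to stand in for a tiny network of agent interactions; the numbers are assumptions for illustration, not taken from Hoffman's papers.

```python
import numpy as np

# Hypothetical sketch: the "long-term statistical behavior" of a Markov chain,
# computed as its stationary distribution. The matrix P is made up; it stands
# in for the interaction dynamics of a tiny network of agents.
P = np.array([
    [0.1, 0.6, 0.3],
    [0.5, 0.2, 0.3],
    [0.4, 0.4, 0.2],
])

# The stationary distribution pi satisfies pi @ P = pi, i.e. it is the left
# eigenvector of P with eigenvalue 1, normalized so its entries sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

print("Stationary distribution:", np.round(pi, 3))
```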
Hoffman's model proposes that consciousness is not a product of the physical brain within space-time. Instead, consciousness is the fundamental building block of all existence, and space-time itself is an emergent phenomenon—a "headset" or user interface—that is created by and within consciousness.