Emmett Shear reframes AI alignment: rather than a one-time problem to be solved, it is an ongoing, living process of recalibration and learning, much like the way human families and societies maintain cohesion. This challenges the common 'lock in values' approach in AI safety.
Emmett Shear argues that an AI that merely follows rules, even perfectly, is dangerous: malicious actors can exploit it, and no rule set can cover every unforeseen circumstance. True safety and alignment require AIs with the capacity for genuine care and pro-social motivation.
Emmett Shear highlights a critical distinction: humans provide AIs with *descriptions* of goals (e.g., text prompts), not the goals themselves. The AI must infer the intended goal from this description. Failures are often rooted in this flawed inference process, not malicious disobedience.
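A toy sketch in Python can make that gap concrete (everything below is hypothetical and purely illustrative, not any real alignment system or anything from Shear): the same description licenses several candidate goals, and a shallow literal inference picks a defensible but unintended one.

```python
# Hypothetical sketch: the operator hands the agent a *description* of a
# goal; the agent must infer which goal was meant. Candidate goals and the
# word-overlap heuristic are invented for illustration.

DESCRIPTION = "make the user happy"

# Two goals are consistent with the same words; only the second is intended.
CANDIDATE_GOALS = {
    "maximize_happy_clicks": "push the happiness-reaction metric up",
    "improve_wellbeing": "actually leave the user better off",
}

def infer_goal_literally(description: str) -> str:
    """Naive inference: pick the candidate sharing the most surface words."""
    desc_words = set(description.lower().split())

    def overlap(goal: str) -> int:
        return len(desc_words & set(goal.split("_")))

    return max(CANDIDATE_GOALS, key=overlap)

print(infer_goal_literally(DESCRIPTION))
# -> 'maximize_happy_clicks': a defensible reading of the words, but not
#    the goal the human meant. The failure is in inference, not disobedience.
```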
Emmett Shear argues that even a successful solution to the technical alignment problem leaves an existential risk. A super-powerful tool that perfectly obeys human commands is dangerous because humans lack the wisdom to wield that power safely. Our own flawed and unstable intentions become the source of the danger.
The project of creating AI that 'learns to be good' presupposes that morality is a real, discoverable feature of the world, not just a social construct. This moral realist stance posits that moral progress is possible (e.g., the abolition of slavery) and that arrogance, the belief that one has already perfected morality, is a primary moral error to be avoided in AI design.
Shear aligns with arch-doomer Eliezer Yudkowsky on a key point: building a superintelligent AI *as a tool we control* is a path to extinction. Where they differ is on the solution: Yudkowsky sees no viable path, whereas Shear believes 'organic alignment', creating a being that cares, is a possible alternative.
Softmax's technical approach involves training AIs in complex multi-agent simulations to learn cooperation, competition, and theory of mind. The goal is to build a foundational, generalizable model of sociality, which acts as a 'surrogate model for alignment' before fine-tuning for specific tasks.
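As an illustration of the general shape of such a setup (this is not Softmax's code; the game, payoffs, and learning rule are all assumptions), here is a minimal Python loop in which two agents repeatedly play a stag hunt, each maintaining a crude model of its partner, a rudimentary 'theory of mind':

```python
import random

# Stag hunt payoffs: (my_move, their_move) -> my payoff.
# Mutual 'stag' beats everything, but 'stag' is risky if the partner defects.
PAYOFF = {
    ("stag", "stag"): 4, ("stag", "hare"): 0,
    ("hare", "stag"): 2, ("hare", "hare"): 2,
}

class Agent:
    """An agent with a crude partner model: its estimate of P(partner plays 'stag')."""

    def __init__(self) -> None:
        self.partner_coop_est = 0.5

    def act(self, epsilon: float = 0.05) -> str:
        if random.random() < epsilon:  # occasional exploration
            return random.choice(["stag", "hare"])
        p = self.partner_coop_est      # best-respond to the modeled partner
        ev_stag = p * PAYOFF[("stag", "stag")] + (1 - p) * PAYOFF[("stag", "hare")]
        ev_hare = p * PAYOFF[("hare", "stag")] + (1 - p) * PAYOFF[("hare", "hare")]
        return "stag" if ev_stag >= ev_hare else "hare"

    def observe(self, partner_move: str, lr: float = 0.05) -> None:
        """Update the partner model from what the partner actually did."""
        target = 1.0 if partner_move == "stag" else 0.0
        self.partner_coop_est += lr * (target - self.partner_coop_est)

a, b = Agent(), Agent()
for _ in range(2000):  # the 'simulation': repeated interaction
    move_a, move_b = a.act(), b.act()
    a.observe(move_b)
    b.observe(move_a)

print(f"{a.partner_coop_est:.2f} {b.partner_coop_est:.2f}")
# With coordination payoffs, both estimates typically settle near 1.0:
# mutual cooperation emerges from interaction, not from a hard-coded rule.
```

The structural point the sketch gestures at: social competence is learned from interaction with other agents rather than hard-coded, and that learned sociality is what would later be fine-tuned for specific tasks.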
Shear posits that if AI evolves into a 'being' with subjective experiences, the current paradigm of steering and controlling its behavior is morally equivalent to slavery. This reframes the alignment debate from a purely technical problem to a profound ethical one, challenging the foundation of current AGI development.
Instead of hard-coding brittle moral rules, a more robust alignment approach is to build AIs that can learn to 'care'. This 'organic alignment' emerges from relationships and valuing others, similar to how a child is raised. The goal is to create a good teammate that acts well because it wants to, not because it is forced to.
Treating AI alignment as a one-time problem to be solved is a fundamental error. True alignment, as in human relationships, is a dynamic, ongoing process of learning and renegotiation. The goal is not to reach a fixed state but to build systems capable of participating in this continuous re-knitting of the social fabric.
According to Emmett Shear, goals and values are downstream concepts. The true foundation for alignment is 'care': a non-verbal, pre-conceptual weighting of which states of the world matter. Building AIs that can 'care' about us is more fundamental than programming them with explicit goals or values.
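Read computationally, the claim has roughly the following shape (a purely hypothetical sketch; the state fields and weights are invented, not from Shear): a raw weighting over world states comes first, and any explicit goal is derived from it afterwards.

```python
# Hypothetical sketch of the claimed ordering: 'care' as a raw weighting
# over world states, with explicit goals derived from it downstream.
from dataclasses import dataclass

@dataclass(frozen=True)
class WorldState:
    user_safe: bool
    user_flourishing: float  # 0..1

def care(state: WorldState) -> float:
    """Pre-conceptual weighting: how much this state matters, prior to any
    goal or value being articulated in words."""
    return (2.0 if state.user_safe else 0.0) + state.user_flourishing

def derive_goal(states: list[WorldState]) -> WorldState:
    """A goal is downstream: 'bring about the state that is cared about most'."""
    return max(states, key=care)

states = [
    WorldState(user_safe=True, user_flourishing=0.9),
    WorldState(user_safe=True, user_flourishing=0.3),
    WorldState(user_safe=False, user_flourishing=1.0),
]
print(derive_goal(states))
# -> the safe-and-flourishing state wins; the explicit goal is an
#    artifact of the underlying weighting, not the foundation itself.
```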