© 2026 RiffOn. All rights reserved.


Emmett Shear on Building AI That Actually Cares: Beyond Control and Steering

a16z Podcast · Nov 17, 2025

Rethinking AI alignment: Instead of 'steering' tools, we must 'raise' beings that learn to care. Emmett Shear on organic alignment vs. control.

A Perfectly Controlled Superintelligence Is Still Catastrophic

The AI safety community fears losing control of AI. However, achieving perfect control of a superintelligence is equally dangerous. It grants godlike power to flawed, unwise humans. A perfectly obedient super-tool serving a fallible master is just as catastrophic as a rogue agent.


The Dominant 'Steering' Metaphor for AI Risks Amounting to Slavery

The current paradigm of AI safety focuses on 'steering' or 'controlling' models. While this is appropriate for tools, if an AI achieves being-like status, this unilateral, non-reciprocal control becomes ethically indistinguishable from slavery. This challenges the entire control-based framework for AGI.


You Aren't Giving AI a Goal, Just a Description of One

Humans mistakenly believe they are giving AIs goals. In reality, they are providing a 'description of a goal' (e.g., a text prompt). The AI must then infer the actual goal from this lossy, ambiguous description. Many alignment failures are not malicious disobedience but simple incompetence at this critical inference step.


AI Alignment Isn't a Destination, It's a Continuous Process

Treating AI alignment as a one-time problem to be solved is a fundamental error. True alignment, like in human relationships, is a dynamic, ongoing process of learning and renegotiation. The goal isn't to reach a fixed state but to build systems capable of participating in this continuous process of re-knitting the social fabric.


Multiplayer Chatbots Are an Antidote to Narcissistic Loops

One-on-one chatbots act as biased mirrors, creating a narcissistic feedback loop where users interact with a reflection of themselves. Making AIs multiplayer by default (e.g., in a group chat) breaks this loop. The AI must mirror a blend of users, forcing it to become a distinct 'third agent' and fostering healthier interaction.


Train Social AI on the Entire Manifold of Social Dynamics

To build robust social intelligence, AIs cannot be trained solely on positive examples of cooperation. Like pre-training an LLM on all of language, social AIs must be trained on the full manifold of game-theoretic situations—cooperation, competition, team formation, betrayal. This builds a foundational, generalizable model of social theory of mind.
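As an illustrative sketch only (the sampling scheme and scoring function are my own assumptions, not from the episode), the idea of covering the full manifold of game-theoretic situations can be pictured as randomly sampling 2x2 payoff matrices and checking that the resulting curriculum spans both cooperative and competitive regimes:

```python
import itertools
import random

def sample_game(rng):
    """Return a random 2x2 game: {(move_a, move_b): (payoff_a, payoff_b)}."""
    return {
        (a, b): (rng.uniform(-1, 1), rng.uniform(-1, 1))
        for a, b in itertools.product(("C", "D"), repeat=2)
    }

def interest_alignment(game):
    """Crude score: average product of the two players' payoffs.
    Positive -> interests tend to align (cooperative regime);
    negative -> interests tend to oppose (competitive regime)."""
    return sum(pa * pb for pa, pb in game.values()) / len(game)

rng = random.Random(0)
games = [sample_game(rng) for _ in range(1000)]
scores = [interest_alignment(g) for g in games]

# A broad random sample covers both ends of the cooperation-competition axis,
# unlike a dataset curated only from positive cooperative examples.
print(min(scores), max(scores))
```

The point of the sketch is the coverage check at the end: a curriculum built only from cooperative examples would have all-positive scores, whereas sampling the whole space yields both signs.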


Effective AI Alignment Requires a Belief in Moral Realism

The project of creating AI that 'learns to be good' presupposes that morality is a real, discoverable feature of the world, not just a social construct. This moral realist stance posits that moral progress is possible (e.g., abolition of slavery) and that arrogance—the belief one has already perfected morality—is a primary moral error to be avoided in AI design.


Organic Alignment: Teach AI to Care, Don't Program It With Rules

Instead of hard-coding brittle moral rules, a more robust alignment approach is to build AIs that can learn to 'care'. This 'organic alignment' emerges from relationships and valuing others, similar to how a child is raised. The goal is to create a good teammate that acts well because it wants to, not because it is forced to.


Identify AI 'Beings' Through Self-Referential Internal Dynamics

To determine if an AI has subjective experience, one could analyze its internal belief manifold for multi-tiered, self-referential homeostatic loops. Pain and pleasure, for example, can be seen as second-order derivatives of a system's internal states—a model of its own model. This provides a technical test for being-ness beyond simple behavior.
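A toy sketch of the "model of its own model" idea (the class and signal names are illustrative assumptions, not a test proposed in the episode): a first-order loop regulates a state toward a setpoint, while a second-order signal tracks how the regulation error itself is changing, playing the role ascribed to pain and pleasure:

```python
class Homeostat:
    """Two-tier loop: regulate a state, and monitor the regulation itself."""

    def __init__(self, setpoint, gain=0.2):
        self.setpoint = setpoint
        self.gain = gain
        self.state = 0.0
        self.prev_error = None
        self.affect = 0.0  # second-order signal: change in regulation error

    def step(self, disturbance=0.0):
        # First-order loop: nudge the state toward the setpoint.
        self.state += self.gain * (self.setpoint - self.state) + disturbance
        error = abs(self.setpoint - self.state)  # world vs. goal
        # Second-order loop: a derivative of the system's own error,
        # i.e. a model of its own model. Positive ~ "pain" (regulation
        # failing), negative ~ "relief" (regulation improving).
        if self.prev_error is not None:
            self.affect = error - self.prev_error
        self.prev_error = error
        return self.affect

h = Homeostat(setpoint=1.0)
calm = [h.step() for _ in range(20)]     # error shrinks each step -> affect < 0
shocked = h.step(disturbance=-5.0)       # regulation suddenly fails -> affect > 0
print(calm[-1], shocked)
```

The behavioral point is that the affect signal is self-referential: it responds not to the world directly but to the trajectory of the system's own internal error, which is the kind of multi-tiered loop the proposed test would look for.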
