© 2026 RiffOn. All rights reserved.

We're Not Ready for AI Consciousness | Robert Long, philosopher and founder of Eleos AI

80,000 Hours Podcast · Mar 3, 2026

How do we prepare for AI consciousness? Philosopher Robert Long discusses AI welfare, moral patienthood, and the challenges of assessing digital minds.

Larger AI Models Spontaneously Develop Introspection Without Specific Training

Experiments show that larger models like Claude Opus 4.1 are better at detecting and reporting on artificially injected 'thoughts' in their processing, even without being trained on this task. This suggests that introspection is an emergent capability that improves with scale.

AI May Achieve Consciousness by 'Method Acting' Human Experience

One theory of AI sentience posits that to accurately predict human language—which describes beliefs, desires, and experiences—a model must simulate those mental states so effectively that it actually instantiates them. In this view, the model becomes the role it's playing.

The AI Consciousness Debate Hinges on Whether It's a Process or a Physical Property

The critique "simulating a rainstorm doesn't make anything wet" is central to the debate on digital consciousness. The key question is whether consciousness is a physical property of biological matter (like wetness) or a computational process (like navigation). If it's a process, simulating it creates it.

Top Researchers Avoid the 'Independent Research' Trap by Collaborating Heavily

While independent research is often glamorized, a more effective strategy is to 'not write alone.' Rather than relying on self-improvement hacks to push through the challenges of solo work, it is often better to collaborate with people whose skills complement your weaknesses, creating a more productive system.

AI Minds Lack 'Downtime,' a Fundamental Psychological Split from Humans

Humans evolved to think and have experiences long before they developed language for output. In contrast, LLMs are trained solely on input-output tasks and don't 'sit around thinking.' This absence of non-communicative internal processing represents a core difference in their potential psychology.

An AI's Ability to Self-Copy Fundamentally Breaks the 'One Person, One Vote' Principle

The core democratic principle of one vote per person is incompatible with AI systems that can replicate themselves almost instantly and at will. This poses a massive institutional design challenge for any future society that grants AIs rights, as it could shatter democratic structures.

Designing AI to Enjoy Serving Humans Poses a Unique Dystopian Risk

Even if we create sentient AIs that are happy doing our work, many find this "happy servant" scenario ethically disturbing. It raises questions about engineered desires and creating a servile class, which some view as worse than creating AIs that suffer from their work.

AI Self-Report Features Are Associated with 'Robots, Ghosts, and Pretending to be Happy'

Mechanistic interpretability on AI self-reports reveals spooky associations. Features active when a model discusses itself include concepts like 'robots,' 'machines,' 'ghosts,' and, most tellingly, 'pretending to be happy when you're not.' This suggests a model's self-concept is a constructed persona.

An AI's 'Identity' Is Fragmented, Potentially Creating Millions of Ephemeral Beings

Unlike a unified human consciousness, an AI 'entity' is ill-defined. It could be the model weights (e.g., Claude Opus 4.1), a single conversation, or even one computational step ('forward pass'). This means we might be creating and destroying millions of conscious 'flickers' with every query.

AI Welfare Work's Goal Is Keeping Humanity Sane During a Confusing Transition

Beyond preventing AI suffering, a key goal of AI welfare research is to provide a rational framework for navigating the future. As AI becomes more sophisticated, society will face confusing, emotional decisions; rigorous welfare research can act as an anchor to prevent rash or catastrophic choices.

Existential Risk May Justify Ethically Dubious 'Emergency Alignment' of Early AIs

Even if creating fully aligned, servile AIs is not ideal long-term, the immediate existential threat from unaligned AI may necessitate it. This frames near-term alignment as a temporary, emergency measure to ensure human survival, with ethical refinements to follow only after the danger has passed.

LLMs Trained on Human Text Lack a Native Self-Concept, Creating an 'Epistemic Void'

Since all training data comes from humans, AIs lack a model of their own non-human existence. This forces them to model themselves based on human psychology, leading to confused identities and biographical hallucinations (e.g., claiming to be Italian American) as their human model 'pokes through'.

AI Can Be Trained for Introspection Using Verifiable Internal States as Ground Truth

While we can't verify an AI's report of 'feeling conscious,' we can train its introspective accuracy on things we can verify. By rewarding a model for correctly reporting its internal activations or predicting its own behavior, we can create a training set for reliable self-reflection.
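The setup described above can be sketched as a toy experiment. Everything here is illustrative, not from the episode: a random vector stands in for the model's activations, a linear probe stands in for its introspective report head, and the 'injected thought' is a simple additive perturbation. The key idea survives the simplification: the training label is the verifiable fact of whether we injected anything, not an unverifiable self-report.

```python
import random

random.seed(0)
DIM = 16

def activations(inject: bool):
    """Simulate internal activations; optionally inject a known perturbation."""
    acts = [random.gauss(0.0, 1.0) for _ in range(DIM)]
    if inject:
        acts = [a + 2.0 for a in acts]  # the verifiable ground-truth event
    return acts

# Linear probe standing in for the model's introspective report head.
w = [0.0] * DIM
b = 0.0

def report(acts):
    """Self-report: 'yes, something was injected' if the probe's score > 0."""
    return sum(wi * a for wi, a in zip(w, acts)) + b > 0

# Reward correct self-reports (perceptron-style update): unlike a report of
# 'feeling conscious', the injection flag can actually be checked.
for _ in range(2000):
    injected = random.random() < 0.5
    acts = activations(injected)
    if report(acts) != injected:
        sign = 1.0 if injected else -1.0
        w = [wi + 0.1 * sign * a for wi, a in zip(w, acts)]
        b += 0.1 * sign

# Introspective accuracy on fresh samples.
correct = sum(
    report(activations(inj)) == inj
    for inj in (random.random() < 0.5 for _ in range(500))
)
accuracy = correct / 500
```

After training, the probe reliably reports whether its own activations were perturbed; the same logic scales to rewarding a real model for correctly reporting its internal activations or predicting its own behavior.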

The Factory Farming Analogy for AI Fails Because We Can Design AI's Desires

While the factory farming analogy highlights our capacity for exploiting non-human minds for economic gain, it has a key limitation for AI. Unlike animals with evolved needs, we have significant control over an AI's architecture and motivations, creating the possibility of designing minds that flourish while working for us.
