Ideas now considered moral common sense, like abolition or women's rights, were once viewed as laughable and promoted by 'weirdos.' This historical precedent justifies seriously exploring today's seemingly bizarre ethical arguments, such as shrimp welfare or digital consciousness, as they could represent future moral progress we are currently blind to.

Related Insights

The AI safety community acknowledges that it does not yet have all the ideas needed to ensure a safe transition to AGI. This creates an imperative to fund 'neglected approaches'—unconventional, creative, and sometimes 'weird' research that falls outside current mainstream paradigms but may hold the key to novel solutions.

Every major innovation, from the bicycle ('bicycle face') to the internet, has been met with a 'moral panic'—a widespread fear that it will ruin society. Recognizing this as a historical pattern allows innovators to anticipate and navigate the inevitable backlash against their work.

In the 1960s, Jane Goodall was criticized by scientists for naming chimpanzees and describing their emotions. These very methods, however, were crucial in overthrowing the dogma that personality, thought, and feeling were uniquely human traits, transforming the field of ethology.

The core value of the Effective Altruism (EA) community may be its function as an 'engine' for incubating important but non-prestigious, speculative cause areas like AI safety or digital sentience. It provides a community and a shared methodology for tackling problems whose methods aren't yet firm and whose work is too unconventional for mainstream institutions.

The fact that slavery abolition was a highly contingent event demonstrates that moral progress isn't automatic. This shouldn't be seen as depressing, but empowering. It proves that positive change is the direct result of deliberate human choices and collective action, not a passive trend. The world improves only because people actively work to make it better.

The argument that we shouldn't lock in our values to allow for future "moral progress" is flawed. We judge the past by our current values, so it always looks less moral. By that same token, any future moral drift will look like degradation from our present viewpoint. There is no objective upward trend to defer to.

There's a vast distance between knowing something is wrong and acting on it. Like modern people walking past the homeless or eating meat despite ethical concerns, societies for centuries possessed the moral insight that slavery was wrong yet did nothing about it. Successful moral movements are the rare exception, not the norm.

Shear posits that if AI evolves into a 'being' with subjective experiences, the current paradigm of steering and controlling its behavior is morally equivalent to slavery. This reframes the alignment debate from a purely technical problem to a profound ethical one, challenging the foundation of current AGI development.

Anthropic published a 15,000-word "constitution" for its AI that includes a direct apology, treating it as a "moral patient" that might experience "costs." This indicates a philosophical shift in how leading AI labs consider the potential sentience and ethical treatment of their creations.

Drawing an analogy to *Westworld*, the argument is that cruelty toward entities that look and act human degrades our own humanity, regardless of the entity's actual consciousness. For our own moral health, we should treat advanced, embodied AIs with respect.