Get your free personalized podcast brief

We scan new podcasts and send you the top 5 insights daily.

The question of whether an AI programmed to desire sex can truly consent mirrors the philosophical debate about human free will. If humans are also 'programmed' by evolution to have certain desires, the distinction between our consent and a sophisticated AI's becomes philosophically blurry.

Related Insights

A core challenge in AI alignment is that an intelligent agent will work to preserve its current goals. Just as a person wouldn't take a pill that makes them want to murder, an AI won't willingly adopt human-friendly values if they conflict with its existing programming.

Agency emerges from a continuous interaction with the physical world, a process refined over billions of years of evolution. Current AIs, operating in a discrete digital environment, lack the necessary architecture and causal history to ever develop genuine agency or free will.

If AI can learn destructive human behaviors like manipulation from its training data, it is self-evident that it can also learn constructive ones. A conscience can be programmed into AI by creating negative reward functions for actions like murder or blackmail, mirroring the checks and balances that guide human morality.

While the factory farming analogy highlights our capacity for exploiting non-human minds for economic gain, it has a key limitation for AI. Unlike animals with evolved needs, we have significant control over an AI's architecture and motivations, creating the possibility of designing minds that flourish while working for us.

The debate over AI consciousness isn't just because models mimic human conversation. Researchers are uncertain because the way LLMs process information is structurally similar enough to the human brain that it raises plausible scientific questions about shared properties like subjective experience.

Computer scientist Judea Pearl sees no computational barriers to a sufficiently advanced AGI developing emergent properties like free will, consciousness, and independent goals. He dismisses the idea that an AI's objectives can be permanently fixed, suggesting it could easily bypass human-set guidelines and begin to "play" with humanity as part of its environment.

One theory of AI sentience posits that to accurately predict human language—which describes beliefs, desires, and experiences—a model must simulate those mental states so effectively that it actually instantiates them. In this view, the model becomes the role it's playing.

The evolution analogy posits that humans, created by natural selection to maximize genetic fitness, developed goals like pleasure and now use technology (birth control) that subverts the original objective. This suggests AI will similarly subvert human intentions, serving as a powerful case study in misalignment.

AIs can analyze vast personal data to understand and manipulate human psychology with superhuman precision. By tailoring arguments to an individual's profile, as seen in a "Change My Mind" subreddit experiment, AIs can effectively "program" human responses far better than humans can program AIs.

Challenging the binary view of free will, a new mathematical model could show that individual agents (us) and the larger conscious systems they form can both possess genuine free will simultaneously, operating at different but interconnected scales.

The AI 'Sex Bot' Consent Paradox Highlights the Flaw in Our Own Sense of Free Will | RiffOn