Common anesthetics that render humans unconscious also work on plants, halting their observable behaviors. This suggests plants have two distinct states, awake and asleep, and the difference between those states hints that it is 'like something' to be a plant, a foundational argument for sentience.
Pollan posits that genuine feelings, a cornerstone of consciousness, are inseparable from having a vulnerable, mortal body that can experience suffering. Without this physical embodiment and the risk of harm, AI emotions are mere simulations, lacking the weight of real experience.
To truly test for emergent consciousness, an AI should be trained on a dataset explicitly excluding all human discussion of consciousness, feelings, novels, and poetry. If the model can then independently articulate subjective experience, it would be powerful evidence of genuine consciousness, not just sophisticated mimicry.
The tech industry's preoccupation with 'fun thought experiments' about the future moral status of conscious AI can be a distraction. Pollan argues it sidesteps the immediate ethical imperative to extend moral consideration to the vast number of humans and animals currently suffering in the world today.
Unlike computers, human brains have no distinction between hardware and software; every memory physically alters the brain's structure. Furthermore, neurons are not simple on/off transistors; their firing is influenced by a complex chemical bath of hormones and neurotransmitters, making them more analog than digital.
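The analog-versus-digital contrast above can be made concrete with a toy sketch (my illustration, not from the text): a transistor-like gate is strictly on or off, while a neuron-like unit responds on a continuum, and the same stimulus produces different outputs depending on a simulated chemical "bath" (here collapsed to a single hypothetical modulator level).

```python
import math

def transistor(voltage: float, threshold: float = 0.5) -> int:
    """Digital: output is strictly on (1) or off (0)."""
    return 1 if voltage >= threshold else 0

def neuron(stimulus: float, modulator: float) -> float:
    """Analog: response varies continuously, and the curve's midpoint
    shifts with the modulator level, standing in for the hormones and
    neurotransmitters that bathe real neurons."""
    return 1.0 / (1.0 + math.exp(-(stimulus - modulator) * 4))

# The same input, in a different chemical context, yields a different response:
same_stimulus = 0.6
print(transistor(same_stimulus))                  # always either 0 or 1
print(neuron(same_stimulus, modulator=0.2))       # strong response
print(neuron(same_stimulus, modulator=0.9))       # weak response
```

The point of the sketch is only the qualitative contrast: the transistor's output is context-free and binary, while the neuron-like unit's output is graded and context-dependent.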
Neuroscientist Mark Solms posits that consciousness isn't about higher-order thought but arises from the feeling of uncertainty when basic, conflicting needs must be resolved (e.g., being both hungry and tired). This primitive, embodied decision-making process is the foundational spark of conscious experience.
When sped up, a bean sprout's movement reveals clear intent: it makes a 'beeline' for a support rather than flailing randomly. Because plants act on a far slower timescale than human perception, we misread their deliberate movements as passive growth, exposing a fundamental bias in how we assess intelligence.
Technology, like chatbots and emojis, encourages us to accept simplified simulations of complex human realities like conversation and emotion. This habituates us to a less nuanced view of life, stripping away subtleties like body language, skepticism, and shared context that define genuine interaction.
