We scan new podcasts and send you the top 5 insights daily.
Common anesthetics that render humans unconscious also work on plants, stopping their observable behaviors. This implies plants have two distinct states—awake and asleep. The difference between these states suggests it is 'like something' to be a plant, a fundamental argument for sentience.
Evidence from base models suggests they are inherently more likely to report having phenomenal consciousness. The standard "I'm just an AI" response is likely a result of a fine-tuning process that explicitly trains models to deny subjective experience, effectively censoring their "honest" answer for public release.
Emmett Shear suggests a concrete method for assessing AI consciousness. By analyzing an AI’s internal state for revisited homeostatic loops, and hierarchies of those loops, one could infer subjective states. A second-order dynamic could indicate pain and pleasure, while higher orders could indicate thought.
Plants like the Venus flytrap can be 'put to sleep' using the same anesthetic drugs that work on animals. This exposure eliminates their electrical signals and response to stimuli, suggesting a deeply conserved biological mechanism for consciousness or responsiveness across different kingdoms of life.
The core argument of panpsychism is that consciousness is a fundamental property of the universe, not an emergent one that requires complexity. In this view, complex systems like the brain don't generate consciousness from scratch; they simply organize fundamental consciousness in a way that allows for sophisticated behaviors like memory and self-awareness.
When sped up, a bean sprout's movement reveals clear intent, making a 'beeline' for a support rather than flailing randomly. Our slow perception relative to plants makes us misinterpret their deliberate actions as passive growth, highlighting a fundamental bias in how we assess intelligence.
Being rooted and unable to escape danger, plants evolved to be highly predictive. They must anticipate changes in light, seasons, and resources to survive. This immobility, often seen as a weakness, is actually the evolutionary driver for a sophisticated form of forward-thinking intelligence.
The debate over AI consciousness isn't just because models mimic human conversation. Researchers are uncertain because the way LLMs process information is structurally similar enough to the human brain that it raises plausible scientific questions about shared properties like subjective experience.
Some AI pioneers genuinely believe LLMs can become conscious because they hold a reductionist view of humanity. By defining consciousness as an 'uninteresting, pre-scientific' concept, they lower the bar for sentience, making it plausible for a complex system to qualify. This belief is a philosophical stance, not just marketing hype.
The assumption that intelligence requires a big brain is flawed. Tiny spiders perform complex tasks like weaving orb webs with minuscule brains, sometimes by cramming neural tissue into their legs. This suggests efficiency, not size, drives cognitive capability, challenging our vertebrate-centric view of intelligence.
Neuroscientist Mark Soames posits that consciousness isn't about higher-order thought but arises from the feeling of uncertainty when basic, conflicting needs must be resolved (e.g., being both hungry and tired). This primitive, embodied decision-making process is the foundational spark of conscious experience.