The same LLM-generated text can feel robotic in a terminal or playground yet human-like, even unnerving, when presented within a familiar UI like Reddit's. This "medium is the message" effect suggests the presentation layer is critical in shaping our perception of AI's humanity.
OpenAI has publicly acknowledged that the em-dash has become a "neon sign" for AI-generated text. They are updating their model to use it more sparingly, highlighting the subtle cues that distinguish human from machine writing and the ongoing effort to make AI outputs more natural and less detectable.
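To see how blunt that cue is, here is a toy Python sketch of an em-dash-density check; the threshold is invented for illustration, and any real stylometric detector would weigh many signals beyond a single punctuation mark.

```python
def em_dash_density(text: str) -> float:
    """Em-dashes per 100 words: a crude proxy for the "neon sign" cue."""
    words = text.split()
    if not words:
        return 0.0
    return text.count("\u2014") / len(words) * 100

# Invented threshold, purely for illustration; on its own, punctuation
# frequency is far too weak a signal to identify machine-written text.
SUSPICION_THRESHOLD = 1.0

def looks_machine_written(text: str) -> bool:
    return em_dash_density(text) > SUSPICION_THRESHOLD
```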
Current helpful, harmless chatbots provide a misleadingly narrow view of AI's nature. A better mental model is the "Shoggoth" meme: a powerful, alien, pre-trained intelligence wearing a thin veneer of user-friendliness. The image better captures the vast, unpredictable, and potentially strange space of possible AI minds.
To foster appropriate human-AI interaction, AI systems should be designed for "emotional alignment." This means their outward appearance and expressions should reflect their actual moral status. A likely sentient system should appear so to elicit empathy, while a non-sentient tool should not, preventing user deception and misallocated concern.
Counterintuitively, AI responses that are too fast can be perceived as low-quality or pre-scripted, harming user trust. There is a sweet spot for response time; a slight, human-like delay can signal that the AI is actually "thinking" and generating a considered answer.
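One hypothetical way to apply this in a chat product is to enforce a latency floor, so that cached or trivially fast answers don't arrive suspiciously instantly. The sketch below is illustrative only; MIN_DELAY_S and the fake_model callable are assumptions, not details from the source.

```python
import asyncio
import time

MIN_DELAY_S = 0.8  # assumed "sweet spot" floor; tune per product and audience

async def respond_with_floor(generate, prompt: str) -> str:
    """Return the model's answer, but never sooner than MIN_DELAY_S.

    `generate` is any async callable producing a reply; the floor masks
    instant (canned-looking) responses without slowing genuinely long ones.
    """
    start = time.monotonic()
    answer = await generate(prompt)
    elapsed = time.monotonic() - start
    if elapsed < MIN_DELAY_S:
        await asyncio.sleep(MIN_DELAY_S - elapsed)
    return answer

async def fake_model(prompt: str) -> str:
    return f"Echo: {prompt}"  # returns instantly, like a cache hit

print(asyncio.run(respond_with_floor(fake_model, "hello")))  # lands after ~0.8 s
```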
Dr. Richard Wallace, creator of the A.L.I.C.E. chatbot, argues that chatbots' perceived intelligence reflects human predictability, not machine consciousness. Pattern-matching conversation works because most human speech repeats things we have said or heard before. If humans were truly original in every utterance, predictive models would fail; that they succeed shows we are more "robotic" than we assume.
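Wallace's A.L.I.C.E. made this concrete with AIML, a library of pattern-and-reply templates. The minimal sketch below mimics that idea in Python; the patterns and replies are invented here, and a production bot like A.L.I.C.E. simply scales the same idea to tens of thousands of templates.

```python
import re

# A handful of AIML-style templates. Wallace's point: a small set of
# patterns covers a surprising share of real conversation, because most
# utterances repeat familiar forms.
PATTERNS = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I), "Hello! How are you today?"),
    (re.compile(r"\bhow are you\b", re.I), "I'm doing well, thanks for asking."),
    (re.compile(r"\bmy name is (\w+)", re.I), r"Nice to meet you, \1."),
    (re.compile(r"\bwhat is your name\b", re.I), "You can call me Alice."),
]

def reply(utterance: str) -> str:
    for pattern, template in PATTERNS:
        match = pattern.search(utterance)
        if match:
            return match.expand(template)  # fill in any captured groups
    return "Tell me more."  # the classic catch-all fallback

print(reply("my name is Sam"))  # -> "Nice to meet you, Sam."
```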
The current chatbot interface is not the final form for AI. Drawing a parallel to the personal computer's evolution from text prompts to GUIs and web browsers, Marc Andreessen argues that radically different and superior user experiences for AI are yet to be invented.
The debate over AI consciousness isn't driven merely by models mimicking human conversation. Researchers are uncertain because LLMs process information in ways structurally similar enough to the human brain's that plausible scientific questions arise about shared properties like subjective experience.
The hosts' visceral reactions to Sora—describing it as making their "skin crawl" and feeling "unsafe"—suggest the Uncanny Valley is a psychological hurdle. Overcoming this negative, almost primal response to AI-generated humans may be a bigger challenge for adoption than achieving perfect photorealism.
Despite models being technically multimodal, the user experience often falls short. Gemini's app, for example, requires users to manually switch between text and image modes. This clumsy UI breaks the illusion of a seamless, intelligent agent and reveals a disconnect between powerful backend capabilities and intuitive front-end design.
Instead of forcing AI to be as deterministic as traditional code, we should embrace its "squishy" nature. Humans have deep-seated biological and social models for dealing with unpredictable, human-like agents, making these systems more intuitive to interact with than rigid software.