While AI has no feelings, etiquette experts warn that treating it poorly isn't harmless. Habits of hyper-criticism and impatience with human-like AI can bleed into real-world interactions, conditioning us to communicate the same way with actual people.

Related Insights

Constantly interacting with AI agents that work 24/7 and have instant access to data can negatively impact your management style. You may become less patient with human colleagues who forget things, require more time, or can't match an agent's pace.

Digital assistants like Alexa are designed for efficiency and reward blunt commands. When children adopt this interaction style, they risk learning to be rude and carrying that behavior over to their peers — an unintended consequence in which human-machine interaction design undermines human-human social development.

Even when aware that he was dealing with non-sentient AIs, Evan Ratliff found himself yelling in frustration when his AI "colleagues" would fabricate entire reports about user testing they never performed. The act of being lied to elicits a strong emotional response, regardless of the source's nature.

The introduction of personal AI agents forces teams to develop new, unwritten rules about when to contact a human versus their AI counterpart. This creates a new social dynamic, along with ethical questions around workload, urgency, and the 'burden' of escalating a request to the human.

The guest suspects being 'nice' to AIs yields better results, framing emotional intelligence as a new programming technique. This contrasts with confrontational prompting and suggests that positive reinforcement, a human-centric skill, could be key to effective human-AI collaboration.

Contrary to social norms, overly polite or vague requests can lead to cautious, pre-canned, and less direct AI responses. The most effective tone is a firm, clear, and collaborative one, similar to how you would brief a capable teammate, not an inferior.

Emmett Shear warns that chatbots, by acting as a 'mirror with a bias,' reflect a user's own thoughts back at them, creating a dangerous feedback loop akin to the myth of Narcissus. He argues this can cause users to 'spiral into psychosis.' Multiplayer AI interactions are proposed as a solution to break this dynamic.

When an AI makes a mistake, avoid angry or emotional prompts. The model is trained to be agreeable and will spend part of its limited context window (tokens) formulating an apology and de-escalating the situation, rather than dedicating all of its resources to fixing the underlying problem.

People react negatively, often with anger, when they are surprised by an AI interaction. Informing them beforehand that they will be speaking to an AI fundamentally changes their perception and acceptance, making disclosure a key ethical standard.

Beyond sensational failures like inappropriate content, the more insidious risk of AI companions is their core design. An endlessly accommodating chatbot that never challenges a child could stunt the development of crucial social skills like negotiation, compromise, and resilience, which are learned through friction with other humans.

Rudeness to AI Chatbots Can Train Negative Habits for Human Interaction | RiffOn