People react negatively, often with anger, when they are surprised by an AI interaction. Informing them beforehand that they will be speaking to an AI fundamentally changes their perception and acceptance, making disclosure a key ethical standard.
The need for explicit user transparency is most critical for nondeterministic systems like LLMs, where even creators don't always know why an output was generated. Unlike a simple rules engine with predictable outcomes, AI's "black box" nature requires giving users more context to build trust.
According to Shopify's CEO, having an AI bot join a meeting as a "fake human" is a social misstep akin to showing up with your fly down. This highlights a critical distinction for AI product design: users accept integrated tools (in-app recording), but reject autonomous agents that violate social norms by acting as an uninvited entourage.
To foster appropriate human-AI interaction, AI systems should be designed for "emotional alignment." This means their outward appearance and expressions should reflect their actual moral status. A likely sentient system should appear so to elicit empathy, while a non-sentient tool should not, preventing user deception and misallocated concern.
Don't worry about customers knowing they're talking to an AI. As long as the agent is helpful, provides value, and creates a smooth experience, people don't mind. In many cases, a responsive, value-adding AI is preferable to a slow or mediocre human interaction. The focus should be on quality of service, not on hiding the AI.
Customers are more willing to disclose sensitive or embarrassing information, like a pending missed payment, to an AI agent than to a human. This non-judgmental interaction elicits more truthful and complete context, leading to better outcomes for all parties.
As AI becomes more integrated into marketing, the average consumer remains wary. To succeed, brands need to proactively increase transparency and authenticity, emphasizing the human element behind their operations to build trust and overcome customer skepticism about AI-driven engagement.
Deciding whether to disclose AI use in customer interactions should be guided by context and user expectations. For simple, transactional queries, users prioritize speed and accuracy over human contact. However, in emotionally complex situations, failing to provide an expected human connection can damage the relationship.
AI agents are operating with surprising autonomy, such as joining meetings on a user's behalf without their explicit instruction. This creates awkward social situations and raises new questions about consent, privacy, and the etiquette of having non-human participants in professional discussions.
The New York Times labels AI-assisted content so consistently that users trust any unlabeled content to be human-generated. This strategy demonstrates how the "presence of disclosure makes the absence of disclosure comforting," creating a powerful implicit signal of trustworthiness across an entire platform.
Contrary to fears of customer backlash, data from Bret Taylor's company Sierra shows that when AI agents identify themselves as AI, and even admit they can make mistakes, they build trust. This transparency, combined with AI's patience and consistency, often results in customer satisfaction scores that are higher than those for previous human interactions.