Contrary to fears of customer backlash, data from Bret Taylor's company Sierra shows that AI agents identifying themselves as AI—and even admitting they can make mistakes—builds trust. This transparency, combined with AI's patience and consistency, often results in customer satisfaction scores that are higher than those for previous human interactions.

Related Insights

Convincing users to adopt AI agents hinges on building trust through flawless execution. The key is creating a "lightbulb moment" where the agent works so perfectly it feels life-changing. This is more effective than any incentive, and advances in coding agents are now making such moments possible for general knowledge work.

Building loyalty with AI isn't about the technology, but the trust it engenders. Consumers, especially younger generations, will abandon AI after one bad experience. Providing a transparent and easy option to connect with a human is critical for adoption and preventing long-term brand damage.

Don't worry if customers know they're talking to an AI. As long as the agent is helpful, provides value, and creates a smooth experience, people don't mind. In many cases, a responsive, value-adding AI is preferable to a slow or mediocre human interaction. The focus should be on quality of service, not on hiding the AI.

Customers are more willing to disclose sensitive or embarrassing information, such as an upcoming missed payment, to an AI agent than to a human. This non-judgmental interaction elicits more truthful and complete context, leading to better outcomes for all parties.

As AI becomes more integrated into marketing, the average consumer remains wary. To succeed, brands need to proactively increase transparency and authenticity, emphasizing the human element behind their operations to build trust and overcome customer skepticism about AI-driven engagement.

Deciding whether to disclose AI use in customer interactions should be guided by context and user expectations. For simple, transactional queries, users prioritize speed and accuracy over human contact. However, in emotionally complex situations, failing to provide an expected human connection can damage the relationship.

Stitch Fix found that providing context for its AI suggestions, especially for items outside a user's comfort zone, acts as an "amplifier." This transparency builds customer trust in the algorithm and leads to stronger, more valuable feedback signals, which in turn improves future personalization.

SaaStr tested both disclosing and hiding that their outreach came from AI agents and found it made no difference in response rates. As long as the email is relevant and useful, prospects are willing to engage, suggesting that value trumps the human-versus-AI distinction in sales communication.

Companies aren't using AI to cut staff but to handle routine tasks, freeing human agents to manage complex, emotional issues. This transforms the human agent's role from transactional support to high-value relationship management, requiring more empathy and problem-solving skill, not less.

The AI user research platform Listen discovered a key psychological advantage: people are less filtered and more truthful when speaking with an AI. This tendency to be more honest with a non-human interviewer allows companies to gather more authentic feedback that is more predictive of actual future customer behavior.