Contrary to the hype around creative and unpredictable AI, enterprise clients prioritize reliability, control, and predictability. AI21 Labs' "Build Boring Agents" campaign leans into this need for solid, responsible AI, positioning "boring" as a desirable feature.
Convincing users to adopt AI agents hinges on building trust through flawless execution. The key is creating a "lightbulb moment" where the agent works so perfectly it feels life-changing. This is more effective than any incentive, and advances in coding agents are now making such moments possible for general knowledge work.
Customers are hesitant to trust a black-box AI with critical operations. The winning business model is to sell a complete outcome or service, using AI internally for a massive efficiency advantage while keeping humans in the loop for quality and trust.
For companies building AI agents, the leading indicator of a successful customer engagement is the availability of well-documented APIs. These APIs are what allow the agent to take action and look up data, directly enabling an elevated experience from day one.
Don't worry if customers know they're talking to an AI. As long as the agent is helpful, provides value, and creates a smooth experience, people don't mind. In many cases, a responsive, value-adding AI is preferable to a slow or mediocre human interaction. The focus should be on quality of service, not on hiding the AI.
For enterprise AI adoption, focus on pragmatism over novelty. Customers' primary concerns are trust and privacy (ensuring no IP leakage) and contextual relevance (the AI must understand their specific business and products), all delivered within their existing workflow.
Contrary to fears of customer backlash, data from Bret Taylor's company Sierra shows that AI agents identifying themselves as AI—and even admitting they can make mistakes—builds trust. This transparency, combined with AI's patience and consistency, often results in customer satisfaction scores that are higher than those for previous human interactions.
Current AI workflows are not fully autonomous and require significant human oversight, meaning immediate efficiency gains are limited. By framing these systems as "interns" that need to be "babysat" and trained, organizations can set realistic expectations and gradually build the user trust necessary for future autonomy.
Vendors fail to connect with SMBs on AI because their messaging is either too technical and intimidating or too aspirational and fluffy. SMB partners and customers want clarity, not hype. They need simple, concrete use cases demonstrating tangible business value like productivity gains or automation, not visions of futuristic robots.
A strong aversion to ChatGPT's overly complimentary and obsequious tone suggests a segment of users desires functional, neutral AI interaction. This highlights a need for customizable AI personas that cater to users who prefer a tool-like experience over a simulated, fawning personality.
AI21 Labs' CMO Sharon Argov suggests openly discussing AI's potential for mistakes. This shifts the conversation from the technology's flaws to how an organization can manage the "cost of error," turning a negative into a strategic discussion about risk management and trustworthiness.