AI21 Labs' CMO Sharon Argov suggests openly discussing AI's potential for mistakes. This shifts the conversation from the technology's flaws to how an organization can manage the 'cost of error,' turning a negative into a strategic discussion about risk management and trustworthiness.
When AI tools are deployed, especially in sales, users show no patience for mistakes. While a human making an error receives coaching and a second chance, an AI's single failure can cause users to abandon the tool permanently due to a complete loss of trust.
The primary problem for AI creators isn't convincing people to trust their product, but stopping them from trusting it too much in areas where it's not yet reliable. This "low trustworthiness, high trust" scenario is a danger zone that can lead to catastrophic failures. The strategic challenge is managing and containing trust, not just building it.
Leaders must resist the temptation to deploy the most powerful AI model simply for a competitive edge. The primary strategic question for any AI initiative should be defining the necessary level of trustworthiness for its specific task and establishing who is accountable if it fails, before deployment begins.
Instead of promising a flawless implementation, build trust by telling prospects where issues commonly arise and what your process is to mitigate them. Acknowledging potential bumps in the road shows you have experience and a realistic plan, making you a more credible partner than a salesperson who promises perfection.
Customers are hesitant to trust a black-box AI with critical operations. The winning business model is to sell a complete outcome or service, using AI internally for a massive efficiency advantage while keeping humans in the loop for quality and trust.
Currently, AI innovation is outpacing adoption, creating an 'adoption gap' where leaders fear committing to the wrong technology. The most valuable AI is the one people actually use. Therefore, the strategic imperative for brands is to build trust and reassure customers that their platform will seamlessly integrate the best AI, regardless of what comes next.
Moonshot AI overcomes customer skepticism about its AI recommendations by focusing on quantifiable outcomes. Instead of explaining the technology, they demonstrate value by showing clients the direct increase in revenue from the AI's optimizations. Tangible financial results become the ultimate trust-builder.
Implementing trust isn't a massive, year-long project. It's about developing a "muscle" for small, consistent actions like adding a badge, clarifying data retention, or citing sources. These low-cost, high-value changes can be integrated into regular product development cycles.
AI models tend to be overly optimistic. To get a balanced market analysis, explicitly instruct AI research tools like Perplexity to act as a "devil's advocate." This helps uncover risks, challenge assumptions, and make it easier for product managers to say "no" to weak ideas quickly.
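The "devil's advocate" instruction can be captured as a small, reusable prompt template. This is a minimal sketch; the wording and function name are illustrative, not quoted from the source:

```python
# Hypothetical prompt template for forcing a critical, risk-focused
# analysis from an AI research tool. Illustrative wording only.
def devils_advocate_prompt(idea: str) -> str:
    """Wrap a product idea in instructions that demand counterarguments."""
    return (
        "Act as a devil's advocate. Critically evaluate the following idea. "
        "List the strongest arguments against it, the riskiest assumptions "
        "it depends on, and the conditions under which it would fail:\n\n"
        f"{idea}"
    )

print(devils_advocate_prompt("Launch an AI-powered sales assistant."))
```

Pasting the resulting text into a research tool steers it away from its default optimism and toward the risks and assumptions a product manager needs to see.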
To persuade risk-averse leaders to approve unconventional AI initiatives, shift the focus from the potential upside to the tangible risks of standing still. Paint a clear picture of the competitive disadvantages and missed opportunities the company will face by failing to act.