When buying AI solutions, demand transparency from vendors about the specific models and prompts they use. Mollick argues that 'we use a prompt' is not a defensible 'secret sauce' and that this transparency is crucial for auditing results and ensuring you aren't paying for outdated or flawed technology.

Related Insights

The need for explicit user transparency is most critical for nondeterministic systems like LLMs, where even their creators don't always know why a given output was generated. Unlike a simple rules engine with predictable outcomes, an AI system behaves as a "black box," which requires giving users more context to build trust.

In an era of opaque AI models, traditional contractual lock-ins are failing. The new retention moat is trust, which requires radical transparency about data sources, AI methodologies, and performance limitations. Customers will not pay long-term for "black box" risks they cannot understand or mitigate.

To ensure AI labs don't provide specially optimized private endpoints for evaluation, the firm creates anonymous accounts to test the same public models everyone else uses. This "mystery shopper" policy maintains the integrity and independence of their results.

Companies can build authority and community by transparently sharing the specific third-party AI agents and tools they use for core operations. This "open source" approach to the operational stack serves as a high-value, practical playbook for others in the ecosystem, building trust.

The current trend toward closed, proprietary AI systems is a misguided and ultimately ineffective strategy. Ideas and talent circulate regardless of corporate walls. True, defensible innovation is fostered by openness and the rapid exchange of research, not by secrecy.

Many AI tools expose the model's reasoning before generating an answer. Reading this internal monologue is a powerful debugging technique. It reveals how the AI is interpreting your instructions, allowing you to quickly identify misunderstandings and improve the clarity of your prompts for better results.
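As a rough illustration of this technique, the sketch below asks a model to state its interpretation of the task before answering, then prints that section for inspection. It is a minimal, assumed workflow, not any vendor's official debugging feature: the model name, prompt wording, and use of the OpenAI Python client are illustrative choices, and tools that expose built-in reasoning traces may surface them differently.

```python
# Minimal sketch: elicit the model's interpretation of the instructions
# before the answer, then read it to spot misunderstandings.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

task = "Summarize the attached contract clause for a non-lawyer."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; use whatever model your tool exposes
    messages=[
        {
            "role": "system",
            "content": (
                "Before answering, write a section titled REASONING that restates "
                "how you interpret the user's instructions and what you plan to do. "
                "Then write a section titled ANSWER with the final result."
            ),
        },
        {"role": "user", "content": task},
    ],
)

text = response.choices[0].message.content
# Read the REASONING section first: if the interpretation is wrong,
# improve the prompt rather than editing the answer.
print(text)
```

If the REASONING section shows the model misreading a constraint, that mismatch points directly at the part of the prompt to clarify.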

To avoid a future where a few companies control AI and hold society hostage, the underlying intelligence layer must be commoditized. This prevents "landlords" of proprietary models from extracting rent and ensures broader access and competition.

To manage risks from 'shadow IT' or third-party AI tools, product managers must influence the procurement process. Embed accountability by contractually requiring vendors to answer specific questions about training data, success metrics, update cadence, and decommissioning plans.

The success of your AI tool depends heavily on the vendor's human experts. Don't get stuck with a sales rep who doesn't understand the product. Demand access to their solution architects and onboarding specialists *before* you sign, ensuring you have a capable partner to guide your implementation.

The goal for trustworthy AI isn't simply open-source code, but verifiability. This means having cryptographic proof, such as attestations from secure enclaves, that the code running on a server exactly matches the public, auditable code, ensuring no hidden manipulation.
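To make the verifiability idea concrete, here is a toy sketch, not a real enclave attestation flow: it compares a digest ("measurement") reported for the running code against the digest of the public, audited release. In practice the reported measurement comes from a hardware-signed attestation quote; here both values are produced locally purely for illustration, and all names are hypothetical.

```python
import hashlib
import tempfile
from pathlib import Path


def measure(path: Path) -> str:
    """SHA-256 digest of a code artifact (its 'measurement')."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def verify(reported_measurement: str, audited_artifact: Path) -> bool:
    """True only if the server's reported measurement matches the audited code."""
    return reported_measurement == measure(audited_artifact)


if __name__ == "__main__":
    # Stand-in for the public, audited release artifact.
    with tempfile.NamedTemporaryFile(delete=False, suffix=".bin") as f:
        f.write(b"audited model-server build")
        artifact = Path(f.name)

    # In a real system this value would be extracted from a verified,
    # hardware-signed attestation report produced inside the enclave.
    reported = measure(artifact)

    print(verify(reported, artifact))        # True: running code matches the audit
    print(verify("deadbeef" * 8, artifact))  # False: mismatch, do not trust
```

The design point is that trust rests on the comparison itself, not on the vendor's word: if the measurement of what is actually running diverges from the published code, verification fails regardless of any claims made in marketing or contracts.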