Mollick warns against the common first AI project: a Retrieval-Augmented Generation (RAG) chatbot for internal documents. These custom projects are expensive, and their functionality is often quickly surpassed by cheaper, more powerful off-the-shelf models, resulting in a poor return on investment.
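
For readers unfamiliar with the pattern, the sketch below shows roughly what such a RAG chatbot does: embed internal documents, retrieve the closest match to a question, and have a model answer from that context. The model names, the toy `company_docs` corpus, and the single-document retrieval are illustrative assumptions, not part of Mollick's argument.

```python
# Minimal sketch of the "internal documents" RAG chatbot pattern.
# Assumes the OpenAI Python SDK (>= 1.0) and an OPENAI_API_KEY in the environment;
# the model names and the toy corpus are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

company_docs = [
    "Expense reports must be filed within 30 days of purchase.",
    "Remote employees may claim a home-office stipend once per year.",
    "All vendor contracts over $10,000 require legal review.",
]

def embed(texts: list[str]) -> np.ndarray:
    """Return one embedding vector per input text."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in resp.data])

doc_vectors = embed(company_docs)

def answer(question: str) -> str:
    # Retrieve: rank documents by cosine similarity to the question.
    q_vec = embed([question])[0]
    scores = doc_vectors @ q_vec / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_vec)
    )
    context = company_docs[int(np.argmax(scores))]

    # Generate: ask the chat model to answer using only the retrieved context.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using only this context: {context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("How long do I have to file an expense report?"))
```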

Related Insights

Most companies are not vanguard tech firms. Rather than pursuing speculative, high-failure-rate AI projects, small and medium-sized businesses will see a faster and more reliable ROI by using existing AI tools to automate tedious, routine internal processes.

While consumer AI tolerates some inaccuracy, enterprise systems like customer service chatbots require near-perfect reliability. Teams get frustrated because out-of-the-box RAG templates don't meet this high bar. Achieving business-acceptable accuracy requires deep, iterative engineering, not just a vanilla implementation.
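
One concrete form that iteration takes is a regression-style evaluation set that is re-scored after every retrieval or prompt change. The sketch below assumes a hypothetical `retrieve(question, k)` function and a small hand-built test set; both are placeholders.

```python
# Hypothetical regression check for a RAG pipeline: each entry pairs a question
# with the document ID that must appear in the retrieved results for the answer
# to have a chance of being correct.
EVAL_SET = [
    {"question": "What is the refund window?", "expected_doc": "policy-refunds"},
    {"question": "Who approves travel over $5,000?", "expected_doc": "policy-travel"},
]

def retrieval_hit_rate(retrieve, k: int = 5) -> float:
    """Fraction of eval questions whose expected document appears in the top-k results.

    `retrieve(question, k)` is assumed to return a list of document IDs.
    """
    hits = sum(
        1 for case in EVAL_SET
        if case["expected_doc"] in retrieve(case["question"], k)
    )
    return hits / len(EVAL_SET)

# Re-run after every chunking, embedding, or prompt change and track the trend;
# a "business-acceptable" bar might be a hit rate above 0.95 on a few hundred cases.
```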

The key for enterprises isn't integrating general AI like ChatGPT but creating "proprietary intelligence." This involves fine-tuning smaller, custom models on their unique internal data and workflows, creating a competitive moat that off-the-shelf solutions cannot replicate.
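
As a rough illustration of what "fine-tuning a smaller model on internal data" can mean in practice, the sketch below runs a standard causal-language-model fine-tune with Hugging Face `transformers`; the base model, data file, and hyperparameters are placeholder assumptions, not a prescription.

```python
# Sketch of fine-tuning a small open model on internal text.
# Assumes the `transformers` and `datasets` libraries and a local JSONL file of
# {"text": ...} records; model name and hyperparameters are illustrative only.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "distilgpt2"  # stand-in for whichever small base model is chosen
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# internal_corpus.jsonl: one {"text": "..."} record per internal document or workflow note
dataset = load_dataset("json", data_files="internal_corpus.jsonl", split="train")
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="proprietary-model", num_train_epochs=1),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("proprietary-model")
```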

While SaaS tools like Intercom offer immediate convenience, building a custom AI chatbot provides complete control over the workflow, data, and user experience. For companies with some technical capability, this initial investment leads to significant long-term cost savings and a deeply integrated, proprietary solution.

The excitement around AI often overshadows its practical business implications. Implementing LLMs involves significant compute costs that scale with usage. Product leaders must analyze the ROI of different models to ensure financial viability before committing to a solution.
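
A back-of-envelope cost model is often enough to surface that scaling problem before committing. In the sketch below, every price and traffic figure is an invented placeholder to be replaced with the vendor's actual per-token rates and measured usage.

```python
# Back-of-envelope monthly cost of an LLM-backed feature.
# All numbers are illustrative placeholders; substitute real per-token prices
# and your own traffic measurements.
def monthly_cost(requests_per_day: float,
                 input_tokens: float,
                 output_tokens: float,
                 price_in_per_1k: float,
                 price_out_per_1k: float) -> float:
    per_request = (input_tokens / 1000) * price_in_per_1k \
                + (output_tokens / 1000) * price_out_per_1k
    return per_request * requests_per_day * 30

# Example: 20,000 requests/day, ~1,500 input and ~400 output tokens each.
small_model = monthly_cost(20_000, 1_500, 400, price_in_per_1k=0.0005, price_out_per_1k=0.0015)
large_model = monthly_cost(20_000, 1_500, 400, price_in_per_1k=0.01, price_out_per_1k=0.03)
print(f"small model: ${small_model:,.0f}/month, large model: ${large_model:,.0f}/month")
# With these made-up rates the same traffic costs roughly $810 vs. $16,200 per month.
```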

While it's tempting to build custom AI sales agents, the rapid pace of innovation means any internal solution will likely become obsolete in months. Unless you are a company like Vercel with dedicated engineers passionate about the problem, it's far better to buy an off-the-shelf tool.

Initial failure is normal for enterprise AI agents because they are not just plug-and-play models. ROI is achieved by treating AI as an entire system that requires iteration across models, data, workflows, and user experience. Expecting an out-of-the-box solution to work perfectly is a recipe for disappointment.

The opportunity cost of building custom internal AI can be massive. By the time a multi-million dollar project is complete, off-the-shelf tools like ChatGPT are often far more capable, dynamic, and cost-effective, rendering the custom solution outdated on arrival.

Off-the-shelf AI models can only go so far. The true bottleneck for enterprise adoption is "digitizing judgment"—capturing the unique, context-specific expertise of employees within that company. A document's meaning can change entirely from one company to another, requiring internal labeling.

For companies given a broad "AI mandate," the most tactical and immediate starting point is to create a private, internally hosted deployment of a large language model like ChatGPT. This provides a quick win by enabling employees to leverage generative AI for productivity without exposing sensitive intellectual property or code to public models.
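
One common way to stand up such a private deployment is to serve a model behind the company firewall on an OpenAI-compatible endpoint and point the standard client at it. In the sketch below, the endpoint URL, token handling, and model name are all illustrative assumptions.

```python
# Sketch of pointing the OpenAI-compatible chat API at a privately hosted model,
# so prompts containing proprietary code or documents never leave the network.
# The base_url, API key handling, and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm.internal.example.com/v1",  # internal inference endpoint
    api_key="not-needed-behind-the-firewall",        # or an internal access token
)

resp = client.chat.completions.create(
    model="internal-llama-70b",  # whatever model the platform team serves
    messages=[
        {"role": "system", "content": "You are the company-internal assistant."},
        {"role": "user", "content": "Summarize the attached design doc for a new hire."},
    ],
)
print(resp.choices[0].message.content)
```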