Marc Benioff asserts that the true value in enterprise AI comes from grounding LLMs in a company's specific data. The success of tools like Slackbot comes not from a clever prompt but from access to the user's private context (messages, files, history) that commodity models trained on the public web lack; it is that access that creates a defensible moat.

Related Insights

Current LLMs are intelligent enough for many tasks but fail because they lack access to complete context—emails, Slack messages, past data. The next step is building products that ingest this real-world context, making it available for the model to act upon.
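
A minimal sketch of that pattern, assuming the OpenAI Python SDK and a hypothetical retrieve_private_context helper standing in for whatever index the product maintains over emails, messages, and files:

```python
# Minimal sketch: ground a commodity LLM in the user's private context.
# Assumes the OpenAI Python SDK; retrieve_private_context is a hypothetical
# stand-in for whatever index the product keeps over emails, messages, and files.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def retrieve_private_context(user_id: str, query: str) -> list[str]:
    """Hypothetical retriever over the user's private data (search index, vector store, etc.)."""
    return []  # placeholder: return the top-k snippets relevant to the query

def answer_with_context(user_id: str, question: str) -> str:
    snippets = retrieve_private_context(user_id, question)
    context_block = "\n\n".join(snippets) or "(no private context found)"
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any commodity chat model would do
        messages=[
            {"role": "system",
             "content": "Answer using the user's private context below.\n\n" + context_block},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```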

The key for enterprises isn't integrating general AI like ChatGPT but creating "proprietary intelligence." This involves fine-tuning smaller, custom models on their unique internal data and workflows, creating a competitive moat that off-the-shelf solutions cannot replicate.
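
As a rough illustration of what the data side of that fine-tuning looks like, the sketch below packages invented internal workflow records into a chat-style JSONL file of the kind most open fine-tuning stacks accept; the records, schema, and file name are all assumptions.

```python
# Sketch: packaging internal workflow records as supervised fine-tuning data.
# The records, file name, and JSONL schema are illustrative; most open
# fine-tuning stacks accept a chat-style format along these lines.
import json

internal_examples = [
    {
        "instruction": "Classify this order exception per our returns policy.",
        "input": "Customer reports the item arrived damaged; order placed 9 days ago.",
        "output": "Eligible: damaged on arrival, inside the 30-day window. Route to the RMA queue.",
    },
]

with open("finetune_data.jsonl", "w") as f:
    for ex in internal_examples:
        record = {"messages": [
            {"role": "user", "content": ex["instruction"] + "\n\n" + ex["input"]},
            {"role": "assistant", "content": ex["output"]},
        ]}
        f.write(json.dumps(record) + "\n")
```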

The LLM itself only creates the opportunity for agentic behavior. The actual business value is unlocked when an agent is given runtime access to high-value data and tools, allowing it to perform actions and complete tasks. Without this runtime context, agents are merely sophisticated Q&A bots querying old data.
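
A toy sketch of that distinction, with hypothetical tools standing in for live enterprise systems; the dispatcher plays the role of the runtime that executes a model-chosen action rather than looking up stale data.

```python
# Toy sketch of runtime tool access. The tools are hypothetical stand-ins for
# live systems; the dispatcher is the "runtime" that executes a model-chosen
# action instead of answering from stale, pre-indexed data.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]  # takes an argument string, returns an observation

def lookup_crm_account(account_id: str) -> str:
    """Hypothetical live data source queried at request time."""
    return f"Account {account_id}: renewal due in 14 days, open ticket #4821"

def escalate_ticket(ticket_id: str) -> str:
    """Hypothetical action tool; this is what turns an answer into a completed task."""
    return f"Escalated ticket {ticket_id} to tier 2"

TOOLS = {t.name: t for t in (
    Tool("lookup_crm_account", "Fetch live account status from the CRM", lookup_crm_account),
    Tool("escalate_ticket", "Escalate a support ticket", escalate_ticket),
)}

def execute(tool_name: str, argument: str) -> str:
    """The agent runtime: the LLM proposes (tool_name, argument); this executes it."""
    tool = TOOLS.get(tool_name)
    return tool.run(argument) if tool else f"Unknown tool: {tool_name}"

# Example: actions a model might choose for "check account 0042 and escalate its ticket".
print(execute("lookup_crm_account", "0042"))
print(execute("escalate_ticket", "#4821"))
```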

Since LLMs are commodities, sustainable competitive advantage in AI comes from leveraging proprietary data and unique business processes that competitors cannot replicate. Companies must focus on building AI that understands their specific "secret sauce."

The AI revolution may favor incumbents, not just startups. Large companies possess vast, proprietary datasets. If they quickly fine-tune custom LLMs with this data, they can build a formidable competitive moat that an AI startup, starting from scratch, cannot easily replicate.

Salesforce CEO Marc Benioff claims large language models (LLMs) are becoming commoditized infrastructure, analogous to disk drives. He believes the idea of a specific model providing a sustainable competitive advantage (a "moat") has "expired," suggesting long-term value will shift to applications, proprietary data, and distribution.

Ali Ghodsi argues that while public LLMs are a commodity, the true value for enterprises is applying AI to their private data. This is impossible without first building a modern data foundation that allows the AI to securely and effectively access and reason on that information.
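
A minimal sketch of the access-control half of that foundation, using an invented permission check in place of a real data platform's entitlements: documents are filtered by what the requesting user may see before anything reaches the model.

```python
# Sketch of the "secure access" half of a data foundation: an invented
# permission check filters documents by the requesting user's entitlements
# before anything is handed to the model.
def user_can_read(user_id: str, doc_id: str) -> bool:
    """Hypothetical hook into the data platform's access-control lists."""
    return doc_id.startswith("public/") or user_id == "finance_analyst"

def fetch_for_model(user_id: str, doc_ids: list[str], store: dict[str, str]) -> list[str]:
    """Only documents the user is already entitled to see are passed along as context."""
    return [store[d] for d in doc_ids if d in store and user_can_read(user_id, d)]

corpus = {
    "public/pricing.md": "List prices and discount tiers...",
    "finance/board_deck.md": "Confidential quarterly projections...",
}
print(fetch_for_model("finance_analyst", list(corpus), corpus))  # both documents
print(fetch_for_model("support_intern", list(corpus), corpus))   # public document only
```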

AI models are becoming commodities; the real, defensible value lies in proprietary data and user context. The correct strategy is for companies to use LLMs to enhance their existing business and data, rather than selling their valuable context to model providers for pennies on the dollar.

If a company and its competitor both ask a generic LLM for strategy, they'll get the same answer, erasing any edge. The only way to generate unique, defensible strategies is by building evolving models trained on a company's own private data.

General AI models understand the world but not a company's specific data. The X-Lake reasoning engine provides a crucial layer that connects to an enterprise's varied data lakes, giving AI agents the context they need to operate effectively on internal data at petabyte scale.