Roland Bush asserts that foundational LLMs alone are insufficient for industrial applications, where their unreliability makes them dangerous. He argues that achieving the required 95%+ accuracy depends on augmenting these models with highly specific, proprietary data from machines, operations, and past fixes.
Current LLMs are intelligent enough for many tasks but fail because they lack access to complete context: emails, Slack messages, past data. The next step is building products that ingest this real-world context and make it available for the model to act on.
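A minimal sketch of that ingestion pattern: stub connectors (fetch_emails, fetch_slack) stand in for real integrations such as an email API or Slack search, and an injected llm_complete callable stands in for any chat-completion endpoint. None of these names come from a specific product.

```python
# Stub connectors standing in for real data sources; in practice these would
# call an email API, Slack search, a ticketing database, etc.
def fetch_emails(query: str) -> list[str]:
    return ["2024-03-01 email: vendor confirmed the bracket spec changed."]

def fetch_slack(query: str) -> list[str]:
    return ["#ops thread: same fault seen on line 2, fixed by recalibration."]

def gather_context(query: str) -> str:
    """Pull snippets from the systems where the real context lives."""
    snippets = fetch_emails(query) + fetch_slack(query)
    return "\n---\n".join(snippets)

def answer_with_context(query: str, llm_complete) -> str:
    """Hand the model the context it otherwise lacks, then ask the question."""
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{gather_context(query)}\n\n"
        f"Question: {query}"
    )
    return llm_complete(prompt)
```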
LLMs have hit a wall: nearly all available public data has already been scraped. The next phase of AI development and competitive differentiation will come from training models on high-quality, proprietary data generated by human experts. This creates a booming "data as a service" industry for companies like Micro One, which recruit and manage these experts.
The rapid progress of many LLMs was possible because they could leverage the same massive public dataset: the internet. In robotics, no such public corpus of robot interaction data exists. This “data void” means progress is tied to a company's ability to generate its own proprietary data.
Public internet data has been largely exhausted for training AI models. The real competitive advantage, and the source of next-generation specialized AI, will be the vast, untapped reservoirs of proprietary data locked inside corporations, such as R&D data from pharmaceutical or semiconductor companies.
Since LLMs are commodities, sustainable competitive advantage in AI comes from leveraging proprietary data and unique business processes that competitors cannot replicate. Companies must focus on building AI that understands their specific "secret sauce."
Off-the-shelf AI models can only go so far. The true bottleneck for enterprise adoption is "digitizing judgment": capturing the unique, context-specific expertise of employees within a given company. A document's meaning can change entirely from one company to another, which is why labeling must happen internally.
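A toy illustration of that point: the same document receives a different label under each company's internal rules. The label sets and rules below are invented for illustration.

```python
# The same document, labeled by two different companies' internal rules.
DOC = "Change order: torque spec revised from 12 Nm to 15 Nm."

LABELERS = {
    # Company A treats spec revisions as routine engineering updates.
    "company_a": lambda doc: "routine_update" if "revised" in doc else "other",
    # Company B treats anything touching torque as safety-critical.
    "company_b": lambda doc: "safety_review" if "torque" in doc else "other",
}

for company, labeler in LABELERS.items():
    print(company, "->", labeler(DOC))
# company_a -> routine_update
# company_b -> safety_review
```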
Salesforce is reintroducing deterministic automation because its generative AI agents struggle with reliability, dropping instructions when given more than eight commands. This pullback signals that current LLMs are not yet ready for high-stakes, consistent enterprise workflows.
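One way to read the pullback is as a hybrid pattern: keep well-specified, high-stakes steps deterministic and confine the model to open-ended ones. A sketch under that assumption, with an invented order workflow and an injected llm callable, not Salesforce's actual design:

```python
def run_order_workflow(order: dict, llm) -> dict:
    # Deterministic steps: exact, auditable, never "dropped".
    order["tax"] = round(order["amount"] * 0.08, 2)    # fixed calculation
    if order["amount"] > 10_000:
        order["approval"] = "escalate_to_manager"      # hard business rule

    # Generative step: confined to where judgment over free text helps.
    order["summary"] = llm(f"Summarize this order for the audit log: {order}")
    return order
```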
AI tools like LLMs thrive on large, structured datasets. In manufacturing, critical information is often unstructured "tribal knowledge" in workers' heads. Dirac's strategy is to first build a software layer that captures and organizes this human expertise, creating the necessary context for AI to then analyze and add value.
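A sketch of that capture-first sequence, using an invented FixRecord schema rather than Dirac's actual data model: knowledge is structured first, and only then serialized into model context.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class FixRecord:
    machine: str
    symptom: str
    root_cause: str
    fix: str
    author: str  # whose head this knowledge came out of

knowledge_base = [
    FixRecord("press_04", "intermittent jam", "worn feed roller",
              "replace roller, re-tension belt", "j.alvarez"),
]

# Once structured, records can be serialized straight into model context.
context = json.dumps([asdict(r) for r in knowledge_base], indent=2)
print(context)
```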
Ali Ghodsi argues that while public LLMs are a commodity, the true value for enterprises is applying AI to their private data. This is impossible without first building a modern data foundation that lets the AI securely and effectively access and reason over that information.
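A minimal sketch of one piece of such a foundation, permission-aware retrieval: access control is enforced before anything reaches the model's context. The records and ACL scheme are illustrative, not Databricks' implementation.

```python
RECORDS = [
    {"text": "Q3 churn analysis", "acl": {"analytics", "exec"}},
    {"text": "M&A target shortlist", "acl": {"exec"}},
]

def retrieve_for(user_groups: set[str], query: str) -> list[str]:
    # Enforce permissions before anything reaches the model's context.
    return [r["text"] for r in RECORDS
            if r["acl"] & user_groups and query.lower() in r["text"].lower()]

print(retrieve_for({"analytics"}, "churn"))  # ['Q3 churn analysis']
```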
If a company and its competitor both ask a generic LLM for strategy, they'll get the same answer, erasing any edge. The only way to generate unique, defensible strategies is by building evolving models trained on a company's own private data.