While hallucinations from consumer generative AI carry low stakes, industrial AI cannot afford errors. This has put a premium on companies with unique, real-world datasets that are verifiable and critical for high-stakes decisions where failure could be catastrophic, such as an explosion.

Related Insights

To avoid AI hallucinations, Square's AI tools translate merchant queries into deterministic actions. For example, a query about sales on rainy days prompts the AI to write and execute real SQL code against a data warehouse, ensuring grounded, accurate results.
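
A minimal sketch of this pattern, using SQLite in place of a real data warehouse; the sales table, its schema, and the ask_llm helper are illustrative assumptions, not Square's actual implementation.

```python
# Sketch: ground an LLM answer by executing generated SQL against real data.
# The table name, schema, and ask_llm() stub are hypothetical.
import sqlite3

def ask_llm(prompt: str) -> str:
    """Placeholder for a model call that returns a SQL string only."""
    # A real system would pass the warehouse schema in the prompt and
    # instruct the model to emit nothing but SQL.
    return (
        "SELECT SUM(total) AS rainy_day_sales "
        "FROM sales WHERE weather = 'rain';"
    )

def answer_with_sql(question: str, conn: sqlite3.Connection):
    # 1. The LLM translates the natural-language question into SQL.
    sql = ask_llm(f"Translate to SQL over the 'sales' table: {question}")
    # 2. The SQL runs against the warehouse, so the numbers in the answer
    #    come from real data, not from the model's imagination.
    return conn.execute(sql).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (total REAL, weather TEXT)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [(120.0, "rain"), (80.0, "sun"), (45.5, "rain")])
    print(answer_with_sql("How much did I sell on rainy days?", conn))
```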

The stakes for data quality are now higher than ever. An agent that pulls the wrong document can cause severe consequences, while one with access to clean, well-organized information delivers a huge competitive edge. This dynamic will push organizations toward better documentation and data-organization practices.

Public internet data has been largely exhausted for training AI models. The real competitive advantage and source for next-generation, specialized AI will be the vast, untapped reservoirs of proprietary data locked inside corporations, like R&D data from pharmaceutical or semiconductor companies.

For applications in banking, insurance, or healthcare, reliability is paramount. Startups that architect their systems from the ground up to prevent hallucinations will have a fundamental advantage over those trying to incrementally reduce errors in general-purpose models.

Roland Busch asserts that foundational LLMs alone are insufficient and dangerous for industrial applications due to their unreliability. He argues that achieving the required 95%+ accuracy depends on augmenting these models with highly specific, proprietary data from machines, operations, and past fixes.

To solve data integrity issues with unstructured information like corporate announcements, multiple competing AI models can be used to reach a consensus. By having models from OpenAI, Google, and Anthropic agree on the key data points, a highly reliable 'unified golden record' can be established and immutably stored on-chain.
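
A rough sketch of that consensus step, assuming each provider exposes an extractor that returns the same field names; the majority-vote rule, the field names, and the dummy extractors are illustrative, and the on-chain storage is only noted in a comment.

```python
# Sketch: build a "golden record" from fields that multiple models agree on.
# The extractor callables stand in for OpenAI, Google, and Anthropic API calls.
from collections import Counter
from typing import Callable, Dict, List, Optional

def consensus(values: List[str], min_agreement: int = 2) -> Optional[str]:
    """Return a value only if enough independent models agree on it."""
    value, count = Counter(values).most_common(1)[0]
    return value if count >= min_agreement else None

def build_golden_record(announcement: str,
                        extractors: List[Callable[[str], Dict[str, str]]]) -> Dict[str, str]:
    # Each extractor returns its reading of the key data points.
    readings = [extract(announcement) for extract in extractors]
    fields = set().union(*(r.keys() for r in readings))
    record = {}
    for field in sorted(fields):
        agreed = consensus([r.get(field, "") for r in readings])
        if agreed:
            record[field] = agreed  # only consensus values enter the record
    return record  # in the described pipeline, this record is then stored on-chain

if __name__ == "__main__":
    # Dummy extractors simulating three model providers; one misreads a digit.
    announcement = "Acme Corp declares a dividend of $0.42 per share, ex-date 2024-05-01."
    extractors = [
        lambda a: {"dividend_per_share": "0.42", "ex_date": "2024-05-01"},
        lambda a: {"dividend_per_share": "0.42", "ex_date": "2024-05-01"},
        lambda a: {"dividend_per_share": "0.24", "ex_date": "2024-05-01"},
    ]
    print(build_golden_record(announcement, extractors))
```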

To deploy LLMs in high-stakes environments like finance, combine them with deterministic checks. For example, use a traditional algorithm to calculate cash flow and only surface the LLM's answer if it falls within an acceptable range. This prevents hallucinations and ensures reliability.
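
A small sketch of that gating pattern; the cash-flow formula, the 2% tolerance, and the llm_estimate_cash_flow stub are assumptions for illustration rather than any specific production system.

```python
# Sketch: surface an LLM's numeric answer only if it passes a deterministic check.

def deterministic_cash_flow(inflows: list[float], outflows: list[float]) -> float:
    """Ground-truth calculation: simple net cash flow."""
    return sum(inflows) - sum(outflows)

def llm_estimate_cash_flow(question: str) -> float:
    """Placeholder for an LLM call whose response is parsed into a number."""
    return 1480.0

def surfaced_answer(question: str, inflows: list[float], outflows: list[float],
                    tolerance: float = 0.02) -> float:
    truth = deterministic_cash_flow(inflows, outflows)
    estimate = llm_estimate_cash_flow(question)
    # Only surface the LLM's answer if it lands within the acceptable band
    # around the deterministically computed value.
    if truth != 0 and abs(estimate - truth) / abs(truth) <= tolerance:
        return estimate
    return truth  # fall back to the deterministic number (or flag for review)

if __name__ == "__main__":
    print(surfaced_answer("What was net cash flow last quarter?",
                          inflows=[2000.0, 500.0], outflows=[1020.0]))
```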

AI can generate vast amounts of content, but its value is limited by our ability to verify its accuracy. Verification is fast for visual outputs (images, UI), where the eye instantly spots flaws, but slow and difficult for abstract domains like back-end code, math, or financial data, which require deep expertise to validate.

As AI commoditizes software creation, the primary source of sustainable value shifts from the software itself to the unique, high-quality data that AI agents use for decision-making. Businesses must re-center their strategy around data as the core asset.

A shocking 30% of generative AI projects are abandoned after the proof-of-concept stage. The root cause isn't the AI's intelligence, but foundational issues like poor data quality, inadequate risk controls, and escalating costs, all of which stem from weak data management and infrastructure.