While AI proofs-of-concept are easy to build, SAP's CTO says the real engineering hurdle is scaling them reliably. The complexity lies in managing thousands of APIs, handling massive document volumes, and applying granular, user-specific context (like regional policies) consistently and accurately.
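Applying user-specific context "consistently" usually means resolving the right rules for each user before every AI call, rather than hoping the model infers them. A minimal sketch of that idea, where all names (`GLOBAL_POLICIES`, `REGIONAL_OVERRIDES`, `resolve_policies`) are hypothetical illustrations, not SAP's actual design:

```python
# Hedged sketch: merge global defaults with region-specific overrides
# so every request carries the same, deterministic policy context.

GLOBAL_POLICIES = {"data_retention_days": 365}

REGIONAL_OVERRIDES = {
    "EU": {"data_retention_days": 30, "requires_gdpr_notice": True},
    "US": {"requires_ccpa_notice": True},
}


def resolve_policies(region: str) -> dict:
    """Return the effective policy set for a user's region."""
    merged = dict(GLOBAL_POLICIES)          # start from global defaults
    merged.update(REGIONAL_OVERRIDES.get(region, {}))  # apply overrides
    return merged


if __name__ == "__main__":
    print(resolve_policies("EU"))  # EU retention overrides the global value
    print(resolve_policies("APAC"))  # unknown region falls back to defaults
```

The point is that the resolution logic lives in deterministic infrastructure, so the AI layer receives already-correct context instead of re-deriving policy per request.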

Related Insights

Even the most advanced AI is ineffective without business context. The CEO estimates 90% of crucial company knowledge—strategy, rationale, priorities—is undocumented and simply "floats in the air." This lack of structured, accessible context is a bigger barrier to AI adoption than the technology itself.

SAP’s CTO views AI not as a feature but a fundamental architectural shift akin to the cloud transition. It requires re-engineering software at three levels: creating dynamic 'Generative UIs', automating 'Business Processes' with agents, and building a unified 'Data Layer' to power intelligence.

The primary barrier for enterprise AI is the 'context gap.' Models trained on public data have no understanding of your specific business—its metrics, language, or history. The key is building infrastructure to feed this proprietary context to the AI, not waiting for smarter models.
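"Building infrastructure to feed proprietary context to the AI" is, at its simplest, a retrieval step that grounds a generic model in company-specific facts before it answers. A minimal sketch under that assumption; the names (`BusinessContextStore`, `build_prompt`) and the naive keyword lookup are illustrative only, standing in for real retrieval infrastructure:

```python
# Hedged sketch of a "context layer": retrieve proprietary business
# facts relevant to a question and prepend them to the model prompt.

from dataclasses import dataclass, field


@dataclass
class BusinessContextStore:
    """Hypothetical store of proprietary facts keyed by topic."""
    facts: dict[str, list[str]] = field(default_factory=dict)

    def add(self, topic: str, fact: str) -> None:
        self.facts.setdefault(topic, []).append(fact)

    def retrieve(self, query: str) -> list[str]:
        # Naive keyword match; a real system would use embeddings/search.
        return [
            fact
            for topic, facts in self.facts.items()
            if topic in query.lower()
            for fact in facts
        ]


def build_prompt(store: BusinessContextStore, question: str) -> str:
    """Assemble a prompt that grounds a generic model in company context."""
    context = store.retrieve(question)
    context_block = "\n".join(f"- {c}" for c in context) or "- (none found)"
    return (
        "Answer using ONLY the company context below.\n"
        f"Company context:\n{context_block}\n\n"
        f"Question: {question}"
    )


if __name__ == "__main__":
    store = BusinessContextStore()
    store.add("churn", "Churn = cancellations / active accounts, monthly.")
    store.add("churn", "Target churn for this fiscal year is under 2%.")
    print(build_prompt(store, "How do we define churn?"))
```

The design choice the insight argues for is visible here: the model stays generic, and closing the "context gap" is the responsibility of the retrieval infrastructure around it.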

Despite AI models showing dramatic improvements, enterprise adoption is slow. The key barriers are not capability gaps but concerns around reliability, safety, compliance, and the inability to predictably measure and upgrade performance in a corporate environment. This is an operational challenge, not a technical one.

Many organizations excel at building accurate AI models but fail to deploy them successfully. The real bottlenecks are fragile systems, poor data governance, and outdated security, not the model's predictive power. This "deployment gap" is a critical, often overlooked challenge in enterprise AI.

While AI models improved 40-60% and consumer use is high, only 5% of enterprise GenAI deployments are working. The bottleneck isn't the model's capability but the surrounding challenges of data infrastructure, workflow integration, and establishing trust and validation, a process that could take a decade.

The primary reason multi-million dollar AI initiatives stall or fail is not the sophistication of the models, but the underlying data layer. Traditional data infrastructure creates delays in moving and duplicating information, preventing the real-time, comprehensive data access required for AI to deliver business value. The focus on algorithms misses this foundational roadblock.

AI's promise to revolutionize enterprise work is hindered by legacy systems like SAP. Their critical domain knowledge isn't in a clean data layer but embedded in complex UIs and middleware. This "data gravity" will significantly slow down the pace of AI integration in large corporations.

The excitement around AI capabilities often masks the real hurdle to enterprise adoption: infrastructure. Success is not determined by the model's sophistication, but by first solving foundational problems of security, cost control, and data integration. This requires a shift from an application-centric to an infrastructure-first mindset.

The key to valuable enterprise AI is solving the underlying data problem first. Knowledge is fragmented across systems and employee heads. Build a platform to unify this data before applying AI, which becomes the final, easier step.