AI agents are simply 'context and actions.' To prevent hallucination and failure, they must be grounded in rich context. This is best provided by a knowledge graph built from the unique data and metadata collected across a platform, creating a powerful, defensible moat.
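The grounding idea can be sketched in a few lines. This is a minimal, hypothetical illustration (the graph, entity names, and helper functions are invented, not any specific platform's API): facts from a knowledge graph are prepended to the prompt so the model answers from data rather than guessing.

```python
# Minimal sketch: ground an agent's prompt in a knowledge graph.
# The graph contents and helper names here are hypothetical.
from collections import defaultdict

class KnowledgeGraph:
    """Tiny in-memory graph of (subject, relation, object) triples."""
    def __init__(self):
        self.triples = defaultdict(list)

    def add(self, subject, relation, obj):
        self.triples[subject].append((relation, obj))

    def context_for(self, entity):
        """Return grounding facts about an entity as prompt-ready lines."""
        return [f"{entity} {rel} {obj}" for rel, obj in self.triples[entity]]

kg = KnowledgeGraph()
kg.add("Order-42", "placed_by", "Acme Corp")
kg.add("Order-42", "status", "delayed")

def build_prompt(question, entity, kg):
    """Prepend graph facts so the model answers from data, not guesses."""
    facts = "\n".join(kg.context_for(entity))
    return f"Known facts:\n{facts}\n\nQuestion: {question}"

print(build_prompt("Why is the order late?", "Order-42", kg))
```

The moat lives in the triples, not the prompt template: a competitor with the same LLM but without the platform's accumulated data cannot produce the same grounded answer.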
An autonomous agent is a complete software system, not merely a feature of an LLM. Dell's CTO defines it by four key components: an LLM (for reasoning), a knowledge graph (for specialized memory), MCP (for tool use), and A2A protocols (for agent collaboration).
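The four components can be wired together as a sketch. Everything below is illustrative stand-ins, not a real framework: the LLM is a stub function, the knowledge graph a dict, MCP tools a map of callables, and A2A peers a list of other agents.

```python
# Hedged sketch of the four-component agent: reasoning (LLM), memory
# (knowledge graph), tool use (MCP), and collaboration (A2A).
# All names and the routing logic are illustrative assumptions.

class Agent:
    def __init__(self, llm, knowledge_graph, mcp_tools, a2a_peers):
        self.llm = llm             # reasoning engine
        self.kg = knowledge_graph  # specialized long-term memory
        self.tools = mcp_tools     # callable tools exposed via MCP
        self.peers = a2a_peers     # other agents reachable via A2A

    def handle(self, task):
        facts = self.kg.get(task, "no prior knowledge")
        if task in self.tools:     # prefer a tool when one matches
            return self.tools[task]()
        if self.peers:             # otherwise delegate to a peer agent
            return self.peers[0].handle(task)
        return self.llm(f"Task: {task}. Facts: {facts}")

# Wiring with stand-ins for each component:
echo_llm = lambda prompt: f"LLM answer to: {prompt}"
agent = Agent(
    llm=echo_llm,
    knowledge_graph={"restart-db": "db restarts take ~2 min"},
    mcp_tools={"restart-db": lambda: "db restarted"},
    a2a_peers=[],
)
```

The point of the decomposition is that each slot is swappable: the same routing shell works whether the LLM, graph, tools, or peers are upgraded independently.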
The effectiveness of enterprise AI agents is limited not by data access, but by the absence of context for *why* decisions were made. 'Context graphs' aim to solve this by capturing 'decision traces'—the exceptions, precedents, and overrides that currently live in Slack threads and employees' heads—creating a true source of truth for automation.
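A 'decision trace' can be pictured as a small structured record. The field names below are assumptions about what such a record might capture; the essential point is storing the *why* (rationale, overridden rule, precedent) alongside the *what*.

```python
# Illustrative sketch of a decision-trace record for a context graph.
# Field names are assumptions, not a defined schema.
from dataclasses import dataclass

@dataclass
class DecisionTrace:
    decision: str        # what was decided
    rationale: str       # why: the context usually lost to Slack threads
    overrides: str = ""  # which rule or default was overridden, if any
    precedent: str = ""  # earlier decision this one leaned on

traces = []

def record(decision, rationale, **kwargs):
    trace = DecisionTrace(decision, rationale, **kwargs)
    traces.append(trace)
    return trace

record("Waived late fee for Acme",
       "Long-time customer; the shipping delay was ours",
       overrides="standard-late-fee-policy")

# An agent handling a similar case can now query precedents
# instead of guessing:
matches = [t for t in traces if "late fee" in t.decision.lower()]
```

Once exceptions are recorded this way, automation can cite a precedent rather than silently re-deriving (or violating) the unwritten rule.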
Instead of simply bolting on AI features, treat your AI as the product's most important user. Your unique data, content, and existing functionality are "superpowers" that differentiate your AI from generic models; leveraging these proprietary assets creates a durable competitive advantage.
The effectiveness of agentic AI in complex domains like IT Ops hinges on "context engineering." This involves strategically selecting the right data (logs, metrics) to feed the LLM, preventing garbage-in-garbage-out, reducing costs, and avoiding hallucinations for precise, reliable answers.
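Context engineering in this sense is largely filtering and budgeting before the prompt is built. The log format, severity scheme, and budget below are simplified assumptions, but they show the shape of the technique: keep only high-signal lines for the relevant service, ranked by severity, truncated to control cost.

```python
# Sketch of context engineering for IT Ops: select only relevant
# signals before prompting the model. Log format is hypothetical.

logs = [
    {"service": "db", "level": "ERROR", "msg": "connection pool exhausted"},
    {"service": "web", "level": "INFO", "msg": "healthcheck ok"},
    {"service": "db", "level": "WARN", "msg": "slow query: 4.2s"},
    {"service": "cache", "level": "INFO", "msg": "eviction cycle complete"},
]

def engineer_context(logs, service, min_level="WARN", budget=2):
    """Keep high-signal lines for one service, within a size budget."""
    severity = {"INFO": 0, "WARN": 1, "ERROR": 2}
    relevant = [l for l in logs
                if l["service"] == service
                and severity[l["level"]] >= severity[min_level]]
    # Most severe first, then truncate to the budget to control cost.
    relevant.sort(key=lambda l: severity[l["level"]], reverse=True)
    return [f'[{l["level"]}] {l["msg"]}' for l in relevant[:budget]]

context = engineer_context(logs, "db")
```

Feeding the model two relevant lines instead of four mixed ones is the whole trade: lower cost, less noise to hallucinate from, and an answer anchored to the actual failing service.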
A critical learning at LinkedIn was that pointing an AI at an entire company drive for context results in poor performance and hallucinations. The team had to manually curate "golden examples" and specific knowledge bases to train agents effectively, as the AI couldn't discern quality on its own.
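Curation of this kind often surfaces as few-shot prompting: a small set of vetted question/answer pairs anchors the model's tone and scope, in place of an unfiltered drive. The examples and prompt shape below are invented for illustration, not LinkedIn's actual setup.

```python
# Sketch of curating 'golden examples' for few-shot prompting rather
# than pointing the model at an entire drive. Examples are invented.

golden_examples = [
    {"question": "How do I reset my VPN token?",
     "answer": "Open the IT portal, choose 'VPN', click 'Reset token'."},
    {"question": "Where are expense policies?",
     "answer": "Finance wiki, 'Travel & Expense' page, updated quarterly."},
]

def build_few_shot_prompt(question, examples):
    """Curated Q/A pairs set the standard; the new question goes last."""
    shots = "\n\n".join(
        f"Q: {ex['question']}\nA: {ex['answer']}" for ex in examples
    )
    return f"{shots}\n\nQ: {question}\nA:"

prompt = build_few_shot_prompt("How do I book a conference room?",
                               golden_examples)
```

The curation step is the human judgment the AI lacked: someone decided these two answers were correct and representative, which no amount of raw drive access could substitute for.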
The durable investment opportunities in agentic AI tooling fall into three categories that will persist across model generations. These are: 1) connecting agents to data for better context, 2) orchestrating and coordinating parallel agents, and 3) providing observability and monitoring to debug inevitable failures.
The next frontier for AI isn't just personal assistants but "teammates" that understand an entire team's dynamics, projects, and shared data. This shifts the focus from single-user interactions to collaborative intelligence by building a knowledge graph connecting people and their work.
To build coordinated AI agent systems, firms must first extract siloed operational knowledge. This involves not just digitizing documents but systematically observing employee actions like browser clicks and phone calls to capture unwritten processes, turning this tacit knowledge into usable context for AI.
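One way to picture turning observed actions into usable context is simple process mining over action streams. The event names and the frequency heuristic below are simplified assumptions, not a production telemetry pipeline.

```python
# Sketch of distilling observed employee actions into a reusable
# process. Event names and the mining heuristic are assumptions.
from collections import Counter

# Observed action streams for the same task, e.g. from browser telemetry:
sessions = [
    ["open_crm", "search_customer", "check_credit", "approve_order"],
    ["open_crm", "search_customer", "approve_order"],
    ["open_crm", "search_customer", "check_credit", "approve_order"],
]

def mine_common_process(sessions, threshold=0.5):
    """Keep steps seen in at least `threshold` of sessions, in order."""
    counts = Counter(step for s in sessions for step in dict.fromkeys(s))
    keep = {step for step, n in counts.items()
            if n / len(sessions) >= threshold}
    # Preserve the step order of the longest observed session.
    reference = max(sessions, key=len)
    return [step for step in reference if step in keep]

process = mine_common_process(sessions)
```

The mined sequence makes the unwritten process explicit: the `check_credit` step, which some employees skip, is surfaced as part of the norm rather than living only in veterans' heads.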
AI-generated "work slop"—plausible but low-substance content—arises from a lack of specific context. The cure is not just user training but building systems that ingest and index a user's entire work graph, providing the necessary grounding to move from generic drafts to high-signal outputs.
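The ingest-and-index cure amounts to retrieval-grounded drafting. The toy index and keyword-overlap scoring below stand in for real retrieval (embeddings, permissions, freshness); they are assumptions, but the grounding mechanism is the same.

```python
# Sketch of grounding a draft in the user's indexed work graph to
# avoid generic 'work slop'. Index and scoring are toy stand-ins.

work_graph = [
    {"doc": "Q3 roadmap", "text": "ship the billing migration by October"},
    {"doc": "Standup notes", "text": "billing migration blocked on schema review"},
    {"doc": "Lunch menu", "text": "tacos on thursday"},
]

def retrieve(query, index, k=2):
    """Rank documents by crude keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(item["text"].lower().split())), item)
              for item in index]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [item for score, item in scored[:k] if score > 0]

def grounded_draft(query, index):
    sources = retrieve(query, index)
    cited = "; ".join(f'{s["doc"]}: {s["text"]}' for s in sources)
    return f"Draft update on '{query}' grounded in: {cited}"

draft = grounded_draft("billing migration status", work_graph)
```

A draft built this way cites the roadmap and the blocker from standup rather than emitting a plausible but hollow status paragraph, which is the difference between slop and signal.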
Research shows employees are rapidly adopting AI agents. The primary risk isn't a lack of adoption but that these agents are handicapped by fragmented, incomplete, or siloed data. To succeed, companies must first focus on creating structured, centralized knowledge bases for AI to leverage effectively.