The effectiveness of enterprise AI agents is limited not by data access but by the absence of context for *why* decisions were made. 'Context graphs' aim to solve this by capturing 'decision traces' (the exceptions, precedents, and overrides that currently live in Slack threads and employees' heads), creating a genuine source of truth for automation.
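As a rough sketch of what one such trace might look like (the field names are illustrative, not a published schema), a decision record would tie the action to its rationale, the policy it overrode, and the precedents it leaned on:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionTrace:
    """One node in a context graph: a decision plus the context around it."""
    decision: str                    # what was decided
    actor: str                       # who decided
    rationale: str                   # the 'why' that normally lives in a Slack thread
    overrides: Optional[str] = None  # policy or default that was overridden, if any
    precedents: list = field(default_factory=list)  # ids of earlier traces cited
    ts: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# An exception that would otherwise survive only in chat history:
trace = DecisionTrace(
    decision="waived late fee for account #4821",
    actor="j.doe",
    rationale="long-tenure account; the missed payment was caused by our outage",
    overrides="billing-policy/late-fees",
    precedents=["trace-2023-117"],
)
```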
Current LLMs are intelligent enough for many tasks but fail because they lack access to complete context: emails, Slack messages, historical data. The next step is building products that ingest this real-world context and make it available for the model to act on.
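A minimal sketch of that ingestion step, assuming Slack-style message dicts and a hypothetical pre-parsed email shape: each source is normalized into one common record the model can retrieve over.

```python
from dataclasses import dataclass

@dataclass
class ContextItem:
    source: str   # "email", "slack", "crm", ...
    author: str
    text: str
    ts: float     # unix timestamp, for ordering across sources

def from_slack(msg: dict) -> ContextItem:
    # Slack messages carry "user", "text", and a string "ts"; map them over.
    return ContextItem("slack", msg["user"], msg["text"], float(msg["ts"]))

def from_email(mail: dict) -> ContextItem:
    # 'mail' is a hypothetical pre-parsed shape, not a real library's output.
    return ContextItem("email", mail["from"], mail["body"], mail["date"])

item = from_slack({"user": "U123", "text": "shipping Friday instead", "ts": "1700000000.0001"})
```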
Effective enterprise AI needs a contextual layer—an 'InstaBrain'—that codifies tribal knowledge. Critically, this memory must be editable, allowing the system to prune old context and prioritize new directives, just as a human team would shift focus from revenue growth one quarter to margin protection the next.
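One way such an editable memory could work, sketched with illustrative names: directives are first-class objects that can be retired and re-prioritized, so stale goals stop competing for the context window.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Directive:
    text: str
    priority: int
    added: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    active: bool = True

class EditableMemory:
    def __init__(self):
        self._items = []

    def add(self, text: str, priority: int = 0) -> Directive:
        d = Directive(text, priority)
        self._items.append(d)
        return d

    def retire(self, text_prefix: str) -> None:
        """Prune: deactivate stale directives instead of letting them compete."""
        for d in self._items:
            if d.text.startswith(text_prefix):
                d.active = False

    def top(self, k: int = 5) -> list:
        """Newest, highest-priority active directives win the context window."""
        live = [d for d in self._items if d.active]
        live.sort(key=lambda d: (d.priority, d.added), reverse=True)
        return [d.text for d in live[:k]]

mem = EditableMemory()
mem.add("Optimize for revenue growth", priority=1)
mem.retire("Optimize for revenue")           # the quarter changes...
mem.add("Protect gross margin", priority=2)  # ...so the memory is edited, not appended to
print(mem.top())
```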
Rather than programming AI agents with a company's formal policies, a more powerful approach is to let them observe thousands of actual 'decision traces.' This allows the AI to discover the organization's emergent, de facto rules—how work *actually* gets done—creating a more accurate and effective world model for automation.
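A toy illustration of the idea (the trace shape and thresholds are assumptions, not a known system): tally which action actually follows each condition across many traces, then promote high-support pairs to candidate de facto rules.

```python
from collections import Counter

def emergent_rules(traces, min_support=0.8, min_count=20):
    """Surface de facto rules: (condition, action) pairs that recur across traces.

    Each trace is assumed to be a dict like
    {'condition': 'customer_tier=enterprise', 'action': 'waive_fee'}.
    """
    actions_by_condition = {}
    for t in traces:
        actions_by_condition.setdefault(t["condition"], Counter())[t["action"]] += 1
    rules = []
    for cond, counts in actions_by_condition.items():
        action, n = counts.most_common(1)[0]
        total = sum(counts.values())
        if total >= min_count and n / total >= min_support:
            rules.append((cond, action, n / total))
    return rules

traces = [{"condition": "tier=enterprise", "action": "waive_fee"}] * 25
print(emergent_rules(traces))  # [('tier=enterprise', 'waive_fee', 1.0)]
```

The point of the support threshold is that a rule nobody wrote down but everybody follows 80% of the time is a better world model than the official policy document.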
The effectiveness of agentic AI in complex domains like IT Ops hinges on "context engineering": strategically selecting the right data (logs, metrics) to feed the LLM. Done well, it prevents garbage-in, garbage-out, reduces cost, and curbs hallucinations, yielding precise, reliable answers.
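A sketch of what that selection might look like for an incident, with assumed log shapes and keyword heuristics standing in for whatever relevance signal a real system would use:

```python
def select_context(logs, incident_ts, window_s=900, budget_chars=8000,
                   keywords=("error", "timeout", "oom")):
    """Pick only log lines near the incident that look relevant, within a size budget.

    Each log entry is assumed to be a (timestamp, line) pair. Feeding everything
    to the LLM raises cost and invites hallucination; a tight slice does not.
    """
    window = [line for ts, line in logs if abs(ts - incident_ts) <= window_s]
    relevant = [l for l in window if any(k in l.lower() for k in keywords)] or window
    out, used = [], 0
    for line in relevant:
        if used + len(line) > budget_chars:
            break
        out.append(line)
        used += len(line)
    return "\n".join(out)

logs = [(1000.0, "INFO healthy"), (1400.0, "ERROR db timeout"), (1410.0, "INFO retry ok")]
print(select_context(logs, incident_ts=1405.0))
```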
Off-the-shelf AI models can only go so far. The true bottleneck for enterprise adoption is "digitizing judgment"—capturing the unique, context-specific expertise of employees within that company. A document's meaning can change entirely from one company to another, requiring internal labeling.
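A contrived example of why labels cannot be global: the same document title maps to different internal categories depending on the tenant's own taxonomy (both taxonomies below are invented).

```python
# Hypothetical per-tenant taxonomies: the same term means different things
# at different companies, so labels must come from internal definitions.
TAXONOMIES = {
    "acme_corp":  {"SOW": "legal.contract", "runbook": "ops.procedure"},
    "globex_inc": {"SOW": "sales.proposal", "runbook": "support.faq"},
}

def label(company: str, doc_title: str) -> str:
    for term, category in TAXONOMIES[company].items():
        if term.lower() in doc_title.lower():
            return category
    return "unlabeled"  # route to a human expert for internal labeling

assert label("acme_corp", "Q3 SOW draft") != label("globex_inc", "Q3 SOW draft")
```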
Despite AI's capabilities, it lacks the full context necessary for nuanced business decisions. The most valuable work happens when people with diverse perspectives convene to solve problems, leveraging a collective understanding that AI cannot access. Technology should augment this, not replace it.
To build coordinated AI agent systems, firms must first extract siloed operational knowledge. This involves not just digitizing documents but systematically observing employee actions like browser clicks and phone calls to capture unwritten processes, turning this tacit knowledge into usable context for AI.
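Sketching the capture side with invented event shapes: raw observed actions are sessionized into candidate process traces that can later be reviewed, named, and handed to agents as context.

```python
from dataclasses import dataclass
from itertools import groupby

@dataclass
class WorkEvent:
    employee: str
    kind: str    # "click", "page_view", "call", ...
    detail: str  # URL, button id, call summary
    ts: float

def sessions(events, gap_s=300):
    """Group a stream of observed actions into sessions: candidate unwritten processes.

    A new session starts whenever an employee pauses longer than gap_s seconds.
    """
    events = sorted(events, key=lambda e: (e.employee, e.ts))
    out = []
    for _, stream in groupby(events, key=lambda e: e.employee):
        current, last_ts = [], None
        for e in stream:
            if last_ts is not None and e.ts - last_ts > gap_s:
                out.append(current)
                current = []
            current.append(e)
            last_ts = e.ts
        if current:
            out.append(current)
    return out
```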
AI tools like LLMs thrive on large, structured datasets. In manufacturing, critical information is often unstructured 'tribal knowledge' in workers' heads. Dirac’s strategy is to first build a software layer that captures and organizes this human expertise, creating the necessary context for AI to then analyze and add value.
Treat accountability as an engineering problem. Implement a system that logs every significant AI action, decision path, and triggering input. This creates an auditable, attributable record, ensuring that in the event of an incident, the 'why' can be traced without ambiguity, much like a flight recorder after a crash.
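A minimal flight-recorder sketch (not a specific product's API): every action is logged with its decision path and triggering inputs, and entries are hash-chained so the record is tamper-evident.

```python
import hashlib, json, time

class FlightRecorder:
    """Append-only log of agent actions; each entry is chained to the previous
    one by hash, so after-the-fact tampering is detectable."""

    def __init__(self):
        self.entries = []
        self._prev = "genesis"

    def record(self, action: str, decision_path: list, inputs: dict) -> dict:
        entry = {
            "ts": time.time(),
            "action": action,
            "decision_path": decision_path,  # the 'why': each step that led here
            "inputs": inputs,                # the triggering input, verbatim
            "prev": self._prev,
        }
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._prev = entry["hash"]
        self.entries.append(entry)
        return entry

rec = FlightRecorder()
rec.record(
    action="refund_issued",
    decision_path=["policy:refunds-v3 matched", "amount under auto-approve limit"],
    inputs={"ticket": "ZD-1042", "amount": 49.00},
)
```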
The ultimate value of AI will be its ability to act as a long-term corporate memory. By feeding it historical data—ICPs, past experiments, key decisions, and customer feedback—companies can create a queryable "brain" that dramatically accelerates onboarding and institutional knowledge transfer.
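A deliberately naive sketch of that queryable brain, using keyword overlap where a real system would use embeddings; the record types mirror the categories named above (ICPs, experiments, decisions, feedback).

```python
from dataclasses import dataclass

@dataclass
class MemoryRecord:
    kind: str   # "icp", "experiment", "decision", "feedback"
    text: str
    year: int

CORPUS = [
    MemoryRecord("experiment", "2022 pricing test: annual plans lifted retention 12%", 2022),
    MemoryRecord("decision",   "chose Postgres over Mongo for billing; audit requirements", 2021),
]

def ask(question: str, corpus=CORPUS, k=3):
    """Toy retrieval: rank records by shared words with the question.

    A production corporate memory would use embeddings and metadata filters,
    but the shape is the same: question in, ranked institutional history out.
    """
    q = set(question.lower().split())
    scored = sorted(corpus, key=lambda r: -len(q & set(r.text.lower().split())))
    return [r.text for r in scored[:k]]

print(ask("why did we choose Postgres for billing?"))
```

A new hire asking that question gets the decision and its rationale in seconds, which is exactly the onboarding acceleration the takeaway describes.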