Relying solely on semantic clustering (RAG) is inaccurate for complex domains like code. Blitzy combines a deep, relational knowledge graph with semantic understanding to accurately retrieve context, using the semantic match as a map to the source of truth rather than the truth itself.
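The "map to the source of truth" idea can be sketched in a few lines: the semantic match returns only a pointer (a symbol id), and the relational graph keyed by that id supplies the authoritative context. All names and data here are hypothetical, and the word-overlap matcher is a stand-in for a real embedding search.

```python
chunks = {  # vector index stand-in: chunk id -> lossy text description
    "fn:parse_config": "reads the yaml config and returns a dict",
    "fn:load_db": "opens the database connection",
}
graph = {   # knowledge graph: authoritative relations per symbol
    "fn:parse_config": {"file": "config.py", "lines": (10, 42),
                        "callers": ["fn:main"], "calls": ["fn:read_yaml"]},
}

def semantic_match(query):
    # stand-in for embedding search: naive word overlap over chunk text
    q = set(query.lower().split())
    return max(chunks, key=lambda c: len(q & set(chunks[c].split())))

hit = semantic_match("where is the yaml config parsed")
context = graph[hit]  # the graph, not the fuzzy chunk text, is the truth
```

The fuzzy match is allowed to be imprecise: it only has to land near the right node, after which the graph's exact file, line range, and call relations are what get injected into the model's context.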

Related Insights

For enterprise AI, standard RAG struggles with granular permissions and relationship-based questions. Atlassian's "teamwork graph" maps entities like teams, tasks, and documents. This allows it to answer complex queries like "What did my team do last week?"—a task where simple vector search would fail by just returning top documents.
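A relationship-based query like "What did my team do last week?" is a graph traversal, not a similarity lookup. A minimal sketch with a hypothetical in-memory graph (node ids, entity types, and dates are all invented for illustration):

```python
from datetime import date

# Hypothetical graph: nodes keyed by id, with typed attributes as edges.
nodes = {
    "team:platform": {"type": "team"},
    "user:ana": {"type": "user", "team": "team:platform"},
    "task:101": {"type": "task", "title": "Ship retry logic",
                 "assignee": "user:ana", "completed": date(2024, 5, 8)},
    "task:102": {"type": "task", "title": "Fix flaky test",
                 "assignee": "user:ana", "completed": date(2024, 4, 1)},
}

def team_activity(team_id, since):
    """Traverse team -> members -> tasks, filtering by completion date."""
    members = [nid for nid, n in nodes.items() if n.get("team") == team_id]
    return [n["title"] for n in nodes.values()
            if n.get("type") == "task"
            and n.get("assignee") in members
            and n.get("completed", date.min) >= since]

recent = team_activity("team:platform", since=date(2024, 5, 1))
```

Top-k vector search over the same data would return the tasks whose text best matches the query words, with no notion of "my team" or "last week"; the traversal answers both by construction.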

The concept isn't about fitting a massive codebase into one context window. Instead, it's a sophisticated architecture using a deep relational knowledge graph to inject only the most relevant, line-level context for a specific task at the exact moment it's needed.

Embedding-based RAG for code search is falling out of favor because its arbitrary chunking often fails to capture full semantic context. Simpler, more direct approaches like agent-based search using tools like `grep` are proving more reliable and scalable for retrieving relevant code without the maintenance overhead of embeddings.
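The agent-side tool can be as simple as a grep over the repo at query time, with no index to build or keep fresh. A portable sketch (pure Python rather than shelling out to `grep`; file names and patterns are invented):

```python
import re
import tempfile
from pathlib import Path

def grep_repo(pattern, root, glob="*.py"):
    """grep-style search: return (path, line_no, line) for each match."""
    rx = re.compile(pattern)
    hits = []
    for path in sorted(Path(root).rglob(glob)):
        for i, line in enumerate(path.read_text().splitlines(), 1):
            if rx.search(line):
                hits.append((str(path), i, line.strip()))
    return hits

# Demo against a throwaway two-file "repo".
with tempfile.TemporaryDirectory() as root:
    Path(root, "auth.py").write_text("def login(user):\n    return token(user)\n")
    Path(root, "db.py").write_text("def query(sql):\n    pass\n")
    hits = grep_repo(r"def login", root)
```

Because matches are whole lines in real files, the agent gets exact locations instead of arbitrary chunks, and nothing goes stale when the code changes.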

While vector search is a common approach for RAG, Anthropic found it difficult to maintain and a security risk for enterprise codebases. They switched to "agentic search," where the AI model actively uses tools like `grep` or `find` to locate code, achieving similar accuracy with a cleaner deployment.

Standard Retrieval-Augmented Generation (RAG) systems often fail because they treat complex documents as pure text, missing crucial context within charts, tables, and layouts. The solution is to use vision language models for embedding and re-ranking, making visual and structural elements directly retrievable and improving accuracy.

Retrieval Augmented Generation (RAG) uses vector search to find relevant documents based on a user's query. This factual context is then fed to a Large Language Model (LLM), forcing it to generate responses based on provided data, which significantly reduces the risk of "hallucinations."
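The retrieve-then-generate loop fits in a few lines. This sketch uses a toy bag-of-words embedding and cosine similarity in place of a learned embedding model, and stops at prompt construction rather than calling a real LLM; the documents are invented:

```python
import math
from collections import Counter

docs = {
    "policy": "Refunds are issued within 14 days of purchase.",
    "shipping": "Orders ship within 2 business days.",
}

def embed(text):
    # Toy bag-of-words vector; a real system uses a learned embedding model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(docs[d])), reverse=True)[:k]

top = retrieve("when are refunds issued")[0]
prompt = (f"Answer using only this context:\n{docs[top]}\n\n"
          f"Q: when are refunds issued")
```

The "forcing" happens in the prompt: the model is instructed to answer only from the retrieved passage, so its output is anchored to supplied facts rather than its parametric memory.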

AI agents are simply 'context and actions.' To prevent hallucination and failure, they must be grounded in rich context. This is best provided by a knowledge graph built from the unique data and metadata collected across a platform, creating a powerful, defensible moat.

Classic RAG involves a single data retrieval step. Its evolution, "agentic retrieval," allows an AI to perform a series of conditional fetches from different sources (APIs, databases). This enables the handling of complex queries where each step informs the next, mimicking a research process.
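The difference from single-shot RAG is that each fetch is conditional on the last. A minimal sketch with stub functions standing in for a directory API, a database, and a logistics API (all names and data are hypothetical):

```python
def fetch_user(name):            # stand-in for a directory API call
    return {"id": 7, "name": name}

def fetch_orders(user_id):       # stand-in for a database query
    return [{"order": "A-19", "status": "delayed"}] if user_id == 7 else []

def fetch_shipment(order_id):    # stand-in for a logistics API call
    return {"order": order_id, "eta": "2 days"}

def answer(customer_name):
    user = fetch_user(customer_name)      # step 1: resolve the entity
    orders = fetch_orders(user["id"])     # step 2: conditional on step 1
    delayed = [o for o in orders if o["status"] == "delayed"]
    # step 3 runs only for orders that step 2 flagged as delayed
    return [fetch_shipment(o["order"])["eta"] for o in delayed]

etas = answer("ana")
```

A single retrieval step could not answer "when will my delayed orders arrive": the shipment lookup needs order ids that only exist after the user and order fetches have run.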

Static analysis isn't enough to understand a complex application. Blitzy's onboarding involves spinning up and running a parallel instance of the client's app. This process uncovers hidden runtime dependencies and behaviors, creating a far more accurate knowledge graph than code analysis alone could provide.

While complex RAG pipelines with vector stores are popular, leading code agents like Anthropic's Claude Code demonstrate that simple "agentic retrieval" using basic file tools can be superior. Providing an agent with a manifest file (like `llms.txt`) and a tool to fetch files can outperform pre-indexed semantic search.
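The manifest-plus-fetch pattern needs only two pieces: a plain-text listing of paths with descriptions, and a selection step (here, naive word overlap standing in for the agent's judgment). The manifest entries and file contents below are invented for illustration:

```python
import tempfile
from pathlib import Path

MANIFEST = """\
docs/auth.md: login and session tokens
docs/billing.md: invoices refunds payments
"""

def pick_path(manifest, query):
    """Naive selection: the entry whose description shares the most query words."""
    q = set(query.lower().split())
    best, score = None, -1
    for line in manifest.strip().splitlines():
        path, desc = line.split(": ", 1)
        overlap = len(q & set(desc.split()))
        if overlap > score:
            best, score = path, overlap
    return best

# Demo: the agent reads the manifest, picks a path, and fetches the file.
with tempfile.TemporaryDirectory() as root:
    (Path(root) / "docs").mkdir()
    (Path(root) / "docs" / "billing.md").write_text("Refunds take 14 days.")
    chosen = pick_path(MANIFEST, "how do refunds work")
    text = (Path(root) / chosen).read_text()
```

In a real agent the selection is done by the model itself reading the manifest, which is the point: there is no embedding index to build, refresh, or secure, just current files fetched on demand.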

Semantic Search Alone Is Insufficient; Pair It with Relational Knowledge Graphs for True Context | RiffOn