While vector search is a common approach for RAG, Anthropic found it difficult to maintain and a security risk for enterprise codebases. They switched to "agentic search," where the AI model actively uses tools like grep or find to locate code, achieving similar accuracy with a cleaner deployment.
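A minimal sketch of the pattern in Python: the model is handed a search tool backed by plain `grep`, and an agent loop (not shown) executes whatever searches the model requests. The function name and truncation limit are illustrative, not Anthropic's actual implementation.

```python
import subprocess

# Hypothetical tool the model can call — names and limits are illustrative.
def grep_tool(pattern: str, path: str = ".") -> str:
    """Recursively grep the codebase; return file:line matches for the model to read."""
    result = subprocess.run(
        ["grep", "-rn", pattern, path],
        capture_output=True,
        text=True,
    )
    # Truncate so a noisy pattern cannot flood the context window.
    return result.stdout[:4000]

# The agent decides what to search for; nothing is chunked, embedded,
# or indexed ahead of time.
print(grep_tool("def authenticate"))
```

Because nothing is embedded or indexed, there is no pipeline to keep in sync with the repository and no copy of the code sitting in an external vector store.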

Related Insights

The power of tools like Claude Code comes from giving the AI access to fundamental command-line tools (e.g., `bash`, `grep`). This allows the AI to compose novel solutions and lets product teams define new features using simple English prompts rather than hard-coded logic.
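As a hedged illustration of that composition, the sketch below defines a single generic `bash` tool and then expresses a product behavior purely in prose. The schema layout follows the common JSON-schema tool convention and is not tied to any particular vendor's API.

```python
# One general-purpose primitive...
bash_tool = {
    "name": "bash",
    "description": "Run a shell command and return its output.",
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}

# ...and a "feature" defined entirely in English on top of it. Shipping a
# new behavior means editing this prompt, not writing new tool code.
system_prompt = (
    "You are a release assistant. When asked what changed recently, use the "
    "bash tool to run `git log --oneline -20` and summarize the commits by theme."
)
```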

Embedding-based RAG for code search is falling out of favor because its arbitrary chunking often fails to capture full semantic context. Simpler, more direct approaches like agent-based search using tools like `grep` are proving more reliable and scalable for retrieving relevant code without the maintenance overhead of embeddings.
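A toy Python example of the chunking failure mode: fixed-size splitting cuts a function mid-body, so neither fragment embeds as a semantically complete unit.

```python
source = '''def transfer(src, dst, amount):
    if src.balance < amount:
        raise ValueError("insufficient funds")
    src.balance -= amount
    dst.balance += amount
'''

CHUNK_SIZE = 80  # characters; real pipelines chunk by tokens, same failure applies
chunks = [source[i:i + CHUNK_SIZE] for i in range(0, len(source), CHUNK_SIZE)]
for n, chunk in enumerate(chunks):
    print(f"--- chunk {n} ---\n{chunk}")
# Chunk 0 ends mid-`raise`; chunk 1 starts mid-statement with no function
# signature, so neither chunk stands alone at retrieval time.
```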

The effectiveness of agentic AI in complex domains like IT Ops hinges on "context engineering." This involves strategically selecting the right data (logs, metrics) to feed the LLM, preventing garbage-in-garbage-out, reducing costs, and avoiding hallucinations for precise, reliable answers.
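A hedged sketch of what that selection step can look like for IT Ops: filter the raw logs down to high-signal lines near the incident before anything reaches the prompt. Field names and thresholds are illustrative.

```python
from datetime import datetime, timedelta

def select_context(logs, incident_time, window_minutes=10, levels=("ERROR", "WARN")):
    """Keep only high-signal lines near the incident: less cost, less noise."""
    start = incident_time - timedelta(minutes=window_minutes)
    return [
        line for line in logs
        if line["level"] in levels and start <= line["ts"] <= incident_time
    ]

logs = [
    {"ts": datetime(2024, 5, 1, 12, 1), "level": "INFO",  "msg": "healthcheck ok"},
    {"ts": datetime(2024, 5, 1, 12, 7), "level": "ERROR", "msg": "db connection refused"},
]
context = select_context(logs, incident_time=datetime(2024, 5, 1, 12, 8))
# Only the ERROR line survives; that filtered slice is what goes in the prompt.
```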

According to IBM's AI Platform VP, Retrieval-Augmented Generation (RAG) was the killer app for enterprises in the first year after ChatGPT's release. RAG lets companies connect LLMs to their proprietary structured and unstructured data, unlocking immense value from existing knowledge bases, and it proved the most effective initial methodology.

Enterprises are trapped by decades of undocumented code. Rather than ripping and replacing, agentic AI can analyze and understand these complex systems. This enables redesign from the inside out and modernizes the core of the business, bridging the gap between business and IT.

To improve the quality and accuracy of an AI agent's output, spawn multiple sub-agents with competing or adversarial roles. For example, a code review agent finds bugs, while several "auditor" agents check for false positives, resulting in a more reliable final analysis.
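A sketch of that orchestration in Python, assuming a hypothetical `call_model` helper standing in for any LLM client:

```python
def call_model(prompt: str) -> str:
    raise NotImplementedError("wire up an LLM client here")

def review_with_auditors(diff: str, n_auditors: int = 3) -> str:
    # A reviewer agent proposes findings...
    findings = call_model(f"Review this diff and list concrete bugs:\n{diff}")
    # ...then independent auditor agents try to knock each finding down.
    votes = [
        call_model(
            "You are a skeptical auditor. For each claimed bug below, answer "
            f"KEEP if it is real or DROP if it is a false positive:\n{findings}"
        )
        for _ in range(n_auditors)
    ]
    # A final pass reconciles the reviewer against the auditors' verdicts.
    verdicts = "\n---\n".join(votes)
    return call_model(
        f"Findings:\n{findings}\n\nAuditor verdicts:\n{verdicts}\n"
        "Output only the findings a majority of auditors voted KEEP."
    )
```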

Teams often agonize over which vector database to use for their Retrieval-Augmented Generation (RAG) system. However, the most significant performance gains come from superior data preparation, such as optimizing chunking strategies, adding contextual metadata, and rewriting documents into a Q&A format.
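For instance, "contextual metadata" can be as simple as prepending document-level context to each chunk before it is embedded. The field names below are illustrative, not a specific library's schema.

```python
def prepare_chunk(chunk_text, doc_title, section, source_url):
    # Bake document-level context into the text that actually gets embedded,
    # so retrieval can match on it even when the raw chunk is ambiguous.
    header = f"Document: {doc_title}\nSection: {section}\nSource: {source_url}\n\n"
    return header + chunk_text

record = {
    "text": prepare_chunk(
        "Refunds are processed within 5 business days.",
        doc_title="Billing FAQ",
        section="Refund policy",
        source_url="https://example.com/billing-faq",
    ),
    # Rewriting into Q&A form is another prep step the same pipeline can add:
    "qa_variant": "Q: How long do refunds take?\nA: Within 5 business days.",
}
print(record["text"])
```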

The recent leap in AI coding isn't solely from a more powerful base model. The true innovation is a product layer that enables agent-like behavior: the system constantly evaluates and refines its own output, leading to far more complex and complete results than the LLM could achieve alone.
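In rough Python terms, that product layer is a generate-critique-refine loop around the base model. `call_model` is again a stand-in for any LLM client, not a description of any specific product's internals.

```python
def call_model(prompt: str) -> str:
    raise NotImplementedError("wire up an LLM client here")

def solve(task: str, max_rounds: int = 3) -> str:
    draft = call_model(f"Complete this task:\n{task}")
    for _ in range(max_rounds):
        # The system grades its own work...
        critique = call_model(
            f"Task:\n{task}\n\nDraft:\n{draft}\n"
            "List concrete defects, or reply DONE if there are none."
        )
        if critique.strip() == "DONE":
            break
        # ...and revises until the critique comes back clean.
        draft = call_model(
            f"Task:\n{task}\n\nDraft:\n{draft}\n\nDefects:\n{critique}\n"
            "Produce a corrected draft."
        )
    return draft
```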

While complex RAG pipelines with vector stores are popular, leading code agents like Anthropic's Claude Code demonstrate that simple "agentic retrieval" using basic file tools can be superior. Giving an agent a manifest file (such as `llms.txt`) and a tool to fetch files can outperform pre-indexed semantic search.
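A minimal sketch of that setup: the entire "retrieval system" is one file-reading tool plus a manifest the agent can browse.

```python
import pathlib

def read_file(path: str) -> str:
    """The entire 'retrieval system': fetch one file on demand."""
    return pathlib.Path(path).read_text()

# The manifest (one "path: description" per line) is shown to the model,
# which picks the files it wants and calls read_file on each — no index
# to build, refresh, or secure.
manifest = read_file("llms.txt")
```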

Historically, developer tools adapted to a company's codebase. The productivity gains from AI agents are so significant that the dynamic has flipped: for the first time, companies are proactively changing their code, logging, and tooling to be more 'agent-friendly.'
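One hypothetical example of such a change: replacing free-form log strings with structured JSON lines that an agent can reliably grep and parse.

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per line instead of a free-form string."""
    def format(self, record):
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "event": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("payments")
log.addHandler(handler)
log.setLevel(logging.INFO)

# An agent grepping for '"level": "ERROR"' gets exactly the lines it needs.
log.error("charge_declined reason=insufficient_funds card_country=DE")
```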
