By storing all tasks and notes in local, plain-text Markdown files, you can use an LLM as a powerful semantic search engine. Unlike keyword search, it can find information even if you misremember details, inferring your intent to locate the correct file across your entire knowledge base.
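A minimal sketch of this pattern in Python, assuming a flat folder of '.md' notes (the function name, wording, and layout are illustrative, not from the source): pack the corpus into one prompt and ask the model to name the matching file.

```python
from pathlib import Path

def build_search_prompt(notes_dir: str, query: str) -> str:
    """Pack every Markdown note into one prompt so the model can match the
    query semantically rather than by exact keywords."""
    parts = []
    for path in sorted(Path(notes_dir).glob("**/*.md")):
        parts.append(f"### FILE: {path.name}\n{path.read_text(encoding='utf-8')}")
    corpus = "\n\n".join(parts)
    return (
        "Below is my personal knowledge base. Tell me which file most likely "
        "contains what I am describing, even if my wording is imprecise.\n\n"
        f"{corpus}\n\nWhat I remember: {query}"
    )

# The returned prompt goes to any chat-capable LLM. For large vaults, send
# only file names and headings first, then fetch the files the model names.
```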

Related Insights

To prevent an AI agent from repeating mistakes across coding sessions, create 'agents.md' files in your codebase. These act as a persistent memory, providing context and instructions specific to a folder or the entire repo. The agent reads these files before working, allowing it to learn from past iterations and improve over time.
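A hypothetical example of what a folder-level 'agents.md' might contain (the specific rules are illustrative):

```markdown
# agents.md — api/ folder

- All endpoints live in api/routes/; register new ones in api/routes/__init__.py.
- Use the existing ApiError class for error responses; do not invent new exception types.
- Past mistake to avoid: tests must not hit the live database — use the fixtures in tests/fixtures/.
```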

To maximize an AI assistant's effectiveness, pair it with a persistent knowledge store like Obsidian. By feeding past research outputs back into Claude as Markdown files, you create a virtuous cycle of compounding knowledge: the AI can reference and build on previous conclusions when tackling new tasks.

Instead of one large context file, create a library of small, specific files (e.g., for different products or writing styles). An index file then guides the LLM to load only the relevant documents for a given task, improving accuracy, reducing noise, and allowing for 'lazy' prompting.
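A sketch of that two-step flow in Python (the directory layout and prompt wording are assumptions): the first prompt carries only the index, and a follow-up loads just the files the model asked for.

```python
from pathlib import Path

def index_prompt(context_dir: str, task: str) -> str:
    """Step 1: send only the index so the model can pick relevant files."""
    index = (Path(context_dir) / "index.md").read_text(encoding="utf-8")
    return (
        "Here is an index of my context files. Reply with only the filenames "
        f"you need for the task, then wait.\n\n{index}\n\nTask: {task}"
    )

def load_requested(context_dir: str, filenames: list[str]) -> str:
    """Step 2: load only what the model requested, keeping noise out of context."""
    return "\n\n".join(
        f"### {name}\n{(Path(context_dir) / name).read_text(encoding='utf-8')}"
        for name in filenames
    )
```

Because the index is small, the first prompt stays cheap, and irrelevant documents never enter the context window.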

Instead of using siloed note-taking apps, structure all your knowledge—code, writing, proposals, notes—into a single GitHub monorepo. This creates a unified, context-rich environment that any AI coding assistant can access. This approach avoids vendor lock-in and provides the AI with a comprehensive "second brain" to work from.
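One possible layout for such a monorepo (the folder names are illustrative):

```
second-brain/
├── code/          # projects and scripts
├── writing/       # drafts, essays, blog posts
├── proposals/     # client and internal proposals
├── notes/         # daily notes, research, meeting notes
└── agents.md      # repo-wide instructions for AI assistants
```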

Instead of a complex database, store content for personal AI tools as simple Markdown files within the code repository. This makes information, like research notes, easily renderable in a web UI and directly accessible by AI agents for queries, simplifying development and data management for N-of-1 applications.

To make company strategy more accessible, Zapier used Google's NotebookLM to create a central AI 'companion.' It ingests all strategy docs, meeting transcripts, and plans, allowing any employee to ask questions and understand how their work connects to the bigger picture.

Codex lacks formal custom commands, but you can achieve the same result by storing detailed prompts and templates in local files (e.g., meeting summaries, PRD structures). Reference these files with the '@' symbol in your prompts to apply consistent instructions and formatting to your tasks.
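For example, a stored template and a prompt that references it (the file name and contents are hypothetical):

```
# prompts/meeting-summary.md
Summarize the transcript into three sections:
1. Decisions made
2. Action items (owner and due date)
3. Open questions

# In Codex:
Summarize today's standup notes using the format in @prompts/meeting-summary.md
```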

Tools like Granola.ai offer a key advantage by recording locally without joining calls. This privacy, combined with the ability to search across all meeting transcripts for specific topics, turns meeting notes into a queryable knowledge base for the user, rather than just a simple record.

Building a comprehensive context library can be daunting. A simple and effective hack is to end each work session by asking the AI, "What did you learn today that we should document?" The AI can then self-generate the necessary context files, iteratively building its own knowledge base.
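A minimal sketch of persisting that end-of-session answer (the file name and layout are assumptions): each reply is appended to a dated log that the next session loads as context.

```python
from datetime import date
from pathlib import Path

SESSION_PROMPT = "What did you learn today that we should document?"

def append_learnings(context_dir: str, ai_reply: str) -> Path:
    """Append the model's end-of-session summary to a running log that
    future sessions can read back in as context."""
    log = Path(context_dir) / "learnings.md"
    with log.open("a", encoding="utf-8") as f:
        f.write(f"\n## {date.today().isoformat()}\n{ai_reply}\n")
    return log
```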

Unlike general-purpose LLMs, Google's NotebookLM answers queries exclusively from your uploaded source materials (docs, transcripts, videos). Grounding answers in your own sources sharply reduces hallucinations and lets marketing teams build a reliable, searchable knowledge base for onboarding, product launches, and content strategy.
