We scan new podcasts and send you the top 5 insights daily.
Instead of a single, monolithic "About Me" file, structure personal context into modular files (e.g., roles, projects, team). This design allows you to provide an AI agent with only the specific information it needs for a given task, which enhances efficiency, relevance, and privacy.
To elevate AI performance, create a structured folder system it can reference. This 'operating system' should include folders for persistent knowledge (e.g., `/knowledge`, `/people`) and active work (`/projects`). Providing this rich, organized context allows the AI to generate highly relevant, non-generic outputs.
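As a sketch, such a folder 'operating system' might look like this (all file and folder names are illustrative):

```
context/
├── knowledge/            # persistent reference material
│   └── industry-notes.md
├── people/               # profiles of colleagues and clients
│   └── team.md
└── projects/             # active work, one folder per project
    └── q3-launch/
        └── brief.md
```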
Structure AI context into three layers: a short global file for universal preferences, project-specific files for domain rules, and an indexed library of modular context files (e.g., business details) that the AI only loads when relevant, preventing context window bloat.
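A minimal index file for the third layer might look like this (filenames and descriptions are hypothetical):

```markdown
<!-- index.md — always loaded; each module below is loaded only when relevant -->
- `business-details.md` — company model and pricing; load for sales or strategy tasks
- `writing-style.md` — tone and formatting rules; load for drafting tasks
- `tech-stack.md` — tools and integrations; load for technical tasks
```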
Instead of one generalist AI assistant, create multiple specialized agents, each with a unique persona (e.g., a creative teacher) defined in a "soul" file. Partition their access to specific data "vaults" (like separate Obsidian folders). This specialization improves output quality and maintains logical, secure boundaries between different life domains.
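A "soul" file for one such agent can be very short (the persona and vault name below are made up for illustration):

```markdown
# soul.md — "Maya", a creative teacher
You explain concepts through analogies and playful examples.
You may read only the `learning/` vault; never reference material from other vaults.
```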
Instead of one large context file, create a library of small, specific files (e.g., for different products or writing styles). An index file then guides the LLM to load only the relevant documents for a given task, improving accuracy, reducing noise, and allowing for 'lazy' prompting.
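The index-then-load pattern can be sketched in a few lines of Python. The index contents and keyword matching here are illustrative assumptions, not any specific tool's API:

```python
from pathlib import Path

# Hypothetical index mapping each context file to the task keywords it serves.
INDEX = {
    "product-a.md": {"pricing", "features", "product"},
    "writing-style.md": {"draft", "email", "post"},
    "team.md": {"hiring", "people", "org"},
}

def select_context(task: str, index: dict[str, set[str]]) -> list[str]:
    """Return only the context files whose keywords appear in the task."""
    words = set(task.lower().split())
    return sorted(name for name, keys in index.items() if words & keys)

def build_prompt(task: str, context_dir: Path) -> str:
    """Prepend just the relevant context documents to the task prompt."""
    parts = []
    for name in select_context(task, INDEX):
        path = context_dir / name
        if path.exists():  # skip index entries with no file on disk
            parts.append(f"## {name}\n{path.read_text()}")
    parts.append(f"## Task\n{task}")
    return "\n\n".join(parts)
```

A request like "draft a product email" would pull in only the product and writing-style files, leaving the rest of the library out of the context window.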
To create detailed context files about your business or personal preferences, instruct your AI to act as an interviewer. By answering its questions, you provide the raw material for the AI to then synthesize and structure into a permanent, reusable context file without writing it yourself.
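One way to phrase that interviewer instruction (the wording is illustrative):

```markdown
Act as an interviewer helping me build a reusable context file about my business.
Ask one question at a time, starting broad (what we sell, to whom) and then going
deeper (pricing, tone, constraints). After about ten answers, synthesize everything
into a well-structured `business-context.md` that I can save and reuse.
```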
With AI agents, the key to great results is not crafting complex prompts but 'context engineering': loading your agent with rich information via files like `agents.md`. This allows simple commands like 'write a cold email' to yield highly customized and effective outputs.
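A hypothetical `agents.md` along these lines (every detail below is invented for illustration):

```markdown
# agents.md
## Who I am
Founder of a small B2B SaaS company selling scheduling software.
## Voice
Direct and friendly; no jargon; short paragraphs.
## Cold-email rules
Under 120 words; one specific observation about the recipient; a single clear ask.
```

With this in place, 'write a cold email' already carries the voice, constraints, and audience.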
To keep your AI agent efficient, differentiate between global and project-level skills and context files. General-purpose tools, like a text truncation skill, should be global. Specific processes, like a referral template, should be kept at the project level to avoid cluttering every interaction.
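On disk, the split might look like this (paths and skill names are illustrative):

```
~/.agent/skills/              # global: available in every session
└── truncate-text.md          # general-purpose text truncation skill
projects/referrals/skills/    # project-level: loaded only inside this project
└── referral-template.md      # referral email template and process
```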
Most users re-explain their role and situation in every new AI conversation. A more advanced approach is to build a dedicated professional context document and a system for capturing prompts and notes. This turns AI from a stateless tool into a stateful partner that understands your specific needs.
Frame your personal and professional information as a structured set of machine-readable files. This "operating manual" allows AI agents to understand your roles, goals, and constraints without constant re-explanation, just as a developer uses API docs to interact with software.
Instead of explicitly telling an AI agent how to organize its knowledge, simply provide the necessary context. A well-designed agent can figure out what information is important and create its own knowledge files, such as a 'user.md' for personal details or an 'identity.md' for its own persona.