Unlike simple chat models that provide answers to questions, AI agents are designed to autonomously achieve a goal. They operate in a continuous 'observe, think, act' loop to plan and execute tasks until a result is delivered, moving beyond the back-and-forth nature of chat.
Platforms for running AI agents are called 'agent harnesses.' Their primary function is to provide the infrastructure for the agent's 'observe, think, act' loop, connecting the LLM 'brain' to external tools and context files, similar to how a car's chassis supports its engine.
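The 'observe, think, act' loop the harness provides can be sketched in a few lines. This is a minimal toy, not any platform's API: `run_agent`, `fake_think`, and the calculator tool are invented names, and the stubbed "think" function stands in for a real LLM call.

```python
# Minimal sketch of the 'observe, think, act' loop (hypothetical names).
def run_agent(goal, think, tools, max_steps=10):
    """Loop until the think step declares the goal done or steps run out."""
    observation = f"goal: {goal}"
    history = []
    for _ in range(max_steps):
        # Think: the "brain" picks the next action from the latest observation.
        action = think(observation, history)
        if action["tool"] == "finish":
            return action["result"]
        # Act: run the chosen tool; its output is the next observation.
        observation = tools[action["tool"]](action["input"])
        history.append((action, observation))
    return None  # goal not reached within the step budget

# Stub "brain": use a calculator tool once, then finish with its result.
def fake_think(observation, history):
    if not history:
        return {"tool": "calculator", "input": "2+2"}
    return {"tool": "finish", "result": history[-1][1]}

# eval() is fine here only because this toy tool sees trusted input.
tools = {"calculator": lambda expr: str(eval(expr))}
print(run_agent("add 2 and 2", fake_think, tools))  # prints 4
```

The harness's job is everything around `think`: wiring tools in, feeding observations back, and stopping the loop.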
Instead of relying on platform-specific, cloud-based memory, the most robust approach is to structure an agent's knowledge in local markdown files. This creates a portable and compounding 'AI Operating System' that ensures your custom context and skills are never locked into a single vendor.
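One way to picture this 'AI Operating System' is as nothing more than a folder of plain files any harness can read. The layout and file names below are illustrative, not a standard:

```python
# Sketch of a portable, file-based knowledge layout (names are illustrative).
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp()) / "ai-os"
(root / "skills").mkdir(parents=True)
(root / "agents.md").write_text("# Context\nVoice, audience, business background\n")
(root / "memory.md").write_text("# Memory\n- corrections accumulate here\n")
(root / "skills" / "cold-email.skill").write_text("steps for a reusable procedure\n")

# Everything is plain text on disk, so nothing is vendor-locked.
sorted(p.name for p in root.rglob("*") if p.is_file())
```

Because the whole system is markdown on disk, moving to a new harness is a copy-paste, not a migration.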
Agents don't automatically remember preferences across sessions. To fix this, create a `memory.md` file and instruct the agent's system prompt to record corrections and new information there. This manually builds a persistent, compounding memory, making the agent smarter over time.
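The mechanics of the `memory.md` pattern are simple enough to sketch: append corrections to a local file, then inject its contents into each new session. The helper names here are illustrative, not a specific platform's API.

```python
# Sketch of the memory.md pattern (helper names are illustrative).
from pathlib import Path

MEMORY_FILE = Path("memory.md")

def remember(note: str) -> None:
    """Record a correction or preference so future sessions can load it."""
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")

def load_memory() -> str:
    """Return accumulated notes to inject into the agent's system prompt."""
    if MEMORY_FILE.exists():
        return "## Persistent memory\n" + MEMORY_FILE.read_text(encoding="utf-8")
    return ""

remember("User prefers concise, bulleted answers.")
```

Each correction you record compounds: the file only grows, so every session starts smarter than the last.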
With AI agents, great results come not from crafting complex prompts but from 'context engineering': loading your agent with rich background information via files like `agents.md`. This lets a simple command like 'write a cold email' yield highly customized, effective output.
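A sketch of why the simple command works: the context file, not the command, carries the detail. The `agents.md` contents below are invented for illustration.

```python
# Context engineering sketch: short command + rich file = customized prompt.
from pathlib import Path

Path("agents.md").write_text(
    "# Context\n"
    "- Company: Acme Consulting (B2B data pipelines)\n"   # invented example
    "- Voice: friendly, no jargon, two short paragraphs max\n"
    "- Cold emails always end with a 15-minute call offer\n",
    encoding="utf-8",
)

def build_prompt(command: str) -> str:
    """Prepend the context file so a terse command arrives fully specified."""
    context = Path("agents.md").read_text(encoding="utf-8")
    return f"{context}\nTask: {command}"

prompt = build_prompt("write a cold email")
```

The model sees the whole business context every time, so the five-word command behaves like a detailed brief.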
The Model Context Protocol (MCP) is a standardized layer that lets an LLM communicate with many software tools without a custom integration for each. It acts like a universal translator: the LLM keeps 'speaking English' while MCP handles each tool's unique API.
Treat AI 'skills' as Standard Operating Procedures (SOPs) for your agent. By packaging a multi-step process, like creating a custom proposal, into a `.skill` file, you can simply invoke its name in the future. This lets the agent execute the entire workflow without repeated instructions.
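Mechanically, a skill is just a named, stored procedure the agent can replay. The registry and step format below are a toy, not any harness's skill format:

```python
# Sketch of skills as SOPs: name a procedure once, replay it on demand.
skills = {}

def define_skill(name, steps):
    """Package a multi-step process under a single invokable name."""
    skills[name] = steps

def invoke(name):
    """Replay every step of the stored procedure; no re-instruction needed."""
    return [step() for step in skills[name]]

# Toy steps return descriptions; a real skill would call tools instead.
define_skill("custom-proposal", [
    lambda: "gather client notes",
    lambda: "fill proposal template",
    lambda: "export to PDF",
])
invoke("custom-proposal")
```

After the one-time definition, 'run custom-proposal' is the entire instruction.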
Go beyond single-use skills by chaining them together. For instance, a daily 'morning brief' skill can be designed to automatically trigger a 'podcast guest research' skill whenever a podcast is detected on your calendar. This creates complex, multi-layered automations that run without manual intervention.
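The chaining logic is a condition inside one skill that fires another. Calendar fields and skill names below are invented for illustration:

```python
# Sketch of skill chaining: one skill's run triggers another on a condition.
def podcast_guest_research(event):
    """The chained skill: runs only when the parent skill detects a podcast."""
    return f"research brief for {event['guest']}"

def morning_brief(calendar):
    briefs = ["summarize today's calendar"]
    for event in calendar:
        if event["type"] == "podcast":  # detection fires the chained skill
            briefs.append(podcast_guest_research(event))
    return briefs

calendar = [
    {"type": "meeting", "guest": None},
    {"type": "podcast", "guest": "Jane Doe"},
]
morning_brief(calendar)
```

On podcast-free days the chained skill never runs, so the automation stays quiet until it has work to do.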
To keep your AI agent efficient, differentiate between global and project-level skills and context files. General-purpose tools, like a text truncation skill, should be global. Specific processes, like a referral template, should be kept at the project level to avoid cluttering every interaction.
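The global/project split comes down to a lookup order: check the project folder first, fall back to the global one. The directory names and `.skill` extension below are illustrative:

```python
# Sketch of scoped skill resolution: project-level wins, global fills gaps.
import tempfile
from pathlib import Path

def resolve_skill(name, project_dir, global_dir):
    """Prefer a project-level skill file; fall back to the global one."""
    for base in (project_dir, global_dir):
        candidate = Path(base) / f"{name}.skill"
        if candidate.exists():
            return candidate
    return None

global_dir = Path(tempfile.mkdtemp())   # stands in for e.g. a home directory
project_dir = Path(tempfile.mkdtemp())  # stands in for the current project
(global_dir / "truncate-text.skill").write_text("general-purpose tool")
(project_dir / "referral-template.skill").write_text("project-specific process")

resolve_skill("truncate-text", project_dir, global_dir)      # found globally
resolve_skill("referral-template", project_dir, global_dir)  # found in project
```

Project-specific files never leak into other work, while genuinely general tools stay available everywhere.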
Instead of building skills from scratch, first complete a task through a back-and-forth conversation with your agent. Once you're satisfied with the result, instruct the agent to 'create a skill for what we just did.' It will then codify that successful process into a reusable file for future use.
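What 'create a skill for what we just did' amounts to is distilling the conversation down to its actions. The history format and file name below are invented to show the idea:

```python
# Sketch of codifying a finished conversation into a reusable skill.
import json
from pathlib import Path

# Invented transcript: interleaved chat and tool actions.
history = [
    {"role": "action", "tool": "fetch_invoices", "input": "last 30 days"},
    {"role": "chat", "text": "looks good, use the short format"},
    {"role": "action", "tool": "summarize", "input": "short format"},
]

def create_skill_from_history(name, history):
    """Keep only the actions; they become the replayable procedure."""
    steps = [h for h in history if h["role"] == "action"]
    Path(f"{name}.skill").write_text(json.dumps(steps, indent=2))
    return steps

create_skill_from_history("monthly-invoice-summary", history)
```

The back-and-forth chat is the drafting process; only the actions that produced the good result get saved, so the skill replays the outcome without the iteration.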
