
An unmaintained Agent OS has a shelf life of about eight weeks before context files are outdated and skills become irrelevant. To ensure compounding value, you must periodically conduct retrospectives with your agents, auditing which parts of the system are underutilized or stale and need updating.

Related Insights

Avoid creating a single, massive context document that quickly becomes stale. Instead, maintain 3-5 small, focused, and dated files on specific topics (e.g., team, product). Treat context as an ongoing practice of curation: whenever you re-explain something to the AI, it should be added to a context file.
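As a sketch of this curation habit, here is a minimal helper that appends a dated note to a small, topic-specific context file. The layout (`context/team.md`, `context/product.md`) and function name are illustrative assumptions, not a prescribed tool:

```python
from datetime import date
from pathlib import Path

CONTEXT_DIR = Path("context")  # assumed layout: context/team.md, context/product.md, ...

def add_context_note(topic: str, note: str) -> Path:
    """Append a dated note to a small, per-topic context file.

    Keeping files small, focused, and dated makes it obvious which
    entries are stale and which file a new fact belongs in.
    """
    CONTEXT_DIR.mkdir(exist_ok=True)
    path = CONTEXT_DIR / f"{topic}.md"
    stamp = date.today().isoformat()
    with path.open("a", encoding="utf-8") as f:
        f.write(f"- ({stamp}) {note}\n")
    return path

# Whenever you catch yourself re-explaining something to the AI,
# capture it instead:
add_context_note("team", "Design reviews happen Thursdays; Priya owns approvals.")
```

The dated, per-topic format also makes later retrospectives easier: sorting by date surfaces which notes have gone stale.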

Enable agents to improve on their own by scheduling a recurring 'self-review' process. The agent analyzes the results of its past work (e.g., social media engagement on posts it drafted), identifies what went wrong, and automatically updates its own instructions to enhance future performance.
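A minimal sketch of such a self-review loop is below. In a real system an LLM would review the underperforming posts and write the lesson; here a simple engagement threshold stands in for it, and the instructions file name and data shape are assumptions:

```python
from pathlib import Path
from statistics import mean

INSTRUCTIONS = Path("agent_instructions.md")  # assumed: the agent's own prompt file

def self_review(posts):
    """Compare engagement across past posts and distill a lesson.

    A threshold heuristic stands in for the LLM review step: posts
    earning under half the average engagement are flagged.
    """
    if len(posts) < 2:
        return None
    avg = mean(p["likes"] for p in posts)
    weak = [p for p in posts if p["likes"] < 0.5 * avg]
    if not weak:
        return None
    return (f"{len(weak)} underperforming post(s); avoid patterns "
            f"like: '{weak[0]['text']}'")

def update_instructions(lesson):
    """Append the distilled lesson to the agent's own instructions."""
    with INSTRUCTIONS.open("a", encoding="utf-8") as f:
        f.write(f"\n- Learned: {lesson}\n")

# Illustrative engagement data for posts the agent drafted:
posts = [
    {"text": "Thread: 10 lessons from scaling our agent", "likes": 120},
    {"text": "We shipped a thing", "likes": 4},
]
lesson = self_review(posts)
if lesson:
    update_instructions(lesson)
```

Scheduling this as a recurring job closes the loop: each run folds new evidence back into the instructions that shape the next round of drafts.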

AI is not a 'set and forget' solution. An agent's effectiveness directly correlates with the amount of time humans invest in training, iteration, and providing fresh context. Performance will ebb and flow with human oversight, with the best results coming from consistent, hands-on management.

For decades, keeping documentation updated was a low-priority task. Now, with AI support agents relying on this content as their source of truth, outdated information leads to immediate, tangible failures. This creates the urgent business case to finally solve knowledge decay.

Task your AI agent with its own maintenance by creating a recurring job for it to analyze its own files, skills, and schedules. This allows the AI to proactively identify inefficiencies, suggest optimizations, and find bugs, such as a faulty cron scheduler.
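One concrete check such a maintenance job could run is validating its own cron expressions, since the insight calls out a faulty scheduler as the kind of bug to catch. This is a sketch of one such check under standard 5-field cron syntax, not a full maintenance suite:

```python
def check_cron(expr):
    """Flag obviously malformed 5-field cron expressions: wrong field
    count, or out-of-range minute/hour values. A fuller maintenance
    job would also scan skills and context files for staleness."""
    problems = []
    fields = expr.split()
    if len(fields) != 5:
        problems.append(f"expected 5 fields, got {len(fields)}")
        return problems
    minute, hour = fields[0], fields[1]
    for name, value, hi in (("minute", minute, 59), ("hour", hour, 23)):
        if value != "*" and value.isdigit() and not (0 <= int(value) <= hi):
            problems.append(f"{name} {value} out of range 0-{hi}")
    return problems

# The kind of scheduling bug the agent might surface on its own:
assert check_cron("0 25 * * *") == ["hour 25 out of range 0-23"]
assert check_cron("*/15 3 * *") == ["expected 5 fields, got 4"]
assert check_cron("30 6 * * 1") == []
```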

Long-running AI agent conversations degrade in quality as the context window fills. The best engineers combat this with "intentional compaction": they direct the agent to summarize its progress into a clean markdown file, then start a fresh session using that summary as the new, clean input. This is like rebooting the agent's short-term memory.
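A sketch of the compaction step, assuming a generic list-of-messages chat format; in practice the summary would come from asking the agent itself to write it ("summarize progress, key decisions, and open TODOs as markdown"):

```python
from pathlib import Path

def compact_session(messages, summary, path="handoff.md"):
    """'Intentional compaction': persist a distilled summary of a long
    session to a markdown handoff file, then seed a fresh session from
    it instead of the full transcript, so the new context starts clean."""
    handoff = Path(path)
    handoff.write_text(
        "# Session handoff\n\n"
        f"Compacted from {len(messages)} messages.\n\n{summary}\n",
        encoding="utf-8",
    )
    # The new session carries only the distilled state, not the
    # accumulated transcript that was degrading quality.
    return [{"role": "user", "content": handoff.read_text(encoding="utf-8")}]

# Illustrative: a 40-message session collapsed into one seed message.
seed = compact_session(
    [{"role": "user", "content": "…"}] * 40,
    "Refactored the parser; open TODO: add tests for edge cases.",
)
```

Writing the handoff to disk (rather than only into the new prompt) also leaves an audit trail of what each session concluded.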

Unlike traditional, long-lived infrastructure, AI skills have a short half-life due to rapid model updates and shifting contexts. Treat them as iterative, ephemeral assets and re-evaluate them monthly to keep them effective.

Treat custom AI agents like junior employees, not finished software. They require daily check-ins to monitor for bugs, performance issues, and regressions. There is no "set and forget"—a human must actively manage the agent every day for it to succeed.

Building a comprehensive context library can be daunting. A simple and effective hack is to end each work session by asking the AI, "What did you learn today that we should document?" The AI can then self-generate the necessary context files, iteratively building its own knowledge base.
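The filing side of this hack can be sketched as a small router that sorts the AI's answer into per-topic context files. The `topic: fact` reply format and the `context/` directory are assumptions for illustration:

```python
from pathlib import Path

CLOSING_PROMPT = "What did you learn today that we should document?"

def file_learnings(reply, context_dir="context"):
    """Route each 'topic: fact' line of the agent's end-of-session
    answer into the matching context file, so the knowledge base
    builds itself a little each day."""
    out_dir = Path(context_dir)
    out_dir.mkdir(exist_ok=True)
    written = []
    for line in reply.splitlines():
        if ":" not in line:
            continue
        topic, fact = line.split(":", 1)
        path = out_dir / f"{topic.strip().lower()}.md"
        with path.open("a", encoding="utf-8") as f:
            f.write(f"- {fact.strip()}\n")
        written.append(path)
    return written

# Illustrative agent reply to CLOSING_PROMPT:
reply = "team: Priya approves design changes\nproduct: Pricing tiers renamed in Q3"
paths = file_learnings(reply)
```

Because each run appends rather than overwrites, the context library accumulates session by session instead of being a one-time authoring project.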

The underlying infrastructure for AI agents ('harnesses') becomes obsolete roughly every six months due to rapid advances in AI models. At Notion, this means completely rewriting the harness multiple times a year, demanding a culture comfortable with constantly rebuilding core systems and discarding previous assumptions.