To make his personal AI development manageable, Steve Newman structures his work as a suite of microservices. Each of his 15+ apps is its own project with a separate GitHub repo and database. This modular approach keeps the context window for the AI coding agent small and focused, which he believes is crucial for its effectiveness.

Related Insights

Human developers may prefer longer files, but AI coding assistants process code in smaller chunks. App developer Terry Lynn intentionally keeps his files small (under 400 lines) to reduce the AI's context window usage, prevent it from getting lost, and improve the speed and accuracy of its code generation.
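A file-size budget like Lynn's is easy to enforce mechanically. The sketch below is a minimal, hypothetical checker (the 400-line threshold comes from the text; the function name and `.py`-only scan are assumptions) that flags files worth splitting before they start eating the assistant's context window.

```python
from pathlib import Path

def oversized_files(root: Path, max_lines: int = 400) -> list[Path]:
    """Flag source files exceeding the line budget so they can be
    split before they bloat the AI assistant's context window."""
    flagged = []
    for path in root.rglob("*.py"):  # scanning .py files is an assumption
        if len(path.read_text().splitlines()) > max_lines:
            flagged.append(path)
    return flagged
```

Run it from a pre-commit hook or CI step to keep the limit from drifting.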

Structure AI context into three layers: a short global file for universal preferences, project-specific files for domain rules, and an indexed library of modular context files (e.g., business details) that the AI only loads when relevant, preventing context window bloat.
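The three-layer scheme can be sketched as a small loader. Everything here is illustrative: the file names (`global.md`, `project.md`), the keyword index, and the substring match are assumptions standing in for whatever indexing the real setup uses; the point is that layers 1 and 2 always load while layer 3 loads lazily.

```python
from pathlib import Path

def build_context(task: str, project_dir: Path, library_dir: Path) -> str:
    """Assemble a three-layer context: global preferences, project
    rules, and only the library files whose topics match the task."""
    parts = []

    # Layer 1: short global file with universal preferences (always loaded).
    parts.append((project_dir / "global.md").read_text())

    # Layer 2: project-specific rules for this domain (always loaded).
    parts.append((project_dir / "project.md").read_text())

    # Layer 3: indexed library of modular files, loaded only when relevant.
    # The topic-to-file index and the substring match are placeholder logic.
    index = {"billing": "business-details.md", "branding": "brand-voice.md"}
    for topic, filename in index.items():
        if topic in task.lower():
            parts.append((library_dir / filename).read_text())

    return "\n\n".join(parts)
```

Only the matching layer-3 file is pulled in, so unrelated business detail never occupies the context window.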

Instead of a single, monolithic "About Me" file, structure personal context into modular files (e.g., roles, projects, team). This design allows you to provide an AI agent with only the specific information it needs for a given task, which enhances efficiency, relevance, and privacy.

The path to robust AI applications isn't a single, all-powerful model. It's a system of specialized "sub-agents," each handling a narrow task like context retrieval or debugging. This architecture allows for using smaller, faster, fine-tuned models for each task, improving overall system performance and efficiency.
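A minimal sketch of that routing layer, with placeholder functions standing in for the specialized models behind each sub-agent (all names here are hypothetical, not from the source):

```python
from typing import Callable

# Each function stands in for a narrow, possibly fine-tuned model.
def retrieve_context(query: str) -> str:
    return f"[retriever] docs relevant to: {query}"

def debug_code(query: str) -> str:
    return f"[debugger] likely fault in: {query}"

SUB_AGENTS: dict[str, Callable[[str], str]] = {
    "retrieve": retrieve_context,
    "debug": debug_code,
}

def orchestrate(task_type: str, payload: str) -> str:
    """Route each narrow task to its specialized sub-agent instead of
    sending everything to one all-purpose model."""
    agent = SUB_AGENTS.get(task_type)
    if agent is None:
        raise ValueError(f"no sub-agent registered for {task_type!r}")
    return agent(payload)
```

Because each entry is independent, a sub-agent can be swapped for a smaller or faster model without touching the rest of the system.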

Instead of using siloed note-taking apps, structure all your knowledge—code, writing, proposals, notes—into a single GitHub monorepo. This creates a unified, context-rich environment that any AI coding assistant can access. This approach avoids vendor lock-in and provides the AI with a comprehensive "second brain" to work from.

A single AI agent struggles with diverse tasks due to context window limitations, similar to how a human gets overwhelmed. The solution is to create a team of specialized agents, each focused on a specific domain (e.g., work, family, sales) to maintain performance and focus.

The most powerful AI systems consist of specialized agents with distinct roles (e.g., individual coaching, corporate strategy, knowledge base) that interact. This modular approach, exemplified by the Holmes, Mycroft, and 221B agents, creates a more robust and scalable solution than a single, all-knowing agent.

Instead of building monolithic agents, create modular sub-workflows that function as reusable "tools" (e.g., an "image-to-video" tool). These can be plugged into any number of different agents. This software engineering principle of modularity dramatically speeds up development and increases scalability across your automation ecosystem.
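The pattern reduces to a tool registry shared across agents. This is a toy sketch: the `image_to_video` stub and the `Agent` class are invented for illustration, not an API from the source.

```python
# A sub-workflow packaged as a reusable tool (stub for illustration).
def image_to_video(image_path: str) -> str:
    """Stand-in for a multi-step 'image-to-video' sub-workflow."""
    return image_path.rsplit(".", 1)[0] + ".mp4"

class Agent:
    """Minimal agent composed from a shared tool registry."""
    def __init__(self, name: str, tools: dict):
        self.name = name
        self.tools = tools

    def use(self, tool_name: str, arg: str) -> str:
        return self.tools[tool_name](arg)

# The same tool plugs into two otherwise unrelated agents.
shared_tools = {"image_to_video": image_to_video}
marketing_agent = Agent("marketing", shared_tools)
support_agent = Agent("support", shared_tools)
```

Fixing or upgrading the sub-workflow in one place immediately benefits every agent that registers it.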

Run separate instances of your AI assistant from different project directories. Each directory contains a configuration file providing specific context, rules, and style guides for that domain (e.g., writing vs. task management), creating specialized, expert assistants.
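One way to sketch the per-directory setup: each assistant instance reads a config file from the directory it was launched in. The file name `assistant.json` and its fields are assumptions; real tools use their own conventions.

```python
import json
from pathlib import Path

def load_assistant_config(cwd: Path) -> dict:
    """Read the per-directory config so an assistant launched from this
    project directory picks up that domain's rules and style guide.
    The 'assistant.json' file name is an assumption."""
    config_file = cwd / "assistant.json"
    if config_file.exists():
        return json.loads(config_file.read_text())
    # Fall back to neutral defaults when no domain config is present.
    return {"rules": [], "style": "default"}
```

Launching the same assistant from `~/writing` versus `~/tasks` then yields two differently specialized instances with no extra prompting.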

Build a repository of small, functional tools and research projects. This "hoard" serves as a powerful, personalized context for AI agents. You can direct them to consult and combine these past solutions to tackle new, complex problems, effectively weaponizing your accumulated experience.