The easiest way to teach Claude Code is to instruct it directly: "Don't make this mistake again; add this to `CLAUDE.md`." Since this file is always included in the prompt context, it acts as a permanent, evolving set of instructions and guardrails for the AI.
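For example, a guardrail captured this way might look like the following entry. Only the mechanism of an always-loaded `CLAUDE.md` comes from Claude Code itself; the rules below are illustrative:

```markdown
<!-- CLAUDE.md (project root) — loaded into every session's context -->
## Lessons learned
- Never edit `db/schema.rb` by hand; always generate a migration instead.
- Do not call external APIs inline in controllers; route that work through `app/jobs/`.
```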
A little-known feature in Claude Code is the `&` command. Typing it at the end of a prompt pushes the current session to the cloud, allowing you to seamlessly continue the interaction on the Claude mobile app, creating a powerful cross-device workflow.
Enhance pull requests by using Playwright to automatically screen-record a demonstration of the new feature. This video is then attached to the PR, giving code reviewers immediate visual context of the changes, far beyond what a static diff can show.
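A minimal sketch of the recording side, assuming a Node project with Playwright installed; the URL and UI steps are placeholders for whatever the new feature actually does:

```typescript
import { chromium } from 'playwright';

// Walk through the new feature with video capture on. Playwright writes
// one .webm per page when the context sets `recordVideo`, and finalizes
// the file when the context closes.
async function recordDemo() {
  const browser = await chromium.launch();
  const context = await browser.newContext({
    recordVideo: { dir: 'pr-demo/', size: { width: 1280, height: 720 } },
  });
  const page = await context.newPage();

  await page.goto('http://localhost:3000/new-feature'); // placeholder URL
  await page.getByRole('button', { name: 'Try it' }).click(); // illustrative step
  await page.waitForTimeout(1000); // let the result render on camera

  await context.close(); // flushes the video to pr-demo/
  await browser.close();
}

recordDemo().catch(console.error);
```

The resulting `.webm` can then be uploaded to the PR, manually or from CI.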
Instead of overloading the context window, encapsulate deep domain knowledge into "skill" files. Claude Code can then intelligently pull in this information "just-in-time" when it needs to perform a specific task, like following a complex architectural pattern.
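In Claude Code, a skill is a directory containing a `SKILL.md` whose frontmatter tells the model when to pull it in. A minimal sketch, with entirely illustrative contents:

```markdown
---
name: service-objects
description: Conventions for creating or changing service objects in this codebase.
---

# Service objects

- One class per file under `app/services/`, named VerbNoun (e.g. `SyncInbox`).
- Expose a single public `call` method; pass dependencies as keyword arguments.
- Raise domain-specific errors, never a bare `RuntimeError`.
```

Only the short `description` sits in context at all times; the body is loaded just-in-time when a task matches it.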
Kieran Klaassen's "Compound Engineering" philosophy involves planning, working, assessing, and then codifying learnings. This feedback loop teaches the AI what it did wrong, making it far less likely to repeat the same mistakes and progressively better with each use.
Instead of a generic code review, use multiple AI agents with distinct personas (e.g., security expert, performance engineer, an opinionated developer like DHH). This simulates a diverse review panel, catching a wider range of potential issues and improvements.
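Each persona can be a Claude Code sub-agent, defined as a markdown file under `.claude/agents/`. A sketch of one reviewer follows; the persona text is illustrative:

```markdown
---
name: security-reviewer
description: Reviews diffs as a skeptical security engineer. Use on every pull request.
tools: Read, Grep, Glob
---

You are a security engineer reviewing a pull request. Hunt for injection,
authorization gaps, secrets committed to the repo, and unsafe handling of
user input. Report findings as a numbered list with file:line references
and a severity for each.
```

Parallel files for a performance engineer or a DHH-style opinionated reviewer complete the panel.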
Kieran's custom planning workflow uses sub-agents to research the existing codebase, online best practices, and framework documentation. This "beefier" planning phase grounds the AI in relevant context, producing higher-quality development plans than Claude Code's built-in plan mode.
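One way to wire this up is a custom slash command that fans out to research sub-agents before any plan is written. The prompt below is a rough sketch of the idea, not Kieran's actual command:

```markdown
---
description: Research-first planning for a feature request
---

Plan the feature described in $ARGUMENTS. Before writing anything:
1. Send a sub-agent to map the parts of this codebase the feature touches.
2. Send a sub-agent to research current best practices for this kind of problem.
3. Send a sub-agent to check the framework documentation for relevant APIs.
Merge their findings, then produce a step-by-step implementation plan with
explicit file-level changes and open questions.
```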
Use Playwright to give Claude Code control over a browser for testing. The AI can run tests, visually identify bugs, and then immediately access the codebase to fix the issue and re-validate. This creates a powerful, automated QA and debugging loop.
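A test like the sketch below gives that loop something concrete to close on: Claude runs it, reads the failure, patches the code, and re-runs until green. The URL and selectors are placeholders:

```typescript
import { test, expect } from '@playwright/test';

// End-to-end check the AI can run, fail, fix, and re-validate against.
test('signup shows a confirmation message', async ({ page }) => {
  await page.goto('http://localhost:3000/signup'); // placeholder URL
  await page.getByLabel('Email').fill('test@example.com');
  await page.getByRole('button', { name: 'Sign up' }).click();
  await expect(page.getByText('Check your inbox')).toBeVisible();
});
```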
A new paradigm for AI-driven development is emerging where developers shift from meticulously reviewing every line of generated code to trusting robust systems they've built. By focusing on automated testing and review loops, they manage outcomes rather than micromanaging implementation.
AI can get hyper-focused on a specific task and lose sight of the overall user flow. A dedicated "Spec Flow Analyzer" agent can simulate a user persona and review the entire plan, ensuring all necessary steps are connected and the feature is cohesive from a user's perspective.
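Such an analyzer can also live as a sub-agent file; the prompt below is an illustrative sketch of the idea, not a spec from the source:

```markdown
---
name: spec-flow-analyzer
description: Walks an implementation plan end-to-end as a user persona to find gaps.
---

You are a first-time user of this product. Read the implementation plan and
walk the feature step by step as that user: where do you start, what do you
click, what do you see next? Flag every point where the plan skips a screen,
leaves a state unreachable, or dead-ends the flow.
```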
