Teresa Torres applies the "pair programming" model to every aspect of her work, including writing and task management. This shifts her mental model from using the AI as a tool to collaborating with it as a proactive partner, asking at every step how it could help.
Teresa Torres defined a `/today` slash command in Claude Code. This shortcut triggers a detailed, pre-written prompt that scans her task files, checks for team updates, and generates a prioritized daily to-do list in Obsidian, automating a repetitive and complex morning routine.
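Custom slash commands in Claude Code are defined as Markdown files under the project's `.claude/commands/` directory, where the file name becomes the command name. A hypothetical sketch of what a `/today` command file might look like (the paths and prompt wording are illustrative, not Torres's actual command):

```markdown
<!-- .claude/commands/today.md -->
Scan the open items in tasks/, check team-updates/ for anything new since
yesterday, and write a prioritized to-do list for today into my Obsidian
daily note. Put overdue or blocked items at the top, with a one-line note
on why each is urgent.
```

Typing `/today` in a session then runs this whole prompt, so the routine costs one command instead of a long hand-typed instruction each morning.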
To avoid generic AI-generated text, use the LLM as a critic rather than a writer. By providing a detailed style guide that you co-created with the AI, its feedback on your drafts becomes highly specific and aligned with your personal goals, audience, and tone.
Instead of one large context file, create a library of small, specific files (e.g., for different products or writing styles). An index file then guides the LLM to load only the relevant documents for a given task, improving accuracy, reducing noise, and allowing for "lazy" prompting.
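The index can itself be a short Markdown file the model reads first. A hypothetical sketch (all file names and descriptions here are illustrative):

```markdown
<!-- context/index.md -->
# Context library index
Load only the files relevant to the current task:
- writing-style-blog.md — tone and structure for blog posts
- writing-style-course.md — voice and formatting for course materials
- audience.md — who I write for and what they already know
- product-overview.md — background on my products and offerings
```

Because each entry carries a one-line description, a lazy prompt like "help me draft a blog post" is enough for the model to pick the right files on its own.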
Teresa Torres created a system using Python scripts and Claude to automate her research workflow. The script searches preprint servers like arXiv for keywords daily, and Claude then generates detailed summaries of saved papers, delivering a "research digest" directly to her to-do list each morning.
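The search half of such a pipeline can be sketched against arXiv's public Atom API. This is a minimal sketch under stated assumptions: the function names and keyword list are illustrative, and the Claude summarization and to-do-list delivery steps are omitted.

```python
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

ARXIV_API = "http://export.arxiv.org/api/query"

def build_arxiv_query(keywords: list[str], max_results: int = 10) -> str:
    """Build an arXiv API URL searching all fields for every keyword."""
    query = " AND ".join(f'all:"{kw}"' for kw in keywords)
    params = urlencode({
        "search_query": query,
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": max_results,
    })
    return f"{ARXIV_API}?{params}"

def fetch_new_papers(keywords: list[str]) -> list[dict]:
    """Fetch recent matching papers as {title, abstract, link} dicts."""
    ns = {"atom": "http://www.w3.org/2005/Atom"}
    with urlopen(build_arxiv_query(keywords)) as resp:
        root = ET.fromstring(resp.read())
    return [
        {
            "title": entry.findtext("atom:title", namespaces=ns).strip(),
            "abstract": entry.findtext("atom:summary", namespaces=ns).strip(),
            "link": entry.findtext("atom:id", namespaces=ns),
        }
        for entry in root.findall("atom:entry", ns)
    ]
```

Run daily from cron or a scheduler, the returned titles and abstracts become the raw material a model can turn into the morning digest.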
Building a comprehensive context library can be daunting. A simple and effective hack is to end each work session by asking the AI, "What did you learn today that we should document?" The AI can then self-generate the necessary context files, iteratively building its own knowledge base.
To gain data ownership and enable AI automation, Teresa Torres built a personalized task manager using Claude Code and local Markdown files. This lets her prompt the AI to read and act on items in her to-do list directly, something closed third-party tools like Trello don't allow.
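Part of why this works is that Markdown checkboxes are trivially machine-readable. A minimal sketch, assuming the common `- [ ]` / `- [x]` checkbox convention (the function and vault names are illustrative, not Torres's actual code):

```python
import re
from pathlib import Path

TASK_RE = re.compile(r"^\s*- \[( |x)\] (.+)$")

def open_tasks(markdown_text: str) -> list[str]:
    """Return the text of every unchecked '- [ ]' item."""
    tasks = []
    for line in markdown_text.splitlines():
        m = TASK_RE.match(line)
        if m and m.group(1) == " ":
            tasks.append(m.group(2).strip())
    return tasks

def open_tasks_in_vault(vault: Path) -> dict[str, list[str]]:
    """Map each Markdown file in a folder tree to its open tasks."""
    return {
        str(path): tasks
        for path in sorted(vault.rglob("*.md"))
        if (tasks := open_tasks(path.read_text(encoding="utf-8")))
    }
```

The same plain-text property is what lets an agent not just read tasks but check them off, by rewriting `[ ]` to `[x]` in place.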
By storing all tasks and notes in local, plain-text Markdown files, you can use an LLM as a powerful semantic search engine. Unlike keyword search, it can find information even if you misremember details, inferring your intent to locate the correct file across your entire knowledge base.
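One simple way to get that behavior is to hand the model an inventory of your files and let it infer which note you mean. A minimal sketch that assembles such a prompt; the prompt wording is an assumption, and the actual call to an LLM client is left out:

```python
from pathlib import Path

def build_search_prompt(vault: Path, question: str) -> str:
    """Assemble a prompt asking an LLM to locate the right note."""
    inventory = []
    for path in sorted(vault.rglob("*.md")):
        text = path.read_text(encoding="utf-8")
        # The first ~200 characters are usually enough for the model
        # to infer what the note is about.
        inventory.append(f"## {path.relative_to(vault)}\n{text[:200]}")
    return (
        "You are a semantic search engine over my notes. My wording may not\n"
        "match the files exactly; infer which note I mean and name it.\n\n"
        f"Question: {question}\n\nNotes:\n\n" + "\n\n".join(inventory)
    )
```

For a large vault, the truncated previews keep the prompt within the model's context window while still letting it match a vague question like "that note about the pricing call" to the right file.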
