From Chaos to Code: HumanLayer’s Playbook for Agent-Driven Dev

The Lobster Talks Podcast by Lobster Capital · Sep 23, 2025

From a failed startup to YC, Dexter walks through the pivot to CodeLayer and how context engineering makes AI agents effective in complex codebases.

Horizontal Dev Tools Only Succeed When Abstracting a New Technical Standard

Horizontal developer tools struggle in fragmented markets. Their success is often tied to the emergence of a new, widely adopted standard (e.g., SAML 2.0 for Auth0). This creates a universal, complex problem that many developers are happy to outsource, providing a clear value proposition for the tool.

The Data Tools "Party" Ended Because the TAM Was Smaller Than Believed

The boom in tools for data teams faded because the Total Addressable Market (TAM) was overestimated. Investors and founders pattern-matched the data space to larger markets like cloud and dev tools, but the actual number of teams with the budget and need for sophisticated data tooling proved to be much smaller.

Elite AI Companies Avoid Frameworks, Opting for Direct LLM Calls

The top 1% of AI companies making significant revenue don't rely on popular frameworks like Langchain. They gain more control and performance by using small, direct LLM calls for specific application parts. This avoids the black-box abstractions of frameworks, which are more common among the other 99% of builders.
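
As a concrete illustration of what a "small, direct LLM call" can look like, here is a minimal sketch assuming an OpenAI-style Python client; the function, prompt, and model name are made up for the example and are not any particular company's code:

```python
# Minimal sketch: one focused, direct LLM call per application concern,
# instead of routing everything through a framework's agent abstraction.
# Assumes the official `openai` Python client; model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def classify_support_ticket(ticket_text: str) -> str:
    """One small, single-purpose call: the prompt and the parsing live in
    plain application code, so behavior stays inspectable and testable."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Classify the ticket as one of: billing, bug, feature_request."},
            {"role": "user", "content": ticket_text},
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()
```

Because the call sits in ordinary code, swapping models, logging prompts, or unit-testing the parsing requires no framework-specific plumbing.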

A Simple "Human-in-the-Loop" Feature Can Become a Standalone Product

The founder's startup idea originated from a side feature in another project: a "SQL janitor" AI that needed human approval before dropping tables. This single safety feature, which allowed an agent to request help via Slack, was so compelling it became the core of a new, revenue-generating company within weeks.
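
The underlying pattern is roughly the approval gate sketched below. This is a hypothetical illustration using Slack's Web API client, not HumanLayer's actual SDK; `wait_for_approval` is a placeholder for whatever mechanism records the human's decision:

```python
# Hypothetical sketch of "human approval before destructive actions";
# NOT HumanLayer's SDK. Assumes the `slack_sdk` Web API client.
from slack_sdk import WebClient

slack = WebClient(token="xoxb-...")  # bot token; placeholder value

DESTRUCTIVE_KEYWORDS = ("DROP", "TRUNCATE", "DELETE")

def run_sql(statement: str, execute, wait_for_approval) -> None:
    """Run `statement`, but pause for a human in Slack if it looks destructive.

    `execute` runs the SQL; `wait_for_approval` is a hypothetical helper that
    blocks until someone approves or rejects (e.g. via a Slack interaction).
    """
    if any(kw in statement.upper() for kw in DESTRUCTIVE_KEYWORDS):
        slack.chat_postMessage(
            channel="#data-ops",
            text=f"Agent wants to run: {statement!r}. Approve?",
        )
        if not wait_for_approval(timeout_s=3600):
            return  # human said no (or timed out): do nothing
    execute(statement)
```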

Elite AI Engineers Use "Context Compaction" to Prevent Agent Performance Decay

Long-running AI agent conversations degrade in quality as the context window fills. The best engineers combat this with "intentional compaction": they direct the agent to summarize its progress into a clean markdown file, then start a fresh session using that summary as the new, clean input. This is like rebooting the agent's short-term memory.
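
A minimal sketch of that compaction step, assuming an OpenAI-style Python client; the prompt, model name, and file path are illustrative:

```python
# Sketch of "intentional compaction": summarize the session to markdown,
# persist it, and seed a fresh session with the summary.
from openai import OpenAI

client = OpenAI()

def compact_and_restart(messages: list[dict], path: str = "progress.md") -> list[dict]:
    """Ask the model to summarize the session into markdown, save it,
    and return a fresh message list seeded with that summary."""
    summary = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages + [{
            "role": "user",
            "content": ("Summarize everything done so far as a concise markdown "
                        "progress report: goal, decisions made, files touched, next steps."),
        }],
    ).choices[0].message.content

    with open(path, "w") as f:  # persist the compacted context for review or reuse
        f.write(summary)

    # New session: the summary replaces the long, noisy history.
    return [{"role": "user", "content": f"Here is the progress so far:\n{summary}"}]
```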

Future Dev Teams Gain Leverage by Reviewing AI Plans, Not AI-Generated Code

As AI writes most of the code, the highest-leverage human activity will shift from reviewing pull requests to reviewing the AI's research and implementation plans. Collaborating on the plan provides a narrative journey of the upcoming changes, allowing for high-level course correction before hundreds of lines of bad code are ever generated.

A "Research, Plan, Implement" Workflow Unlocks AI in Complex Codebases

To get AI agents to perform complex tasks in existing code, a three-stage workflow is key. First, have the agent research and objectively document how the codebase works. Second, use that research to create a step-by-step implementation plan. Finally, execute the plan. This structured approach prevents the agent from wasting context on discovery during implementation.
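
A stripped-down sketch of the three stages wired together, again assuming an OpenAI-style client; a real coding agent would also read files and run tools, so this shows only the control flow:

```python
# Sketch of a research -> plan -> implement pipeline; model name is illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Single direct call used by each stage."""
    return client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

def research_plan_implement(task: str, codebase_notes: str) -> str:
    # 1. Research: objective description of how the relevant code works today.
    research = ask(f"Document, objectively, how this code works:\n{codebase_notes}")
    # 2. Plan: step-by-step changes grounded in the research (the human review point).
    plan = ask(f"Task: {task}\nResearch:\n{research}\nWrite a step-by-step implementation plan.")
    # 3. Implement: follow the approved plan without re-discovering the codebase.
    return ask(f"Follow this plan exactly and produce the code changes:\n{plan}")
```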

AI-Generated Code Creates a Hidden "Rework Tax" Inflating Productivity Metrics

While AI coding assistants appear to boost output, they introduce a "rework tax." A Stanford study found AI-generated code leads to significant downstream refactoring. A team might ship 40% more code, but if half of that increase is just fixing last week's AI-generated "slop," the real productivity gain is much lower than headlines suggest.
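
The arithmetic behind that claim, with illustrative numbers:

```python
# Worked example of the "rework tax" described above (illustrative numbers).
baseline = 100                 # units of code shipped per week before AI assistance
with_ai = 140                  # 40% more code shipped with AI assistance
increase = with_ai - baseline  # 40 extra units
rework = increase / 2          # half the increase is fixing last week's AI output
net_new = with_ai - rework     # 120 units of genuinely new work
real_gain = (net_new - baseline) / baseline
print(f"Headline gain: 40%, real gain: {real_gain:.0%}")  # real gain: 20%
```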
