To hire entrepreneurial talent, Brex offers the chance to solve interesting problems and immediately deploy solutions to its 40,000 customers. That instant distribution is a powerful lure most startups can't match.
Brex initially invested in a sophisticated reinforcement learning model for credit underwriting but found it was inferior to a straightforward web research agent. For operational tasks requiring auditable processes, simpler LLM applications are often superior.
Brex actively recruits former and future founders, embracing that they will likely leave to start new companies. This attracts ambitious talent who want to learn at scale before their next venture, creating a powerful employee value proposition.
Brex formed a small, centralized AI team by asking, "What would a company founded today to disrupt Brex look like?" This team operates with the speed and focus of a startup, separate from the main engineering org to avoid corporate inertia.
Brex organizes its AI efforts into three pillars: buying tools for internal efficiency (Corporate), building/buying to reduce operational costs (Operational), and creating AI products that become part of their customers' own AI strategies (Product).
Instead of standardizing on one LLM or coding assistant, Brex offers licenses for several competing options. This employee choice provides clear usage data, giving Brex leverage to resist wall-to-wall deployments and negotiate better vendor contracts.
To familiarize engineers with agentic coding workflows, Brex created a new interview process that requires AI tool usage. They then had every current engineer and manager complete the interview, forcing hands-on experience and revealing skill gaps in a practical setting.
After achieving broad adoption of agentic coding, the new challenge becomes managing the downsides. Increased code generation leads to lower quality, rushed reviews, and a knowledge gap as team members struggle to keep up with the rapidly changing codebase.
Brex avoids internal jealousy of its specialized AI team because its culture prioritizes and rewards direct business impact. Teams driving 60% of revenue feel valued and aren't clamoring for "cooler" AI projects that have less clear, immediate ROI.
Brex's internal AI platform for operations uses Retool for its user interface. This enables non-technical domain experts in the ops team to directly manage and refine prompts, run evaluations, and test new models without needing engineer intervention.
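A minimal sketch of the kind of backend such a Retool front end might call: prompts and evaluation cases stored as plain data that an ops user can edit through forms, plus an eval runner that scores a prompt version. All names, the prompt formats, and the stubbed model below are illustrative assumptions, not Brex's actual platform; in production the model argument would be a real LLM client selected in the UI.

```python
# Hypothetical ops-editable prompt registry with an eval runner.
# Everything here is an assumption for illustration: a Retool UI would
# edit PROMPTS and EVAL_CASES as table rows, then trigger run_eval().

PROMPTS = {
    "receipt_extraction": {
        "v1": "Extract merchant and total from: {text}",
        "v2": "Reply as 'merchant|total'. Receipt: {text}",
    }
}

EVAL_CASES = [
    {"text": "Blue Bottle, Total $6.50", "expected": "Blue Bottle|6.50"},
    {"text": "Uber trip, Total $23.10", "expected": "Uber trip|23.10"},
]

def run_eval(prompt_name: str, version: str, model) -> float:
    """Score a prompt version against stored cases; ops reads the pass rate."""
    template = PROMPTS[prompt_name][version]
    passed = sum(
        model(template.format(text=case["text"])) == case["expected"]
        for case in EVAL_CASES
    )
    return passed / len(EVAL_CASES)

# Deterministic stand-in for an LLM, which only "understands" the v2
# format, to show how an ops user compares prompt versions on their own.
def fake_model(prompt: str) -> str:
    if prompt.startswith("Reply as"):
        text = prompt.split("Receipt: ")[1]
        merchant, total = text.split(", Total $")
        return f"{merchant}|{total}"
    return "unparsed"

print(run_eval("receipt_extraction", "v1", fake_model))  # 0.0
print(run_eval("receipt_extraction", "v2", fake_model))  # 1.0
```

Because prompts are data rather than code, a new version or test case is a row edit in the UI, not a deploy.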
Brex structures its AI teams into small pods, combining young, AI-native talent who think differently with experienced staff engineers who understand the existing codebase, product, and customer needs. This blends novel approaches with practical execution.
For its user assistant, Brex moved beyond a single agent with many tools. Instead, they built a network where specialized sub-agents (e.g., policy, travel) have multi-turn conversations with an orchestrator agent to collaboratively solve complex user requests.
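The orchestrator/sub-agent conversation pattern can be sketched as below. The agent names, routing keywords, and canned replies are invented for illustration; in a real system each respond() call would be an LLM-backed agent, and routing would be intent detection rather than keyword matching.

```python
# Illustrative sketch of an orchestrator holding multi-turn conversations
# with specialized sub-agents (hypothetical names and logic, not Brex's
# implementation). Sub-agents may answer or ask the orchestrator to clarify.

class PolicySubAgent:
    """Answers expense-policy questions; may request clarification."""
    def respond(self, message: str) -> str:
        if "meal" in message:
            return "ANSWER: meals are reimbursable up to $75/day"
        return "CLARIFY: which expense category does this concern?"

class TravelSubAgent:
    """Handles travel-booking requests."""
    def respond(self, message: str) -> str:
        return "ANSWER: flight booked within policy"

class Orchestrator:
    def __init__(self):
        self.agents = {"policy": PolicySubAgent(), "travel": TravelSubAgent()}

    def route(self, request: str) -> str:
        # naive keyword routing stands in for LLM-based intent detection
        agent = self.agents["travel" if "flight" in request else "policy"]
        message = request
        # multi-turn loop: keep exchanging messages until the sub-agent
        # produces a final answer, enriching context on each turn
        for _ in range(3):
            reply = agent.respond(message)
            if reply.startswith("ANSWER:"):
                return reply.removeprefix("ANSWER:").strip()
            # orchestrator supplies missing context and continues the turn
            message = request + " (category: meal)"
        return "escalate to human"

print(Orchestrator().route("What's the limit for dinner?"))
```

The key difference from a single agent with many tools is that the orchestrator and sub-agents exchange messages over multiple turns, so a sub-agent can push a clarifying question back instead of failing on an ambiguous request.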
Brex's automated expense auditing employs a multi-agent system. An "audit agent" is optimized for recall, flagging every potential policy violation. A second "review agent" then applies judgment and business context to decide which cases are significant enough to pursue.
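The recall-then-judgment split can be sketched as a two-stage pipeline. The policy thresholds, expense fields, and review rules below are invented for illustration; Brex's actual agents are LLM-backed rather than rule-based.

```python
# Hypothetical two-stage audit pipeline: stage one is tuned for recall
# (flag anything that might violate policy), stage two applies business
# context to decide which flags are worth pursuing. All rules are
# illustrative assumptions, not Brex's real policy logic.

def audit_agent(expense: dict) -> list[str]:
    """Optimized for recall: flag every potential violation."""
    flags = []
    if expense["amount"] > 50:
        flags.append("over_default_limit")
    if expense.get("receipt") is None:
        flags.append("missing_receipt")
    if expense["category"] == "entertainment":
        flags.append("restricted_category")
    return flags

def review_agent(expense: dict, flags: list[str]) -> list[str]:
    """Applies judgment: keep only flags significant enough to pursue."""
    kept = []
    for flag in flags:
        # business context: small overages by client-facing roles are fine
        if (flag == "over_default_limit"
                and expense["amount"] < 75
                and expense["role"] == "sales"):
            continue
        kept.append(flag)
    return kept

expense = {"amount": 60, "category": "meals", "receipt": "r1.pdf", "role": "sales"}
flags = audit_agent(expense)
print(flags)                          # ['over_default_limit']
print(review_agent(expense, flags))   # []
```

Separating the two stages lets each be tuned independently: the audit agent can be aggressive without flooding the ops team, because the review agent absorbs the false positives.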
