Use L1 metrics (lagging indicators like pipeline generated) to identify that a problem exists. Then review a prioritized list of L2 metrics (leading indicators like sequence reply rates) to find the cause. Crucially, stop and fix the *first* L2 metric that is off-target rather than analyzing all of them: the earliest broken leading indicator usually drives the misses further down the funnel, so fixing it delivers the most effective lift.
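A minimal sketch of that diagnostic loop in Python, assuming you track a target and an actual for each metric; the metric names, targets, and numbers here are illustrative, not a prescribed schema:

```python
# Illustrative L1 -> L2 diagnostic: confirm a lagging metric is off, then
# find the first off-target leading indicator in a prioritized list.

L1_METRICS = {"pipeline_generated": {"target": 2_000_000, "actual": 1_400_000}}

# Prioritized list: earliest funnel stages first.
L2_METRICS = [
    ("email_open_rate",     {"target": 0.40, "actual": 0.41}),
    ("sequence_reply_rate", {"target": 0.08, "actual": 0.03}),
    ("meeting_booked_rate", {"target": 0.25, "actual": 0.22}),
]

def first_off_target(metrics):
    """Return the first metric whose actual misses its target."""
    for name, m in metrics:
        if m["actual"] < m["target"]:
            return name, m
    return None

if any(m["actual"] < m["target"] for m in L1_METRICS.values()):
    broken = first_off_target(L2_METRICS)
    if broken:
        name, m = broken
        print(f"Fix {name} first: {m['actual']:.2f} vs target {m['target']:.2f}")
```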
When pipeline is down, the default reaction is to increase volume (more SDRs, more events). This is a flawed guess that ignores process efficiency. The real leverage comes from understanding the conversion effectiveness of existing activities, not just adding more inputs to a broken system.
The company's overall win rate was low (6-7%) and decreasing. Analysis showed this decline mirrored a drop in marketing 'signals' (e.g., event attendance, content downloads) before an opportunity was created. This provided a clear data link between mid-funnel marketing activities and sales success.
Traditional funnels jump from a marketing signal (like an MQL) to an opportunity, creating a blind spot. They miss the 'Engagement' period of initial interaction and the 'Prospecting' phase of active sales pursuit. Ignoring these stages makes it impossible to diagnose performance issues or identify improvement levers.
Focusing on successful conversions misses the much larger story. Digging into the reasons behind the 85% of leads that get rejected uncovers systemic issues in targeting, messaging, sales process, and data hygiene, offering a far greater opportunity for funnel improvement than simply optimizing wins.
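One way to start that analysis is to tally disqualification reasons from a CRM export; a toy sketch, with hypothetical field names and reason codes:

```python
# Count disqualification reasons to surface systemic issues in rejected leads.
from collections import Counter

# Hypothetical CRM export rows; field names and reasons are illustrative.
rejected_leads = [
    {"reason": "bad_fit_industry"},
    {"reason": "no_budget"},
    {"reason": "duplicate_record"},
    {"reason": "bad_fit_industry"},
    {"reason": "wrong_persona"},
]

by_reason = Counter(lead["reason"] for lead in rejected_leads)
for reason, count in by_reason.most_common():
    print(f"{reason}: {count} ({count / len(rejected_leads):.0%})")
```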
Encourage sales and BDR teams to disqualify leads and close-loss deals quickly. This 'fail fast' approach cleans the pipeline, focuses effort on viable opportunities, and provides a rapid, clear feedback loop to marketing on lead quality and campaign effectiveness.
When growth stalls, blaming a broad area like 'sales' is ineffective. A simple weekly scorecard forces founders to drill down into specific metrics like lead volume vs. conversion rate. This pinpoints the actual operational drag, turning a large, seemingly unsolvable problem into a focused, actionable one.
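A toy version of such a scorecard, with made-up weekly numbers and targets, showing how the drill-down separates a volume problem from a conversion problem:

```python
# Toy weekly scorecard: split "sales is down" into volume vs. conversion.
weeks = [
    # (week, leads, qualified_opps) -- illustrative numbers
    ("2024-W01", 120, 18),
    ("2024-W02", 95, 14),
    ("2024-W03", 60, 11),
]

LEAD_TARGET = 100
CONVERSION_TARGET = 0.15

for week, leads, opps in weeks:
    conv = opps / leads
    flags = []
    if leads < LEAD_TARGET:
        flags.append("lead volume")
    if conv < CONVERSION_TARGET:
        flags.append("conversion rate")
    status = ", ".join(flags) if flags else "on target"
    print(f"{week}: {leads} leads, {conv:.0%} conversion -> {status}")
```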
Metrics like product utilization, ROI, or customer happiness (NPS) are often correlated with retention but don't cause it. Focusing on these proxies wastes energy. Instead, identify the one specific event (e.g., a team sending 2,000 Slack messages) that causally leads to retention rather than churn.
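A hedged sketch of how you might nominate such an event: compare churn between accounts that did and did not cross a candidate threshold. The 2,000-message figure echoes the example above, the data is invented, and a gap like this only nominates a candidate; proving causation still takes experimentation.

```python
# Compare churn between accounts that hit vs. missed a candidate activation event.
accounts = [
    {"messages_sent": 2500, "churned": False},
    {"messages_sent": 3100, "churned": False},
    {"messages_sent": 1800, "churned": True},
    {"messages_sent": 400,  "churned": True},
    {"messages_sent": 2200, "churned": False},
    {"messages_sent": 900,  "churned": False},
]

THRESHOLD = 2000

def churn_rate(group):
    return sum(a["churned"] for a in group) / len(group) if group else float("nan")

hit = [a for a in accounts if a["messages_sent"] >= THRESHOLD]
miss = [a for a in accounts if a["messages_sent"] < THRESHOLD]

print(f"Hit event:    churn {churn_rate(hit):.0%} (n={len(hit)})")
print(f"Missed event: churn {churn_rate(miss):.0%} (n={len(miss)})")
```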
Instead of debating multi-touch attribution, first identify the single, independent event that caused a sales rep to engage a prospect. This "trigger" (e.g., a demo request or an MQL score threshold) reveals the true efficiency of each GTM motion, a more fundamental question to answer than how to split attribution credit.
With thousands of potential buying signals available, focus is critical. To prioritize, evaluate each signal against two vectors: the expected volume (e.g., how many website visits) and the hypothesized conversion rate to the next funnel stage. This framework allows you to stack rank opportunities and test the highest-potential signals first.
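A small sketch of that stack ranking, where both the volumes and the conversion hypotheses are guesses to be tested rather than measurements:

```python
# Stack rank candidate buying signals by expected monthly volume times the
# hypothesized conversion rate to the next funnel stage.
signals = [
    {"name": "pricing_page_visit", "monthly_volume": 4000, "conv_hypothesis": 0.02},
    {"name": "demo_request",       "monthly_volume": 150,  "conv_hypothesis": 0.40},
    {"name": "g2_profile_view",    "monthly_volume": 800,  "conv_hypothesis": 0.05},
]

for s in signals:
    s["expected_next_stage"] = s["monthly_volume"] * s["conv_hypothesis"]

for s in sorted(signals, key=lambda s: s["expected_next_stage"], reverse=True):
    print(f"{s['name']}: ~{s['expected_next_stage']:.0f} expected conversions/month")
```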
Don't jump directly to optimizing for high-level business outcomes like retention. Instead, sequence your North Star metric. First, focus the team on driving foundational user engagement. Only after establishing that behavior should you shift the primary metric to a direct business impact like revenue or retention.