Jay Schwedelson argues against obsessing over statistical significance in A/B tests, as marketing conditions are too fluid. He suggests focusing on directional data instead. If a test provides 'a little more juice' and moves metrics in the right direction, it's a win worth implementing and building upon.
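For context, this is the significance check Schwedelson de-emphasizes. Below is a minimal Python sketch of a two-proportion z-test on open rates; the counts are illustrative placeholders, not figures from the source:

```python
from statistics import NormalDist

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """Two-proportion z-test: returns (z, two-sided p-value)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical split: variant B opens slightly better, but the sample is small.
z, p = two_proportion_z(successes_a=210, n_a=1000, successes_b=228, n_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p comes out well above 0.05: not "significant"
# Directional reading: B moved opens from 21.0% to 22.8%, so ship B and keep iterating.
```

The point of the example is the gap between the two readings: the formal test says "inconclusive," while the directional view says the variant gave a little more juice and is worth building on.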

Related Insights

The concept of a single best day and time to send an email is misleading. Instead, marketers should vary send times throughout the week to reach different segments of their audience. The key metric is the aggregate number of unique individuals engaged weekly, not the performance of a single blast.
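A minimal sketch of that aggregate metric, assuming hypothetical per-send lists of engaged recipient IDs; the point is that uniqueness is counted across the whole week, not per blast:

```python
# Hypothetical engaged-recipient IDs from three sends in one week (e.g. Tue AM, Thu PM, Sat AM).
sends = {
    "tue_am": {"a@x.com", "b@x.com", "c@x.com"},
    "thu_pm": {"b@x.com", "d@x.com"},
    "sat_am": {"c@x.com", "e@x.com", "f@x.com"},
}

# Per-blast counts double-count repeat engagers; the weekly metric is the union of unique people.
unique_weekly = set().union(*sends.values())
print(f"per-send engaged: {[len(s) for s in sends.values()]}")   # [3, 2, 3]
print(f"unique people engaged this week: {len(unique_weekly)}")  # 6
```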

Many marketers equate conversion rate optimization (CRO) with A/B testing alone. A successful program, however, rests on two pillars: research (gathering quantitative and qualitative data) and testing (experimentation). Skipping the research phase leads to uninformed tests and poor results, because research supplies the insights that determine what is worth testing in the first place.

When testing copy like titles or subject lines, change only a single modifier word (e.g., add "Quick Fix" to "HR Guide"). This isolates the variable and yields a clear learning about what resonates with your audience; testing two completely different sentences leaves the "why" behind the winner unclear.

Foster a culture of experimentation by reframing failure. A test whose hypothesis is disproven is just as valuable as a 'win' because it still delivers crucial user insights. Measure the program's success by the volume of well-designed tests run, not by the percentage of hypotheses that pan out.

In an analysis of 50 past email campaigns, ChatGPT's 5.2 model correctly identified the winning A/B test variation 89% of the time without performance data. Marketers can use this predictive capability to vet campaign elements like subject lines and creative before launching live tests, potentially saving time and resources.
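One way to try this kind of pre-launch vetting yourself is sketched below with the OpenAI Python SDK. The model name, prompt, and subject lines are placeholders (the "5.2" model from the analysis is not assumed to be an available API identifier), and the output is a judgment call, not a measured result:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Hypothetical subject-line variants to vet before a live A/B test.
variant_a = "HR Guide for 2025"
variant_b = "Quick Fix HR Guide for 2025"

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name; substitute whichever model you use
    messages=[
        {"role": "system",
         "content": "You judge which email subject line is more likely to be opened."},
        {"role": "user",
         "content": (f"Audience: B2B HR leaders.\nA: {variant_a}\nB: {variant_b}\n"
                     "Which is more likely to win on open rate, and why? Answer A or B first.")},
    ],
)
print(response.choices[0].message.content)
```

Treat the model's pick as another directional signal for prioritizing which variants to test live, not as a replacement for running the test.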

Don't attempt traditional A/B testing on a low-traffic website; the test will be underpowered and the results inconclusive. Instead, use qualitative user testing methods like preference tests. This approach provides directional data to guide decisions, which is far more reliable than guesswork or a misleading A/B test.
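To see why low traffic defeats a conventional test, here is a rough per-variant sample-size estimate using the standard two-proportion power calculation (alpha = 0.05, 80% power); the baseline and lift figures are illustrative only:

```python
from statistics import NormalDist

def sample_size_per_variant(p_baseline, p_variant, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect p_baseline -> p_variant."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96
    z_beta = NormalDist().inv_cdf(power)            # ~0.84
    p_bar = (p_baseline + p_variant) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p_baseline * (1 - p_baseline)
                             + p_variant * (1 - p_variant)) ** 0.5) ** 2
    return numerator / (p_variant - p_baseline) ** 2

# Detecting a 2.0% -> 2.5% conversion lift needs roughly 13,800 visitors per variant,
# which is months of traffic for a low-volume site. That is why preference tests fit better there.
print(round(sample_size_per_variant(0.02, 0.025)))
```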

Instead of only testing minor changes on a finished product, like button color, use A/B testing early in the development process. This allows you to validate broad behavioral science principles, such as social proof, for your specific challenge before committing to a full build.

A counterintuitive yet effective email tactic is capitalizing an entire word in the middle of a subject line, not at the start or end. This simple, cost-free A/B test is trending because it breaks visual patterns in the inbox, leading to a reported 16% open rate increase for B2B and 21% for B2C.

A former Optimizely CMO argues that most B2B companies lack the conversion volume to achieve statistical significance on website A/B tests. Teams waste months on inconclusive experiments for marginal gains instead of focusing on bigger strategic bets that actually move the needle.

Contrary to the common wisdom of using a single call-to-action, an A/B test revealed that a newsletter version with five links generated a 152% higher click-through rate than a version with only three. Offering variety can turn passive readers into active clickers.
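The "152% higher" figure is a relative lift. A quick sketch of the calculation with made-up click-through rates, just to show the arithmetic:

```python
def relative_lift(ctr_control, ctr_variant):
    """Relative lift of the variant over the control, as a percentage."""
    return (ctr_variant - ctr_control) / ctr_control * 100

# Hypothetical CTRs: 2.0% for the three-link version, 5.04% for the five-link version.
print(f"{relative_lift(0.020, 0.0504):.0f}% higher click-through rate")  # 152% higher
```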
