You design and execute a major A/B test on a massive sample.
The results come back statistically significant at the 99% level.
You excitedly run to management to declare the test a success and explain how impactful the results would be if implemented, only to face a barrage of questions and skepticism.
Sound familiar?
A/B testing is a cornerstone of growth marketing. However, the challenges of this growth tactic often surface only after a test is complete.
A/B tests can attract considerable questions and scrutiny about their design, execution, significance, and implications, especially when the results contradict conventional wisdom.
Here are some of the common questions raised:
Was the sample size adequate? Was the segmentation unbiased?
Was the test duration sufficient?
Did we consider the impact of external variables such as seasonality?
Are these results valid and statistically significant?
Why spend time and effort doing such tests? Why not pursue other methods to find growth, such as paid ads?
Is the result being interpreted correctly?
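Questions about sample size and significance can often be answered with a quick calculation. As a sketch (the conversion counts below are hypothetical, not from any real test), a two-proportion z-test shows whether an observed lift between variants is statistically significant given the sample sizes:

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates.

    conv_a / conv_b: number of conversions in each variant.
    n_a / n_b: number of users exposed to each variant.
    Returns (z statistic, two-sided p-value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via the error function; p-value is the two-tailed area.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical example: 10,000 users per variant, 500 vs. 560 conversions.
z, p = two_proportion_z_test(500, 10_000, 560, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # significant at 95% only if p < 0.05
```

Note that a 12% relative lift on this sample still fails the 95% bar here, which is exactly why "was the sample size adequate?" is a fair question to anticipate.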
All these questions are valid and justified. But where do they stem from?
First, not everyone understands A/B testing methodology and concepts such as statistical significance and random chance. This could lead to skepticism about the process, the reliability of the results, and even the need for it vs. other growth tactics such as performance marketing.
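"Random chance" is easy to demonstrate with a simulation. In this sketch (the sample sizes and base rate are illustrative assumptions), both variants share the same true conversion rate, yet a fraction of tests still come back "95% significant" purely by luck, close to the 5% false-positive rate a 95% threshold implies:

```python
import math
import random

def z_statistic(conv_a, n_a, conv_b, n_b):
    """z statistic for the difference between two conversion rates."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (conv_b / n_b - conv_a / n_a) / se

def false_positive_rate(n_tests=1000, n_users=5000, true_rate=0.05):
    """Run simulated A/A tests (no real difference between variants)
    and count how often |z| > 1.96 declares a '95% significant' winner
    purely by chance."""
    random.seed(7)  # fixed seed so the simulation is reproducible
    hits = 0
    for _ in range(n_tests):
        conv_a = sum(random.random() < true_rate for _ in range(n_users))
        conv_b = sum(random.random() < true_rate for _ in range(n_users))
        if abs(z_statistic(conv_a, n_users, conv_b, n_users)) > 1.96:
            hits += 1
    return hits / n_tests

print(f"False-positive rate: {false_positive_rate():.1%}")
```

Walking a skeptical stakeholder through an A/A simulation like this makes "statistical significance" concrete: the threshold does not eliminate chance findings, it only caps how often they occur.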
Second, the results might seem inconsistent with previous findings or with expectations. Humans struggle with "confirmation bias," which leads us to favor information that confirms our preexisting beliefs or values and dismiss information that contradicts them. It's normal to expect resistance to the test results if the conclusion proposes a change from established practices and beliefs. Additionally, a company's culture or management style might favor qualitative insights over quantitative data.
To play devil's advocate, there are some valid reasons to be skeptical of A/B testing:
It might tell you which variant is better, but you may not know why.
Extraneous variables such as seasonality might affect the results.
Results may not be reproducible.
When your A/B tests are questioned, you can handle the situation thoughtfully and proactively in the following ways:
Explain the value of A/B testing methodology and its role in identifying growth levers, which can then be activated through other promotional channels. Cite examples from companies that have deployed it successfully to make the case more concrete.
Clearly communicate the goals, methodology, and rationale behind the A/B test at hand.
Address any potential biases in your test design or data collection methods. Transparently share both positive and negative results.
Encourage feedback and keep conversations objective and aligned with the main aim of the test.
Proactively address the valid sources of skepticism mentioned above. I highly recommend a follow-up focus group discussion (FGD) to validate the A/B test's conclusion and to understand the "why" behind the consumer's choice.
A healthy dose of skepticism and questions always drives us to produce more robust tests and thoughtful insights. So come what may, Always Be Testing!
To end on a fun note, this meme by Elena Verna perfectly captures the 5 stages of grief marketers go through!
P.S. Here are a few of my favorite resources to learn more about A/B testing: