A/B testing is a scientific approach to conversion optimization. Instead of guessing, you make decisions based on data. In this article, we'll break down how to run tests correctly.
What is an A/B Test?
An A/B test (split test) is an experiment where you show two groups of users different versions of a page and compare results.
- Control (A): current version
- Variant (B): version with changes
A/B Testing Process
1. Formulating a Hypothesis
A good hypothesis has this structure:
"If we [change], then [metric] will increase by [X%],
because [reasoning]."
Example:
"If we change the button color from blue to green,
CTR will increase by 15%, because green is associated with action."
2. Choosing Metrics
- Primary metric: main indicator (conversion, CTR)
- Secondary metrics: additional (bounce rate, time on site)
- Guardrail metrics: safety checks that must not degrade (revenue, number of purchases)
3. Calculating Sample Size
Sufficient traffic is needed for statistical significance:
Parameters:
- Baseline conversion: 3%
- Minimum effect: 10% relative (from 3% to 3.3%)
- Statistical power: 80%
- Significance level: 5% (i.e., 95% confidence)
Result: ~53,000 visitors per variant
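This calculation can be sketched with Python's standard library using the usual two-proportion z-test formula (the function name is illustrative; production tools such as statsmodels expose equivalent power calculations):

```python
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_mde, alpha=0.05, power=0.80):
    """Sample size per variant for a two-sided two-proportion z-test."""
    p1 = baseline
    p2 = baseline * (1 + relative_mde)          # e.g. 3% -> 3.3%
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = 0.80
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return int(n) + 1  # round up

n = sample_size_per_variant(0.03, 0.10)
print(n)  # ~53,000 per variant with the parameters above
```

The exact figure varies slightly between calculators (pooled vs unpooled variance, one- vs two-sided test), but the order of magnitude is what matters: small baseline conversions and small effects demand large samples.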
The biggest mistake is stopping the test as soon as you see a "winner". Wait until the pre-calculated sample size is reached!
— A/B testing rule
4. Running the Test
- User randomization
- Even traffic distribution (50/50)
- Minimum 1-2 weeks (to capture day-of-week effects)
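Randomization is usually implemented as deterministic, hash-based bucketing rather than a random draw per page view. A minimal sketch (function and experiment names are illustrative): hashing the user ID together with the experiment name means a returning user always lands in the same variant, and different experiments get independent splits.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically assign a user to 'A' or 'B' for a given experiment."""
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to a uniform [0, 1]
    return "A" if bucket < split else "B"

assign_variant("user-42", "green-button-test")  # same input, same variant, every time
```

Because the assignment is a pure function of the inputs, no state needs to be stored to keep a user's experience consistent across sessions.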
5. Analyzing Results
Check:
- p-value < 0.05 — statistical significance
- Confidence interval for the difference — doesn't include 0
- Sample ratio mismatch (SRM) — the actual traffic split matches the planned 50/50
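All three checks can be sketched with the standard library (function names are illustrative; the z-test uses a normal approximation, which is fine at A/B-test sample sizes):

```python
from math import sqrt
from statistics import NormalDist

def ab_test_results(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-sided z-test for two proportions plus a CI for the difference."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled standard error under H0: no difference between variants
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se_pool = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se_pool
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    # Unpooled confidence interval for the lift (p_b - p_a)
    se = sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    ci = (p_b - p_a - z_crit * se, p_b - p_a + z_crit * se)
    return p_value, ci

def srm_p_value(n_a, n_b):
    """Chi-square test (1 df) that traffic really split 50/50."""
    expected = (n_a + n_b) / 2
    chi2 = (n_a - expected) ** 2 / expected + (n_b - expected) ** 2 / expected
    # With 1 degree of freedom, chi2 = Z^2, so the normal CDF suffices
    return 2 * (1 - NormalDist().cdf(sqrt(chi2)))

p_value, (low, high) = ab_test_results(900, 30000, 1050, 30000)
# Declare a winner only if p_value < 0.05 AND the interval excludes 0;
# a very small srm_p_value (below ~0.001) signals a broken experiment setup
```

A failed SRM check invalidates the whole test regardless of the p-value, because it means the randomization itself was biased.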
What to Test?
- CTA buttons (color, text, size)
- Headlines and product descriptions
- Images (lifestyle vs product shot)
- Forms (number of fields)
- Prices and discounts
- Navigation and UX
Tools
- Google Optimize — free, GA integration (shut down by Google in September 2023)
- VWO — powerful, visual editor
- Optimizely — enterprise level
Common Mistakes
- Stopping the test too early
- Testing too many changes at once
- Ignoring segments (mobile vs desktop)
- Lack of test documentation
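The first mistake — stopping early — can be demonstrated with a quick A/A simulation (all parameters here are illustrative). Both arms have the same true conversion rate, so any "winner" is a false positive by construction; checking the p-value at every peek and stopping at the first significant one inflates the false positive rate well above the nominal 5%:

```python
import random
from math import sqrt
from statistics import NormalDist

def z_test_p_value(succ_a, n_a, succ_b, n_b):
    """Two-sided pooled z-test for two proportions."""
    p_pool = (succ_a + succ_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (succ_b / n_b - succ_a / n_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

def simulate(n_sims=500, n_per_arm=2000, p=0.03, peeks=10, seed=1):
    """A/A test: compare 'stop at first significant peek' vs one final test."""
    rng = random.Random(seed)
    step = n_per_arm // peeks
    peeking_fp = final_fp = 0
    for _ in range(n_sims):
        a = b = 0
        rejected_at_some_peek = False
        for i in range(1, peeks + 1):
            a += sum(rng.random() < p for _ in range(step))
            b += sum(rng.random() < p for _ in range(step))
            if z_test_p_value(a, i * step, b, i * step) < 0.05:
                rejected_at_some_peek = True
        if rejected_at_some_peek:
            peeking_fp += 1
        if z_test_p_value(a, n_per_arm, b, n_per_arm) < 0.05:
            final_fp += 1
    return peeking_fp / n_sims, final_fp / n_sims

peek_rate, final_rate = simulate()
# peek_rate comes out noticeably higher than final_rate, even though
# both arms are identical — that excess is pure peeking error
```

This is why the pre-calculated sample size from step 3 matters: the decision rule must be fixed before the data arrive.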
Conclusion
A/B testing is a marathon, not a sprint. Create a testing culture in your team, document all experiments, and make data-driven decisions.