Multiple Testing

Multiple Testing is the statistical problem that arises when conducting several hypothesis tests or comparisons at once, which increases the probability of finding false positive results purely by chance.

Meaning & Context

When running multiple A/B tests concurrently or comparing multiple variants against a control, the risk of Type I errors (false positives) compounds with each additional comparison. For example, a 95% confidence level means a 5% chance of a false positive per test; running 20 independent tests gives roughly a 64% chance of at least one false positive (1 − 0.95^20 ≈ 0.64). Statistical corrections such as Bonferroni, Benjamini-Hochberg, or sequential testing methods control this inflation of the error rate.
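
A rough calculation makes the inflation concrete. The sketch below (plain Python, assuming 20 independent tests at a 5% significance level, figures chosen to match the example above) computes the family-wise error rate and the corresponding Bonferroni-adjusted per-test threshold.

```python
alpha = 0.05      # per-test significance level (95% confidence); illustrative assumption
num_tests = 20    # number of independent tests run at the same time; illustrative assumption

# Probability of at least one false positive across all tests,
# assuming the tests are independent: 1 - (1 - alpha)^n
family_wise_error = 1 - (1 - alpha) ** num_tests
print(f"Chance of at least one false positive across {num_tests} tests: {family_wise_error:.0%}")  # ~64%

# Bonferroni correction: divide alpha by the number of tests so the
# family-wise error rate stays at or below the original alpha.
bonferroni_alpha = alpha / num_tests
print(f"Bonferroni-adjusted per-test threshold: {bonferroni_alpha:.4f}")  # 0.0025
```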

Why It Matters

Multiple testing problems can lead optimization teams to implement changes based on spurious results, wasting development resources and potentially harming user experience or revenue. Without proper statistical corrections, the more experiments a team runs, the more likely they are to make incorrect decisions. Understanding and accounting for multiple testing is essential for maintaining the integrity of an experimentation program, especially for teams running dozens or hundreds of tests annually.

Example

A marketing team runs 15 simultaneous A/B tests on different page elements and finds three 'statistically significant winners' at p < 0.05. After applying a Bonferroni correction for multiple testing (a per-test threshold of 0.05/15 ≈ 0.0033), only one of the three remains significant, preventing the team from implementing two changes that likely had no real effect or could even have harmed conversions.
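
The sketch below walks through this example with a hypothetical set of 15 p-values (the values are made up for illustration, not data from the scenario): three fall below the naive 0.05 threshold, but only the smallest survives the Bonferroni-adjusted threshold of 0.05/15.

```python
# Hypothetical p-values for 15 simultaneous tests; the three below 0.05
# mirror the example's "significant winners" and are illustrative only.
p_values = [0.001, 0.021, 0.043, 0.08, 0.12, 0.19, 0.26, 0.31,
            0.38, 0.45, 0.52, 0.61, 0.70, 0.83, 0.91]

alpha = 0.05
bonferroni_threshold = alpha / len(p_values)  # 0.05 / 15 ~= 0.0033

naive_winners = [p for p in p_values if p < alpha]
corrected_winners = [p for p in p_values if p < bonferroni_threshold]

print(f"Nominally significant at p < {alpha}: {naive_winners}")
print(f"Still significant after Bonferroni correction (p < {bonferroni_threshold:.4f}): {corrected_winners}")
```

In practice a library routine such as statsmodels' multipletests (which also offers the less conservative Benjamini-Hochberg method) handles this adjustment, but the arithmetic above is all the Bonferroni correction does.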
