Enter your visitor and conversion numbers below to find out whether your test result is statistically significant.
Without statistical significance, you cannot tell whether your result is a real effect or random variation. Declaring a winner too early — before reaching significance — is called "peeking" and leads to false positives that can hurt your business when you ship the losing variant.
95% confidence means that if there were truly no difference between the variants, a result this extreme would appear by chance less than 5% of the time. In other words, if the change genuinely did nothing and you ran this test 100 times, fewer than 5 of those runs would show a difference this large.
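Under the hood, a calculator like this typically runs a two-proportion z-test on the two variants. A minimal sketch, using only the Python standard library (the function name and example numbers are illustrative, and the normal approximation assumes reasonably large samples):

```python
from statistics import NormalDist

def significance(visitors_a, conversions_a, visitors_b, conversions_b):
    """Two-sided two-proportion z-test. Returns (z, p_value)."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled conversion rate under the null hypothesis of "no difference"
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = (p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Example: 5% vs 6.2% conversion on 5,000 visitors per variant
z, p = significance(5000, 250, 5000, 310)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below 0.05 corresponds to significance at the 95% confidence level.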
Be careful. A small sample can still produce a "significant" result by chance. Always use the sample size calculator before starting your test to ensure you collect enough data. A result that reaches significance with only 200 visitors is suspect; one with 5,000 visitors is far more reliable.
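A sample size calculator answers "how many visitors per variant do I need before the test can detect the lift I care about?" Below is a sketch of the standard two-proportion sample size formula, assuming a two-sided test; the function name and defaults (5% significance, 80% power) are illustrative:

```python
import math
from statistics import NormalDist

def sample_size(baseline_rate, mde_relative, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect a relative lift (MDE)
    over the baseline conversion rate, at the given significance
    level (alpha) and statistical power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde_relative)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the test
    z_beta = NormalDist().inv_cdf(power)           # critical value for power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# Example: 5% baseline rate, hoping to detect a 20% relative lift
print(sample_size(0.05, 0.20))
```

Note how quickly the requirement grows as the expected lift shrinks: halving the detectable lift roughly quadruples the visitors needed, which is why small samples so rarely support reliable conclusions.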
If you have already hit your target sample size and the result is not significant, the most likely explanation is that the change had little or no real effect. Extending the test in the hope of reaching significance is a form of p-hacking and leads to unreliable conclusions. Consider redesigning the test with a bolder change instead.
Mida is 10X faster than anything you have ever considered. Try it yourself.