What is Minimum Detectable Effect (MDE)?
Minimum detectable effect is the smallest difference in performance between two test variations that your experiment is designed to reliably detect. It is set during the test planning phase and, together with your chosen significance level and statistical power, directly determines the sample size the test requires.
For example, if your current conversion rate is 3% and you set an MDE of 10% relative (meaning you want to detect a change from 3.0% to 3.3% or higher), you will need a specific sample size to confidently identify that difference. If you set the MDE at 5% relative (detecting a change from 3.0% to 3.15%), you will need roughly four times as many visitors.
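The arithmetic behind that example can be sketched with the standard two-proportion sample size formula. This is a simplified normal-approximation calculation (two-sided test at 95% confidence, 80% power, assumptions chosen here for illustration), not a substitute for a full power analysis:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_mde, alpha=0.05, power=0.80):
    """Visitors needed per variant for a two-proportion z-test
    (normal approximation, two-sided test)."""
    p1 = baseline
    p2 = baseline * (1 + relative_mde)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

n_10 = sample_size_per_variant(0.03, 0.10)  # detect 3.0% -> 3.3%
n_05 = sample_size_per_variant(0.03, 0.05)  # detect 3.0% -> 3.15%
print(n_10, n_05, round(n_05 / n_10, 1))    # halving the MDE ~quadruples n
```

Running this shows roughly 53,000 visitors per variant at a 10% relative MDE versus roughly 208,000 at 5%, a ratio of about 3.9x, which is where the "roughly four times" rule of thumb comes from.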
How MDE relates to sample size
The relationship is inverse and nonlinear: required sample size grows with the inverse square of the effect size, so halving the MDE roughly quadruples the required sample size. This is the fundamental trade-off in test planning:
- Small MDE (e.g., 2-5% relative) — Detects subtle improvements but requires very high traffic and long test durations. Practical only for sites with hundreds of thousands of monthly visitors.
- Large MDE (e.g., 15-30% relative) — Requires less traffic but can only detect large changes. Useful for radical redesigns or tests on lower-traffic pages.
- Moderate MDE (e.g., 5-15% relative) — The sweet spot for most eCommerce and SaaS sites, balancing sensitivity with practical test durations.
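The trade-off across these three tiers can be made concrete by converting sample size into test duration. A minimal sketch, assuming a hypothetical store with 20,000 visitors per week and a 3% baseline conversion rate (both numbers invented for illustration), using the same normal-approximation formula as above:

```python
from math import ceil
from statistics import NormalDist

def weeks_to_run(weekly_visitors, baseline, relative_mde, alpha=0.05, power=0.80):
    """Estimated test duration for a 50/50 split, two-sided z-test."""
    p2 = baseline * (1 + relative_mde)
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    variance = baseline * (1 - baseline) + p2 * (1 - p2)
    n_per_variant = z ** 2 * variance / (p2 - baseline) ** 2
    return ceil(2 * n_per_variant / weekly_visitors)  # both variants combined

# Hypothetical store: 20,000 visitors/week, 3% baseline conversion
for mde in (0.03, 0.10, 0.20):
    print(f"{mde:.0%} relative MDE -> ~{weeks_to_run(20_000, 0.03, mde)} weeks")
```

With these assumed numbers, a 3% MDE would take over a year, a 10% MDE about six weeks, and a 20% MDE about two weeks, which is why small MDEs are practical only at very high traffic levels.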
Why it matters for eCommerce and SaaS
MDE is the bridge between test ambition and test reality. Without setting it explicitly, teams either run tests that are too short to detect real effects (producing false negatives) or test for impossibly small effects that require months of runtime.
For eCommerce businesses, setting MDE correctly prevents two costly mistakes: (1) calling a test inconclusive when it actually had a meaningful but small effect, and (2) running a test for 3 months when a 2-week test would have been sufficient to detect the expected lift.
For SaaS businesses with lower traffic volumes, MDE planning is even more critical. A pricing page test on a site with 5,000 monthly visitors may only be able to detect effects of 20%+ relative — which means the test hypothesis should be bold enough to produce that kind of lift.
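You can also run the calculation in reverse: given the traffic you actually have, what is the smallest relative lift you could detect? A rough sketch, approximating the variance at the baseline rate; the 5,000 monthly visitors and 5% baseline signup rate are hypothetical inputs, not figures from the example above:

```python
from math import sqrt
from statistics import NormalDist

def smallest_detectable_lift(total_visitors, baseline, alpha=0.05, power=0.80):
    """Approximate smallest relative lift detectable with the available
    traffic (50/50 split, variance approximated at the baseline rate)."""
    n_per_variant = total_visitors / 2
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    absolute = z * sqrt(2 * baseline * (1 - baseline) / n_per_variant)
    return absolute / baseline

# Hypothetical SaaS pricing page: 5,000 visitors/month, 5% baseline signup rate
lift = smallest_detectable_lift(5_000, 0.05)
print(f"~{lift:.0%} relative lift detectable in one month")
```

With these assumptions, one month of traffic can only detect a lift in the region of 35% relative; running longer (or testing a bolder change) is the only way to close that gap.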
How to choose the right MDE
- Start with the business impact — Calculate what a given percentage lift in conversion rate would mean in monthly revenue. If a 5% relative lift equals $10,000/month, is that worth testing for?
- Check your traffic — Use a sample size calculator to see how long the test would need to run at your chosen MDE. If the answer is longer than 4-6 weeks, consider increasing the MDE.
- Match MDE to hypothesis strength — Bold changes (new layouts, different pricing structures) justify larger MDEs. Small tweaks (button colors, copy changes) may require smaller MDEs, which means they need higher-traffic pages.
- Account for seasonality — Ensure the test duration does not span a major traffic shift (holidays, promotions) that would contaminate results.
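The first step above, translating a lift into revenue, is simple arithmetic worth writing down. A minimal sketch with invented inputs (100,000 monthly visitors, 3% conversion rate, $80 average order value), assuming the tested lift holds site-wide:

```python
def monthly_revenue_impact(monthly_visitors, baseline_cr, avg_order_value, relative_lift):
    """Extra monthly revenue if the tested relative lift holds site-wide."""
    extra_orders = monthly_visitors * baseline_cr * relative_lift
    return extra_orders * avg_order_value

# Hypothetical store: 100,000 visitors/month, 3% conversion, $80 AOV
impact = monthly_revenue_impact(100_000, 0.03, 80, 0.05)
print(f"A 5% relative lift is worth ~${impact:,.0f}/month")
```

Under these assumptions a 5% relative lift is worth about $12,000/month, which gives you a concrete number to weigh against the test duration your traffic allows.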
How acceleroi approaches it
At acceleroi, we calculate MDE during the test planning phase for every A/B test. We pair it with a revenue impact estimate so clients understand the business significance of the effect we are testing for. This prevents the common trap of running tests that are statistically interesting but commercially irrelevant — and ensures we only commit test slots to hypotheses where the expected lift is worth detecting.
Related resources
- Get a free CRO audit to understand what effect sizes are realistic for your traffic levels
- Read our blog for guides on A/B test planning