Campaigns
Nov 28, 2025
How Long Should I Run My A/B Test?
The real question isn't "when is my test done?" It's "how confident do I need to be for THIS decision?"
The short answer: it depends on two things, how much data you have and how big your change is. But timing is only half the story. What really matters is how confident you need to be for this particular decision.
You're not waiting for a finish line. You're building confidence. And the level of confidence you need depends entirely on what's at stake and whether you can change your mind later.
What Actually Affects How Long Your Test Takes?
Two things determine timing: how much data you have and how big your change is
Think about it like surveying people for an opinion. If you ask 10 people and 9 say yes, you're getting a signal. Ask 1,000 people and 900 say yes, now you're confident.
But there's a second factor that often gets overlooked: how different are the things you're comparing?
If you're testing a 2% price change, that tiny difference is hard to spot. You'll need a massive amount of data to be sure the difference isn't just random noise. But if you're testing a 20% price change, that bigger swing shows up much faster. You'll see clear patterns with far fewer visitors.
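Want to put rough numbers on that? The standard two-proportion sample-size approximation does exactly this. Here's a minimal Python sketch, assuming a 2% baseline conversion rate (an illustrative number, not a benchmark) and the conventional 95% confidence / 80% power settings:

```python
# Rough visitors-per-variant needed to detect a relative lift,
# via the standard two-proportion z-test approximation.
# The 2% baseline conversion rate is an illustrative assumption.
from math import ceil, sqrt

def visitors_per_variant(base_rate, relative_lift, z_conf=1.96, z_power=0.84):
    """Approximate visitors per variant at 95% confidence, 80% power."""
    p1 = base_rate
    p2 = base_rate * (1 + relative_lift)
    pooled = (p1 + p2) / 2
    n = (z_conf * sqrt(2 * pooled * (1 - pooled))
         + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(n / (p2 - p1) ** 2)

print(visitors_per_variant(0.02, 0.02))  # 2% lift: ~1.9 million visitors
print(visitors_per_variant(0.02, 0.20))  # 20% lift: ~21,000 visitors
```

Same store, same confidence bar. The only difference is the size of the swing, and the data requirement drops by roughly a factor of ninety.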
It's like a weather forecast. When the meteorologist says "85% chance of rain," you pack an umbrella. You don't need 100% certainty to make a good decision. And if it doesn't rain, you just carry an umbrella for nothing. No big deal.
One more factor: time. Run your test for at least two weeks. This captures weekday vs. weekend shoppers, payday cycles, and different customer consideration windows. A test that runs only Monday through Friday misses weekend shoppers, whose buying behavior can be entirely different.
A More Useful Question to Ask
Instead of "when is it significant?" ask "how confident do I need to be?"
It's tempting to treat test results like an achievement to unlock. You check obsessively, waiting for a magic moment when your test suddenly becomes "done."
This mindset comes from a few common patterns:
Waiting for a "done" signal. There isn't one. Confidence builds gradually, like election returns coming in. Early results bounce around. Eventually they stabilize.
Looking for a magic number of visitors. There isn't one. How much data you need depends on how big your change is and how confident you need to be.
Thinking 95% confidence is required. That threshold comes from academic research and clinical trials, where the stakes demand it. Testing your prices isn't life or death.
Panicking when numbers fluctuate. Early results always fluctuate. That's normal, not broken. Keep the test running and watch for the pattern to stabilize.
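To see what "normal fluctuation" looks like, here's a toy simulation with made-up traffic numbers. Both variants convert at exactly the same true rate, so any observed "lift" is pure noise, and it still swings in the early days:

```python
# Two identical variants with the same true 2% conversion rate:
# the observed "lift" is pure noise, yet it swings early on.
import random

random.seed(7)  # fixed seed so the illustration is reproducible
TRUE_RATE = 0.02
DAILY_VISITORS = 500  # per variant; an arbitrary example number

conv_a = conv_b = visitors = 0
for day in range(1, 15):
    visitors += DAILY_VISITORS
    conv_a += sum(random.random() < TRUE_RATE for _ in range(DAILY_VISITORS))
    conv_b += sum(random.random() < TRUE_RATE for _ in range(DAILY_VISITORS))
    if conv_a:  # guard against a zero-conversion first day
        lift = conv_b / conv_a - 1  # same visitor count on each side
        print(f"day {day:2d}: observed lift {lift:+7.1%}")
```

Run it and the first few days can easily show double-digit "wins" or "losses" that aren't real; by week two the number drifts back toward zero. That's the stabilizing pattern you're waiting for.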
These patterns come from academia where the stakes demanded certainty. E-commerce isn't academia. You can change your prices back.
Why Good Enough Is Good Enough
Most business decisions are two-way doors. You can walk back through.
What matters is whether you can change this decision later. If yes, you don't need 95% confidence. 80% is often plenty.
Think about it this way. Getting from 0% to 80% confidence happens relatively quickly. Getting from 80% to 95% takes dramatically longer. You might wait weeks for those extra 15 percentage points.
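You can put rough numbers on that with a back-of-the-envelope calculation. Assume the observed rates exactly match the true rates (a simplification), and take a made-up scenario of a 2% baseline with a 10% relative lift; a normal approximation of the chance to win then gives:

```python
# Visitors per variant before "chance to win" reaches a target level,
# assuming observed rates exactly match the true rates (a simplification;
# the 2% baseline and 10% relative lift are illustrative assumptions).
from statistics import NormalDist

def visitors_to_reach(confidence, base=0.02, relative_lift=0.10):
    p1, p2 = base, base * (1 + relative_lift)
    z = NormalDist().inv_cdf(confidence)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return round(variance * (z / (p2 - p1)) ** 2)

print(visitors_to_reach(0.80))  # ~7,300 visitors per variant
print(visitors_to_reach(0.95))  # ~27,800 visitors: nearly 4x as many
```

In this scenario, the last 15 percentage points of confidence cost almost four times the data of the first 80.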
Every day you wait is a day you're not capturing the upside. Time has real value.
It's like baking. Taking bread out too early means it might not be done inside. But you can always put it back in. A price test is the same. If you make the wrong call at 80% confidence, you can change it back.
Ask yourself: Am I ready to decide?
How Do I Know When I'm Ready to Decide?
What's the worst case if I'm wrong?
A simple framework that replaces the "is it significant yet?" anxiety:
First, make sure the test has run for at least two weeks. This captures full weekly shopping patterns and exposes your test to different customer contexts. If you haven't hit two weeks yet, keep running.
Step 1: Look at your chance to win.
What's the probability that your variant beats control? If it's above 80%, you have a strong signal. (The sketch after Step 2 shows one way to estimate it.)
Step 2: Look at the range of possible outcomes.
Don't just look at the middle number. Look at the best case and worst case. Your analytics should show you this range.
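Here's a minimal sketch of Steps 1 and 2 together, using a Bayesian chance-to-win, one common way tools compute this number (your analytics may do it differently). The visitor and conversion counts are made-up example data:

```python
# Chance to win and the realistic range of lift, via Monte Carlo
# draws from Beta posteriors. All counts below are example data.
import random

random.seed(42)
control = dict(visitors=4000, conversions=80)   # 2.0% observed
variant = dict(visitors=4000, conversions=96)   # 2.4% observed

def posterior_draws(d, n=20_000):
    # Beta(1 + conversions, 1 + non-conversions): uniform prior
    return [random.betavariate(1 + d["conversions"],
                               1 + d["visitors"] - d["conversions"])
            for _ in range(n)]

a, b = posterior_draws(control), posterior_draws(variant)
lifts = sorted(vb / va - 1 for va, vb in zip(a, b))
chance_to_win = sum(l > 0 for l in lifts) / len(lifts)
worst, best = lifts[int(0.05 * len(lifts))], lifts[int(0.95 * len(lifts))]

print(f"chance to win: {chance_to_win:.0%}")
print(f"realistic range: {worst:+.0%} to {best:+.0%}")
```

With these example numbers you'd see a chance to win somewhere around 90%, a worst case of a mild drop, and a best case of a large gain: exactly the shape of result the next two steps ask you to judge.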
Step 3: Ask yourself: "Am I okay with the worst case?"
If the worst realistic outcome is a 2% decrease in profit, and your best case is a 15% increase, that's probably worth pursuing even at 75% confidence.
Step 4: Ask yourself: "Can I change this back if I'm wrong?"
Most pricing and shipping decisions are reversible. If you can undo the change, you don't need certainty.
Step 5: If yes to both, act.
Don't wait for certainty. Act when the downside is acceptable.
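If it helps to make that concrete, here's the whole framework as a tiny function. The 80% threshold and the -2% worst-case floor are illustrative defaults, not rules:

```python
# The decision framework above as code. Thresholds are illustrative
# defaults; tune them to your own risk tolerance.
def ready_to_decide(chance_to_win, worst_case_lift, reversible,
                    min_chance=0.80, acceptable_worst_case=-0.02):
    """Act when the signal is strong, the worst case is survivable,
    and there's a way back if you're wrong."""
    strong_signal = chance_to_win >= min_chance
    survivable = worst_case_lift >= acceptable_worst_case
    return strong_signal and survivable and reversible

# 87% chance to win, worst case -1%, and you can revert: act.
print(ready_to_decide(0.87, -0.01, reversible=True))   # True
# Same numbers but irreversible: hold out for more confidence.
print(ready_to_decide(0.87, -0.01, reversible=False))  # False
```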
Common Mistakes That Keep You Waiting Too Long
Perfect confidence is the enemy of good decisions
Waiting for 95% on reversible decisions. That's overkill. If you can change your prices back tomorrow, 80% confidence is often enough. Save 95% for decisions you can't undo.
Checking obsessively. Looking at your test five times a day won't make it finish faster. Results fluctuate, especially early. That's statistics working as expected, not something broken.
Testing tiny changes. A 2% price increase is nearly impossible to detect with statistical confidence. Bigger changes show up faster. If you want faster results, take a bigger swing.
Ignoring the cost of waiting. Every day your test runs is a day you're not capturing the full upside of the winning variant. Time has real value.
Only looking at the middle number. The single number ("conversion is up 5%") hides important information. Look at the range. Even if you're "losing" in the middle, the worst case might be totally acceptable.
Ending before two weeks. You need at least two full weeks to see how weekday vs. weekend shoppers behave, capture payday cycles, and account for different customer consideration windows. Cutting the test short means missing those patterns.
Stop Waiting. Start Deciding.
You might be checking your tests obsessively, waiting for an "unlock" moment that never comes. You want someone to tell you it's safe to act.
Confidence is a spectrum, not a finish line. The right threshold depends on the stakes, not some arbitrary academic standard.
Make confident decisions:
Check your confidence level. Aim for 80%+ on reversible decisions, higher for permanent ones.
Look at the range of outcomes. Best case to worst case, not just the middle.
Ask if you can change it back. Most pricing decisions are two-way doors.
Consider the cost of waiting. Time has real value.
Act when downside is acceptable. Not when you're 100% certain.
Don't wait for certainty. Know when you're confident enough!
Ready to stop waiting and start deciding? When you're ready to make confident calls on your pricing and testing, let's get you testing beyond what's typical.
Expert Guide
AB Testing
Analytics