When Should I End My A/B Test?

Your site’s audience is not static.

It’s risky to assume the A/B test sample you run today will be representative of your audience tomorrow.

A/B tests take a lot of ongoing management from marketers. You need to monitor your experiment, make sure the variations are working, and (perhaps most nerve-wracking of all) decide when to call a winner and end the test. A lot has been written about when to end A/B tests, and there are many opinions about the best approach.

We believe this is a question you shouldn’t need to answer in the first place. Why? First, your audience isn’t static. The “right answer” for today may be wrong for tomorrow’s audience. Second, A/B tests rely on samples that may not be representative of your entire audience.

Most A/B tests aren’t representative because your audience is constantly changing

A/B tests are designed to make inferences about audience behavior. They optimize for your site’s audience during the period of the test. However, your audience changes constantly, influenced by your marketing efforts, your competitors, seasonal effects, and other random factors. An A/B test takes a snapshot of your audience without regard for how that behavior might change over time. At best, you answer the question “What was the best option then?”

A/B tests also rely on the assumption that the audience sampled during the test is representative of your audience outside the scope of the test. Taking a valid sample of your audience requires attention to detail and a degree of judgment. For example, while you can calculate the sample size required for a valid test, your sampling window should span at least one full business cycle (such as a full week) so that it captures every type of behavior (weekday and weekend, for instance). However, you don’t want the sampling period to run too long, because that increases the likelihood of introducing nonrandom bias, such as a change to your marketing campaign. Every sample carries some risk of bias.
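To make that sample-size point concrete, here is a rough sketch of the standard calculation for a two-variant test, using the normal approximation for comparing two conversion rates. The function name and the example rates are illustrative, not taken from any particular testing tool.

```python
import math

def ab_sample_size_per_variant(baseline_rate, relative_lift):
    """Approximate visitors needed per variant to detect a relative lift
    in conversion rate at alpha = 0.05 (two-sided) with 80% power."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    z_alpha = 1.96   # two-sided z-score for alpha = 0.05
    z_beta = 0.84    # z-score for 80% power
    pooled = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * pooled * (1 - pooled))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# Example: a 5% baseline conversion rate and a 10% relative lift to detect
# works out to roughly 31,000 visitors per variant.
print(ab_sample_size_per_variant(0.05, 0.10))
```

Reaching that many visitors per variant is what forces the sampling window to stretch across days or weeks, which is exactly where the representativeness problems above creep in.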

3 ways predictive personalization eliminates the sampling problem

  • First, predictive personalization ensures that ideas are tested continuously. The system reacts automatically to changes in your audience’s behavior over time and keeps optimizing for the best performance even when your audience, and therefore the optimal answer, keeps changing. You don’t need to constantly monitor the experiment or regularly intervene, which frees you up to run more experiments, learn, and iterate to accelerate performance improvements.
  • Second, your ideas are run with your entire audience instead of a sample. This minimizes sampling error because you are continuously observing all visitors to your site. A predictive personalization system optimizes for conversions rather than for reaching statistical significance on a sample.
  • Finally, while assigning traffic randomly is the right approach when you are optimizing for statistical significance, predictive personalization intentionally shifts traffic toward your higher-performing variations. Performance is driven by a real-time view, across your entire audience, of which ideas are working better, and those better-performing ideas are shown more often, automatically (see the sketch after this list).
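Vendors rarely publish the internals of how that traffic shifting works, but a common building block for this kind of adaptive allocation is a multi-armed bandit. The sketch below uses Thompson sampling with made-up variant names; it illustrates the general idea, not the implementation of any particular product.

```python
import random

class Variant:
    """Tracks observed conversions for one idea (a Beta posterior in disguise)."""
    def __init__(self, name):
        self.name = name
        self.successes = 0  # converting visits
        self.failures = 0   # non-converting visits

def choose_variant(variants):
    # Thompson sampling: draw a plausible conversion rate for each variant
    # from its Beta(successes + 1, failures + 1) posterior and show the best draw.
    return max(variants,
               key=lambda v: random.betavariate(v.successes + 1, v.failures + 1))

def record_outcome(variant, converted):
    if converted:
        variant.successes += 1
    else:
        variant.failures += 1

# On every visit: pick a variant, show it, record what happened.
# Over time, better-performing variants win more draws and receive more traffic.
variants = [Variant("control"), Variant("new_headline")]
chosen = choose_variant(variants)
record_outcome(chosen, converted=True)
```

Because every visitor both receives a variation and contributes an observation, there is no separate test period to end.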

Instead of asking when you should end your A/B test, ask yourself the question, “Is my audience really static?” If you’re not sure, or the answer is “no,” predictive personalization may help you achieve better results than A/B testing.
