A/B testing is a statistical methodology for comparing different versions of something to determine which performs better. While this scientific approach to problem solving has been in use for nearly a century, the technique was adopted by marketers as early as the 1960s in direct marketing campaigns. With the rise of the digital era, A/B testing has surged in popularity, partly because launching and analyzing experiments online is relatively easy.
In the context of digital marketing, conversion-obsessed marketers will test different versions of webpages, email headlines, landing pages, ad copy and other user-facing online content to determine which performs better. There are many webpage components that can be tested for performance, including page layout, menu location, headlines, CTAs, images, fonts, colors, image sizes… the list is nearly endless. Results and conversions can differ significantly depending on the combination of elements and the audience.
Why A/B Test?
A/B testing provides a definitive, data-driven approach to determining which version of online content performs better, one that is both statistically valid and scientifically sound. In short, A/B testing takes the guesswork out of marketing, replacing subjective decision making with an objective framework for identifying winners and losers.
Because the guesswork has been reduced or eliminated, marketers and business managers can consistently improve the results and efficiency of their marketing efforts, or business operations, over time by employing a systematic approach to A/B testing. Often the result of ongoing A/B testing and experimentation is a dramatic improvement in marketing effectiveness and advertising ROI, and sometimes is the difference between the success and failure of a marketing campaign or even a business.
There are many A/B testing tools and software solutions today that can help with launching a successful program of experimentation. These testing tools can help marketers sift through the myriad of attributes, data, and options that can otherwise make A/B testing difficult or cumbersome. Their cost and complexity can vary tremendously depending on the size of your website and your firm’s needs.
In addition to learning and understanding the tools available for proper A/B testing, you’ll also need to have a solid understanding of the statistical principles that underpin all of A/B testing. Without a foundational understanding of how to statistically interpret results, you’ll likely encounter errors and make unreliable decisions. Let’s break down the three most important statistical terms you’ll become acquainted with along the way:
- Mean – The mean (or average) summarizes the typical outcome for each variation being tested. You’ll want to tabulate the mean click-through rate or conversion rate, depending on what you’re testing.
- Variance – The variance measures the variability of the data. The lower the variance, the more precise the sample mean; conversely, a large variance widens the confidence interval around the sample mean, making it a less reliable estimate.
- Sampling – For results to be statistically meaningful, the sample size must be large enough. If a test captures only a handful of interactions, the sample may be too small to reach statistical significance.
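The three terms above can be sketched in a few lines of Python. This is a minimal illustration using made-up numbers; it models each visitor as a Bernoulli outcome (converted or not), and the function name `summarize_conversions` is hypothetical:

```python
import math

def summarize_conversions(conversions, visitors, z=1.96):
    """Mean conversion rate, variance, and an approximate 95% confidence
    interval for one variation, treating each visitor as a Bernoulli trial."""
    p = conversions / visitors               # sample mean (conversion rate)
    var = p * (1 - p)                        # Bernoulli variance
    se = math.sqrt(var / visitors)           # standard error shrinks as the sample grows
    return p, var, (p - z * se, p + z * se)  # mean, variance, ~95% CI

# Illustrative numbers: 120 conversions out of 2,400 visitors
mean, var, (lo, hi) = summarize_conversions(120, 2400)
print(f"rate={mean:.3f}, variance={var:.4f}, 95% CI=({lo:.3f}, {hi:.3f})")
```

Note how the confidence interval depends on both the variance and the sample size: with the same conversion rate but only 240 visitors, the interval would be roughly three times wider.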
Determining the statistical significance of an A/B test is critical, because it is the statistical validity that gives A/B testing its prescriptive power. Without statistically significant results, marketers are at risk of making either Type I (false positive) or Type II (false negative) errors and misinterpreting the results of their tests.
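One common way to check significance, sketched here with illustrative numbers, is a two-proportion z-test: it asks whether the observed difference between two conversion rates is large enough, given the sample sizes, that it is unlikely to be chance. The function name `ab_significance` is hypothetical, and the default `alpha=0.05` is the conventional Type I error threshold:

```python
import math

def ab_significance(conv_a, n_a, conv_b, n_b, alpha=0.05):
    """Two-proportion z-test: is B's conversion rate significantly
    different from A's at the given alpha level?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF, via math.erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value, p_value < alpha

# Illustrative: A converts 100/2000 visitors, B converts 135/2000
z, p, significant = ab_significance(100, 2000, 135, 2000)
print(f"z={z:.2f}, p={p:.4f}, significant={significant}")
```

Declaring a winner before the p-value drops below alpha risks a Type I error (a false positive); stopping a test before the sample is large enough to detect a real difference risks a Type II error (a false negative).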
A/B Testing Challenges on Your Website
While A/B testing can play an integral, even essential, role in driving marketing results, it is important to acknowledge its inherent challenges, especially for websites; these challenges have led marketers to adopt more advanced ways of intelligently optimizing their website content.
- A/B testing takes too long. With A/B testing, you test one change at a time; running overlapping experiments can confound the results of both, undermining statistical significance and defeating the point of a controlled experiment. And, depending on the amount of traffic your website gets, results can take time – weeks, perhaps months – and far too often a clear winner is never identified. In a business environment that often demands results this week, A/B testing can fall by the wayside.
- A/B testing ignores smaller segments. A/B testing doesn’t consider each unique visitor, but rather groups visitors into large, randomly assigned segments. If one variation outperforms another 60% to 40% (as an example), the winning variation may be more effective for a majority of your website visitors, but there may still be a segment of your overall population that would respond better to the losing variation. A/B testing makes no allowance for serving different variations to different segments over time; instead, a winning variation is chosen and served to all future visitors.
- A/B testing takes a lot of work. A/B testing requires marketers to closely monitor metrics, measure improvements, and update the website with new testing variations. Making site changes often requires the resources of web developers, programmers, graphic designers, and possibly legal and regulatory approval groups within an organization.
- A/B testing misses opportunities. An A/B test gathers data by serving each variation – one that will (hopefully) win and one that will (presumably) lose – to 50% of your selected website traffic. While the experiment is gathering data, your website is missing out on conversions from the half of your audience viewing the eventual losing variation. Over time, these missed opportunities add up. And while A/B testing is better than not testing at all, it lacks the efficiency delivered by today’s machine learning and artificial intelligence solutions.
- Statistical complexity. Without a solid grasp of statistical principles, marketers can come to false conclusions by misinterpreting the results of an A/B test. Ensuring experimental validity, reaching statistical significance, and analyzing statistical confidence and power can be a high bar that trips up even the most experienced conversion optimizers.
Introducing Continuous Conversion™
Intellimize is pioneering new advancements in the industry by introducing Continuous Conversion™, a machine learning optimization approach that outperforms A/B testing. While A/B testing is cumbersome and slow, Continuous Conversion™ is fast and efficient, delivering head-turning results in days not weeks.
Intellimize uses machine learning to optimize each step of every buyer’s unique journey in real time, automatically adjusting web content as buyer behavior changes over time. The result is better conversion outcomes as much as 25x faster, empowering marketers to test more ideas, faster.
The Results of Continuous Conversion™
Intelligent website optimization can help improve lift and conversions for websites across a host of industries. Here are some use cases of how machine learning can power your website performance improvements.
- B2B – See how Snowflake drove a 49% uplift in meetings booked using Continuous Conversion™ for their website.
- B2C eCommerce – See how Stella & Dot achieved a 52% increase in shopping cart conversions by testing 400 combinations of their checkout page in just a few months.
Recommended Resources – Intelligent Website Automation vs. A/B Testing
Here are recommended resources on how and why Intelligent Website Automation outperforms A/B testing.
- Video: Which is better for me? A/B testing, rules-based personalization, or predictive personalization?