What is A/B testing?
…and how does it increase conversion?
A/B testing, or split testing, is an experimental marketing research method in which a control variant "A" is compared with a test variant "B".
The goal of the experiment is to determine which variant has the greater impact on business indicators. Such indicators can be: increased sales on the site, reader engagement, improved lead quality, and many others.
For example, during a test the audience visiting the site is split into two parts, usually 50/50. One part sees version A of the site; the other sees the same site with the tested change applied, which is version B.
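In code, such a split is often implemented by hashing a stable user identifier rather than flipping a coin on each visit, so a returning user always sees the same version. A minimal sketch in Python (the hashing scheme and function name are illustrative, not from any particular testing tool):

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically assign a user to variant A or B.

    Hashing the user ID (instead of random assignment per visit)
    keeps each user in the same variant across sessions.
    """
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100  # bucket in 0..99
    return "A" if bucket < 50 else "B"  # 50/50 split

# The same user always lands in the same group:
print(assign_variant("user-42"))
print(assign_variant("user-42"))
```

The modulo threshold also makes uneven splits easy, for example `bucket < 10` for a cautious 10/90 rollout.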
Why conduct A/B testing
For example, suppose we have an online store selling jewelry. We set up a sales funnel and drive traffic to the site, but our conversion rate to purchase is low: people add goods to the cart and then leave the site without buying anything. Looking for the problem, we hypothesize that the payment form on the website is too complicated and needs to be replaced. But how can we be sure that users are leaving precisely because of an inconvenient payment form? There may be other reasons: uncompetitive prices, bad reviews about the company, or something else. An A/B test will show whether changing the form has a positive effect on business indicators.
The point of an A/B test is to compare the control and the changed variant on real traffic and measure which one improves the target metric.
What is tested using A/B tests
- conversion of landing pages;
- new design and layout elements on the website;
- site navigation;
- conversion of headings and subheadings in advertisements;
- feedback form;
- call-to-action buttons, for example, "Subscribe to the newsletter" or "Sign up";
- newsletter subject lines;
- advertising campaigns (creatives, formats, targeting);
- elements within the product;
- system panels;
- onboarding screens.
Conversion rate
In A/B testing, we measure the ratio of the number of site visitors who performed a certain target action to the total number of site visitors. The coefficient is expressed as a percentage.
Which target actions to count depends on the test objectives. If the experiment's hypotheses are confirmed, A/B tests help optimize conversion.
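The calculation itself is simple; a small Python helper (function name and figures are illustrative):

```python
def conversion_rate(conversions: int, visitors: int) -> float:
    """Conversion rate: percentage of visitors who took the target action."""
    if visitors <= 0:
        raise ValueError("visitors must be positive")
    return 100 * conversions / visitors

# Example: 38 purchases out of 1,200 visitors
print(f"{conversion_rate(38, 1200):.2f}%")  # 3.17%
```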
When not to do A/B testing
There are situations where A/B testing will not work. Here is a list of recommendations from Octoix on this topic:
Don’t conduct A/B testing when there is no significant traffic yet
Don't jump into the deep end of A/B testing while the water is still ankle-deep, that is, while there is no significant user traffic. To identify the preferences of the average user, you need a statistically significant sample. Without the traffic to obtain adequate data, A/B tests will not show reliable results.
Don’t conduct A/B testing if there is no reference point
Let's say we recently launched a landing page and there is no data on the necessary metrics yet. Or we have no metric data because no counters were installed on the site (yes, this happens too), so we cannot establish a reference point for the A/B test.
To avoid missing important indicators, testing in this case would have to run for several months, which contradicts the essence of A/B testing: A/B tests are a tool for implementing improvements quickly.
If there is no data on the metrics that reflect the current situation, there is no point in A/B testing.
Don’t A/B test minor changes
If the change is small and insignificant, we won't even be able to interpret the result of the experiment, or we will be forced to run additional tests to collect data, spending time and resources on, say, moving a button a couple of pixels in the interface.
Don’t A/B test if there is no well-founded hypothesis
Conducting A/B tests should be treated as a real science — a good scientist never starts an experiment without a well-developed hypothesis.
If the hypothesis is raw, it is better to go back to finding the problem. In practice, the pain point may not be where we think it is. The answer to the question of how to improve the product may not necessarily be in the A/B testing method.
Don’t A/B test if you know for sure that the improvements will work
A/B tests should be skipped when there is confidence that the change project will almost certainly improve the product, and the risks associated with implementing the idea are small. In this case, we move on to implementation.
Step-by-step A/B testing
Let's break the work with A/B testing into stages, using the example of a home-buying company.
Step 1. Define the problem
Clearly describe what goal we are pursuing. Most often, businesses are concerned about customer churn at some stage of the sales funnel.
Step 2. Formulate a hypothesis
Based on the problem, we build a hypothesis.
A hypothesis is an assumption about how the state of the product can change if one of its elements is changed. A hypothesis specifies a solution that will change the situation, as well as the indicators that will improve as a result of the change.
Step 3. Choose one main metric that we will measure
In A/B testing, different metrics are used, depending on the hypothesis: increase in purchases, orders, increase in average check, percentage of user return, first purchase conversion, and much more.
For the home-buying company, we choose the number of submitted applications as the A/B testing metric.
Step 4. Choose a single test variable
In A/B testing, you can test headlines, layouts, or images. It is important to choose only one element to test. You may be tempted to change several elements, but this will greatly dilute the test results.
Step 5. Create separate pages A (“control”) and B (“challenger”)
To test the hypothesis, you need one unchanged page (A) and one page with a changed element (B).
Step 6. Create equal random testing groups
User testing groups are randomly selected and, as a rule, have the same size and general demographic characteristics. An A/B test divides the audience into two groups 50/50.
Thanks to the random sampling, each user has an equal chance of seeing either version A or version B. The test audiences should not be aware that an A/B test is being conducted, as this can subconsciously influence their reaction.
An important condition: a single Google Analytics counter and a single goal for recording conversions must be installed on both variants under test (for example, on both the site and the quiz).
Step 7. Calculate the number of users for testing – make samples
The sample size should be large enough to provide statistically significant data on the audience's reaction.
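One common way to estimate the required sample is the standard two-proportion sample-size formula. A Python sketch, using only the standard library (the baseline and target conversion rates below are hypothetical examples):

```python
from math import sqrt
from statistics import NormalDist

def sample_size_per_variant(p1: float, p2: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Visitors needed per variant to detect a lift from rate p1 to p2.

    Standard two-proportion formula: alpha is the significance threshold,
    power is the probability of detecting the effect if it is real.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return int(numerator / (p1 - p2) ** 2) + 1

# To detect a lift from 3% to 4% conversion at 95% significance, 80% power:
print(sample_size_per_variant(0.03, 0.04))
```

At these rates the answer runs into the thousands of visitors per variant, which is why low-traffic sites struggle with A/B testing.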
Step 8. Determine the statistical significance of the experiment
Statistical significance is the degree of confidence that the observed difference was not a simple coincidence. The significance level is chosen manually, depending on the importance and complexity of the experiment.
Significance levels of 90%, 95%, and 99% are often used; 95% is the conventional choice. A 95% level means accepting up to a 5% chance that the observed difference is random. With a large enough group of users, we can determine with high confidence what the average user prefers.
Step 9. Conduct A/A testing
Before the A/B test, an A/A test is conducted to check the user groups’ homogeneity, the test settings, and the initial conversion. In the A/A test, version A is compared with version A.
This is how we check the testing tool to exclude technical errors. If the A/A test shows differences, and we know there should be none, we need to revisit the test settings. We check:
- whether the data in the service match the web analytics data (number of visitors, conversions);
- that both variants load at the same speed: swapping in a variant, even an identical one, slows page loading and affects conversion;
- that both pages look the same on all devices and in all browsers.
If the A/A test does not show a winner, the settings are fine and you can run the A/B test.
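As a toy illustration, an A/A check can be simulated by splitting one audience with a fixed underlying conversion rate in half and comparing the halves; the gap should stay within random noise (all numbers here are made up):

```python
import random

# Simulate 10,000 visits with a true conversion rate of 3%,
# then split them alternately into two "identical" groups.
random.seed(7)
visits = [random.random() < 0.03 for _ in range(10_000)]
group_a = visits[::2]
group_b = visits[1::2]
rate_a = 100 * sum(group_a) / len(group_a)
rate_b = 100 * sum(group_b) / len(group_b)
print(f"A: {rate_a:.2f}%  A': {rate_b:.2f}%  "
      f"gap: {abs(rate_a - rate_b):.2f} p.p.")
```

If a real A/A test shows a gap much larger than this kind of sampling noise, the splitting or tracking setup is suspect.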
Step 10. Launch the A/B test
To get a clear idea of the results of the A/B test, both variants should be tested simultaneously, provided that the sample size of visitors is the same.
Suppose we launch different variants one after another. In that case, we will not know whether the results are related to changes in content or simple fluctuations in interest due to the season or other reasons.
Step 11. Give the test enough time to yield results
We recommend checking the test for errors after 1–2 days, but do not evaluate the results yet, since they do not contain enough information. Google, for example, recommends running tests for at least two weeks.
Step 12. View the results in an analytics service, such as Google Analytics
Step 13. Analyze the test data
Draw your own conclusions based on the hypothesis. Consider proxy metrics, that is, indicators that also changed following the main metrics.
What to change during A/B testing
Nothing: you cannot make changes while the test is running!
In the final stage, we will track the reliability and effectiveness of each option compared to the other and analyze the results.
Assess the reliability of the data obtained
Determine whether the results of the A/B test are due to changes in option B or randomness.
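A standard way to make this call is a two-proportion z-test on the two groups' conversion counts. A self-contained Python sketch (the visit and conversion counts are hypothetical):

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a: int, n_a: int,
                          conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis "A and B convert equally"
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Example: A converts 150/5000 (3%), B converts 200/5000 (4%)
p = two_proportion_z_test(conv_a=150, n_a=5000, conv_b=200, n_b=5000)
print(f"p-value = {p:.4f}")
if p < 0.05:
    print("Difference is statistically significant at the 95% level")
```

A p-value below 0.05 corresponds to the 95% significance level discussed above; a larger p-value means the observed gap is plausibly just noise.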
Look at the effectiveness of the changes
For example, the A/B test shows that the actual conversion rate increased by 1%, as we assumed in the hypothesis. The chosen change improved the metric, and the hypothesis is confirmed. But the opposite also happens, when the change does not affect the key metric. In that case we conclude that the hypothesis was not confirmed and collect all the results for further analysis.
Analyze the data and make the necessary decision
Now you need to decide: make all the changes or test a new hypothesis.
Documenting the results and highlighting the positive and negative aspects is important. These documents will form the basis for conducting quick and productive brainstorming sessions in the company.
Conclusion
A/B testing helps data-driven companies make decisions. As a rule, test results play an important role when deciding whether to change a new product's design or adjust the parameters of a business strategy. A/B testing brings designers, developers, and business owners together to decide how to make the product better for the client, and it confirms intuitive guesses with real numbers.