A/B testing, also known as split testing, is an experiment used by brands and webmasters alike to test whether changes to a page can improve its performance. Essentially, the test involves duplicating a webpage: keeping one page the same and making changes to the copy. The original version (the control) and the new version (the variant) are served to users at random, in equal measure, to determine whether the changes improve the page's overall effectiveness. In this article, we'll explore the testing process in more detail.
From the outset, it's important to understand which areas of your website need optimising. If you're at the point of considering A/B testing, you may already have an idea of which areas these are. If not, almost any web analytics platform (e.g. Google Analytics) can provide the insight to highlight which areas need improving.
To focus your data gathering, concentrate on pages with high traffic volumes, as this is where you will see the most benefit. Look out for things like low conversion rates or high bounce/abandonment rates on high-traffic pages and start there.
Once you've identified which page to test and the metric(s) you're hoping to improve, you need to decide which element on the page may influence that metric.
Here at Zazzle Media, we call this the PPM method: Problem / Proposal / Measure.
If you're looking to improve the click-through rate (CTR) of a call to action or button, this is the Problem you're trying to resolve. To fix it, you Propose that a change in colour, size or position on the page will improve performance. Once you've identified the Problem and your Proposal to improve it, you also need to outline how you will Measure the performance of the change.
Problem: Click-through rate on ‘order now’ button is only 2%
Proposal: Change the colour of the button to green
Measure of success: CTR increases
This now forms the foundation of your hypothesis. There is no limit to how many hypotheses you can have, and many will overlap.
Once you have a clear understanding of which hypotheses you would like to test, prioritise the tests likely to have the most significant impact. That could mean your most valuable revenue-driving page, your highest-traffic page or your biggest entrance page – whichever aligns best with your business goals.
Simply rank them in order, and keep hold of any that don't make the cut. Keeping a list of all potential experiments is important, as you may decide to return to them at a later date.
The next step is to create the experiment defined in your hypothesis. You will need a specialist A/B testing platform, so at this point do some research to find out which is best tailored to your needs. Google Optimize, Optimizely and VWO are all great options, each with an excellent visual editor that lets you easily duplicate pages and make changes to your variations.
Whilst you're creating your experiment, you can also configure other settings, such as which devices to target or how to split the traffic (e.g. 70% to the control and 30% to the variant).
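To make the idea of a custom traffic split concrete, here is a minimal sketch in Python – the function name and the 70/30 split are illustrative, not part of any testing platform's API. Hashing the user ID, rather than calling a random number generator on every request, keeps each visitor in the same group across sessions:

```python
import hashlib

def assign_group(user_id: str, variant_share: float = 0.3) -> str:
    """Deterministically assign a visitor to 'control' or 'variant'.

    variant_share=0.3 reproduces a 70% control / 30% variant split.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 16**8  # uniform value in [0, 1)
    return "variant" if bucket < variant_share else "control"

# The same visitor always lands in the same group on every visit:
assert assign_group("visitor-42") == assign_group("visitor-42")
```

Over a large number of visitors, the observed proportions converge on the configured split.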
If you are using a tool like those mentioned above, you'll be able to run and observe the experiment on that platform, and keeping everything in one place makes for an organised testing period. It's crucial to keep the test running long enough to serve enough people (traffic), as this produces a strong sample of data that makes the results reliable and representative – this is known as statistical significance. To read more, Optimizely has a great description in its Glossary.
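As an illustration of what a testing platform computes behind the scenes, one common check is the two-proportion z-test. The sketch below (plain Python, standard library only, with hypothetical conversion numbers) compares conversion counts for the control and the variant and returns a two-sided p-value:

```python
from math import erf, sqrt

def z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test.

    conv_*: number of conversions; n_*: number of visitors served.
    Returns (z_score, p_value); p_value < 0.05 is a common
    threshold for calling a result statistically significant.
    """
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_b - rate_a) / std_err
    # Normal-approximation p-value via the error function
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: 2.0% vs 2.6% conversion over 10,000 visitors each
z, p = z_test(200, 10_000, 260, 10_000)  # p < 0.05: unlikely to be chance
```

With a smaller uplift or fewer visitors, the p-value stays above 0.05 – which is exactly why a test must run long enough before you call a winner.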
Now for the fun part: once you have reached statistical significance, you can analyse the results. In an ideal world, the results will confirm your goals, i.e. the variant has shown an improvement in CTR. At the very least, the test will point you towards other areas that could benefit from testing.
If you have seen a positive impact from the changes you have made then you may decide to roll this out to other areas of the website that you think could benefit from the same treatment.
It's important to add that A/B testing should not be a one-off project – it's an ongoing activity to squeeze out every conversion improvement opportunity across the site. Even the smallest margins can make a big difference to the bottom line (as seen in some of our case studies), and the possible changes are almost endless.
The overall objective of A/B testing is to make it easier for a user visiting your website to follow the journey you want them to take. The potential benefits of split testing are therefore huge, and ultimately driven by your internal business objectives.
To see the potential benefits of A/B testing in action, take a look at an important project we did with one of our clients, Molly Maid. The brief was to increase enquiries and the results were fantastic – you can read more on this here.
Whilst A/B testing focuses predominantly on the user's experience – testing anything, large or small, to squeeze out extra conversion points – it is important to always keep SEO at the forefront of your mind.
Below are nine issues you must consider when running an A/B test in order to avoid penalties and preserve hard-earned rankings.
As with any test, an A/B test is only successful when it achieves statistical significance, i.e. it is highly probable that the results would occur again. Whilst the time required to reach statistical significance will vary depending on a number of variables, one thing is certain: once you have gathered enough information, you must end the test.
Once your data supports a winning variation, remove all variations and implement the winner on the original URL. If no clear winner is found within a set time, turn the test off and leave the original URL alone. Running a lengthy split test can start to appear manipulative to Google, which risks a rankings penalty.
When running an A/B test, a user will typically land on the original URL only to then be redirected to a new URL variation (if they fall into one of the test groups) – this is standard practice as you typically want to preserve the original URL as your control.
Always use a 302 (temporary) redirect when redirecting users in the test pot. A 301 redirect signals to Google that the original page has been removed and replaced by the test page; this is not suitable for A/B testing, as the test page is only temporary and will be deleted once the test has concluded. A 302 redirect tells Google the move is temporary, so the original page retains its link equity and its indexation status throughout the testing period.
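As a minimal server-side sketch (the URLs and the test-group header here are hypothetical, and in practice your testing platform handles this for you), a WSGI-style handler issuing the temporary redirect might look like this:

```python
VARIANT_URL = "/order-test-b"  # hypothetical temporary test page

def app(environ, start_response):
    """Serve the control page, or 302-redirect users in the test pot.

    How a visitor gets flagged as 'variant' (cookie, bucketing, etc.)
    is assumed to happen upstream; here it is read from a header.
    """
    if environ.get("HTTP_X_TEST_GROUP") == "variant":
        # 302 Found = temporary: Google keeps the ORIGINAL URL indexed
        start_response("302 Found", [("Location", VARIANT_URL)])
        return [b""]
    start_response("200 OK", [("Content-Type", "text/html")])
    return [b"<h1>Order now</h1>"]
```

Swapping `"302 Found"` for `"301 Moved Permanently"` here is exactly the mistake to avoid: it would tell Google the original page is gone for good.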
Whilst calls to action, button text and colours are all suitable testing variables, altering H1s, body text or other on-page ranking factors can have a negative effect on your pages' organic rankings.
Best practice is to focus your testing on underperforming pages. Making wholesale changes to your high-value pages is testing suicide, for the obvious reason that a failed test will cost you a large number of leads and conversions. You might think that testing H1 tag variations could raise your conversion rate by a fraction of a percent, but that won't matter if you lose rankings and, with them, traffic. It is estimated that for every position lost in the rankings, your organic clicks halve.
If in doubt, leave a high-value page that is already perfectly optimised for SEO alone.
Showing one version of your website to search engines and another to regular visitors is called cloaking, originally a black-hat technique. Whilst it may be tempting when testing – perhaps showing the original version to crawlers to try to maintain organic rankings – cloaking is in direct breach of Google's webmaster guidelines and should never be implemented, regardless of whether you're running a test.
When running a test that displays multiple variations of a single webpage, make sure you're not segmenting your traffic based on user agent. Ensure that Googlebot has the same experience as a regular visitor to avoid any nasty penalties.
To ensure the original page remains the sole indexed page within the test group, just implement the next two strategies.
To avoid a duplicate content issue, you will want to tell Googlebot not to index the newly created test pages. This can be done either by adding a noindex directive to the head of each individual test page, or by adding a directive to the site's robots.txt file blocking the test group of pages from being crawled.
When A/B testing, however, you should always avoid referencing test pages in your robots.txt file. Googlebot will not crawl the blocked pages whilst crawling your website, but a blocked URL can still be indexed if inbound links to it are created elsewhere – robots.txt prevents crawling, not indexing.
Tests do not occur in a vacuum, and inbound links can be created without your knowledge. For example, someone attempting to access the original page may be redirected to the test page; without thinking, they bookmark the URL or add it to their blog. That link can then be followed by a search spider, causing the test page to be indexed.
Adding a noindex meta robots tag to the head of the test page will prevent this: no matter which path the robot takes to arrive, it will read the on-page directive and skip indexation. The more specific directive is always followed, so a page-level noindex tag will be obeyed even in cases where a site-wide robots.txt rule would be bypassed by an inbound link.
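For reference, the tag itself is a single line in the head of each test page – and only the test pages, never the original:

```html
<head>
  <!-- On every test-page variation only, never on the original page -->
  <meta name="robots" content="noindex">
</head>
```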
Meta robots should not be used in isolation, however. You may specify that all test versions of the page be noindexed, yet Google may mistakenly interpret the original page as a duplicate and throw all versions out of the index.
You want to minimise the amount of repetitive content on your site. Pages of near-duplicate content could, at worst, result in a rankings penalty, or at best cause Google to choose which page to index over the other. This could be very damaging to your previous SEO efforts, as the new test page could become indexed in place of your original page. Imagine spending months gaining quality links to a page, only for them to be wasted due to improper test planning!
To prevent this, indicate your preferred version of the page with the rel canonical tag. It must be placed on every test page and must point back to the original URL. This consolidates your rankings to the original page, maintaining your previous SEO efforts.
Using the rel canonical tag on all variations helps Googlebot understand that the test URLs are near duplicates of the original URL and do not require indexation. Rel canonical works in tandem with your noindex meta robots tag to ensure only the original page is indexed.
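For reference, the canonical tag sits in the head of each variation alongside the noindex tag, pointing back at the original URL (the domain below is a placeholder):

```html
<head>
  <!-- On every test-page variation, pointing back to the original URL -->
  <link rel="canonical" href="https://www.example.com/original-page/">
  <meta name="robots" content="noindex">
</head>
```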
This point follows on from point one. Once you have achieved statistical significance, you might be tempted to serve the winning version to 100% of users for a period of time, just to reinforce your test results. Doing this will cause all users attempting to visit the original page to be redirected to the test URL.
This is never recommended, as it may cause Google to index the test page even if you have followed the correct meta robots and canonical practices. If the test page is indexed, it probably won't have the same ranking potential as the original (fewer links, not linked directly from internal pages). And when you eventually remove the test page and implement the winning features on the original, neither page will rank as the original once did, and you will lose rank as a result.
Once a test has reached statistical significance, implement the winner on the original URL and delete all traces of the test pages from your site, 301 redirecting the test page URLs back to the original page.
Once you have found your winning version, update the original page accordingly, then erase all digital traces of the losing versions immediately. This will prevent Google from finding one of them at a later point and indexing it. 301 redirect the dead pages back to the winning version to ensure Google never finds any trace of the failed test URLs.
Whilst it is common to test on-page factors to increase conversion, you cannot A/B test your on-page attributes for SEO. There is only one Googlebot, so you cannot split its visits between two versions of a page; doing so will only cause Googlebot to see the pages as near duplicates and throw one (or both) out of the index. Even ignoring this issue, Googlebot considers the age of the page, its prominence within the site's internal linking structure and its current performance as part of its assessment. All of these factors make it impossible to perform a traditional A/B test on the SEO elements of a page.
The above demonstrates the importance of always keeping SEO at the front of your mind. Even a task such as CRO, which does seem very distinct from SEO, must always consider the impact it will have. The big takeaway from this is to always inform your SEO team or agency whenever you are conducting A/B testing or, better still, have them involved from the outset.