Every fibre of your being tells you that your split test results are wrong and the variation you’ve been running should be converting better than the original.
You’ve not just changed some small bits of copy or reorganised pre-existing content; you’ve built something brand new or added a new feature to help your users. But your variation is failing, and failing hugely. So what can you do next?
Your test has failed, so give up and move on, right? That’s fine if you’re testing something simple like a new Add to Basket button, but it’s not really an option if you’re testing something more complex like a brand new feature.
There are times we get our research wrong, misjudge our audience and give them something they don’t use or want. We could walk away at this point and accept the loss, but what happens on your next split test?
Having a test fail suggests your research might be lacking, and if you just give up and move on to the next test you could be doomed to make the same mistakes again, based on the same ineffective research.
Instead you need to review the failed variation and try to understand what caused the test to fail. Only then can you learn from it and let those lessons inform your future tests, which might include re-testing your failed attempt.
Conversion rate is calculated as the number of conversions divided by the number of unique visits, expressed as a percentage. So 1 conversion in every 100 visits gives a conversion rate of 1%. Simple.
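As a quick sketch of that calculation (the function name is mine, not from any particular analytics tool):

```python
def conversion_rate(conversions, unique_visits):
    """Conversion rate as a percentage of unique visits."""
    if unique_visits == 0:
        return 0.0  # no visits yet, so no meaningful rate
    return conversions / unique_visits * 100

# 1 conversion in every 100 visits gives a 1% conversion rate
print(conversion_rate(1, 100))  # 1.0
```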
If either of these figures is inflated in any way by you or a client then the results of the split test will not be accurate. Internal staff visiting the website might have only a small effect on the final results, but if the website is used for internal ordering, such as handling telephone orders, then those extra conversions landing on a single variation could have a large impact.
Fixing the problem might be as simple as not including the test code when the website is viewed from a specific IP address, but this requires those users to have a fixed IP, which isn’t always the case. Alternative methods include constructing some form of URL that identifies an internal user, or using an existing member account to the same effect.
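A minimal sketch of that kind of exclusion check, combining all three approaches. Everything here is an assumption for illustration: the IP addresses, the `internal` query parameter and the `member_role` cookie are invented names, not part of any split testing service’s API.

```python
# Hypothetical office IPs that should never enter the split test.
INTERNAL_IPS = {"203.0.113.10", "203.0.113.11"}

def should_run_split_test(ip, query_params=None, cookies=None):
    """Return True if this visitor should be entered into the split test."""
    query_params = query_params or {}
    cookies = cookies or {}
    if ip in INTERNAL_IPS:
        return False  # fixed-IP exclusion
    if query_params.get("internal") == "1":
        return False  # staff arrived via a special internal URL
    if cookies.get("member_role") == "staff":
        return False  # logged in with an internal member account
    return True
```

The advantage of checking all three signals is that staff on a home connection (no fixed IP) can still be excluded via the URL or their account.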
Sometimes sourcing a few more opinions might be all it takes to shed some light onto the problem. As website owners and creators we can become a little blinded to websites we’ve worked on for some time. While experience gives us knowledge and insight that’s invaluable it can be difficult to have the same perspective as the consumer.
Asking people with less experience of the website for help might uncover some issues, and it’s always worth doing before shelling out on paid-for services. You might just find the answer is something simple and identify the problem much more quickly.
It’s obvious, but looking at how your users interact with the page could reveal the answer. This sort of data is often gathered in the research phase before a split test is created, but if your test is set up right you may be able to look at data specific to the new variation rather than the original website.
Analytics data, especially referral information, will give insight into why people came to the page; bounce rates and time on page show whether they stuck around to digest the content; and page speed will show whether something as simple as a slow page load might be holding you back.
Heatmaps and scrollmaps show how users have moved around the page and which elements they click. You might find that an element you want them to click isn’t getting any attention at all, and instead users are navigating away from the page towards the home page or the search bar, which can be an indication that they’re lost.
Analytics and heatmaps produce data, and lots of it. But it’s quantitative data: all numbers and measurements, with a human needed to interpret it, observe trends and make judgements about what it means in a given situation.
Usability tests, on the other hand, provide qualitative data: observations that carry more meaning. It often comes in lower quantities, so it’s difficult to plot trends, but it does give an insight into the thoughts of the consumer. There are a few different types of usability tests, but mostly we’d use the 5 Second Test and User Testing Videos.
With the 5 Second Test you show an image for 5 seconds and the tester answers some follow-up questions. It usually gets you more results for your money, but the testers don’t interact with your website, so if your split test involves more than a new look this form of usability testing isn’t likely to offer much insight.
User Testing Videos often mean sitting for a few hours watching people navigate through a website, tripping up on things you take for granted. It’s a great way of testing a website and something to consider doing regularly as the website evolves. It’s fascinating to watch how people use your website, but remember that if one person has a problem it doesn’t mean your whole audience experiences the same issue. You have to use your own expertise to interpret what’s happening and prioritise any issues.
As a side note, getting the brief right for these kinds of tests is important. Keep it short and don’t try to lead the user too much; let them do their own thing, so long as you make sure they visit any pages you want them to view. Remember, people have a habit of ignoring instructions, so keep it simple.
Now you’ve had a chance to research how your users are experiencing your tested variation, and you’ve made sure you and your clients aren’t skewing the results, you’ll have a very easy decision to make on what to do next.
Either you’ve identified a problem, fixed it and can re-run the split test, or you’ve found that the test wasn’t giving users what they wanted and there’s no advantage in continuing with the original hypothesis your split test was based on. Either way you’ve learnt more about the website and its users, and can move on to other split tests with greater insight.
But on some rare occasions you might find that the test failed and none of the extra research provides a good reason why. You might want to run the test again, and for longer, to be really sure, but if that doesn’t give a positive result it’s time to move on and see if any of the new data you’ve gathered helps you come up with split tests in other areas.
Don’t rush into a decision about whether a variation has won or lost. Many split testing services will let you know when a variation has won and the numbers are large enough to back it up.
If your split testing software shows a winner within the first couple of weeks of the test, don’t listen to it. Test results can vary wildly when not much data has been collected, and two weeks is rarely long enough unless you’re testing a high-traffic website.
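To see why early numbers mislead, here’s a rough two-proportion z-test: a standard statistical check, sketched with only Python’s standard library rather than taken from any split testing tool. The same underlying difference in conversion rates looks like noise at low traffic and only becomes convincing with more data.

```python
import math

def z_score(conv_a, visits_a, conv_b, visits_b):
    """Two-proportion z-score; |z| >= 1.96 is roughly 95% confidence."""
    p_a = conv_a / visits_a
    p_b = conv_b / visits_b
    pooled = (conv_a + conv_b) / (visits_a + visits_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visits_a + 1 / visits_b))
    return (p_b - p_a) / se

# The same 3% vs 4% conversion rates:
print(abs(z_score(30, 1000, 40, 1000)) >= 1.96)      # False: not significant yet
print(abs(z_score(300, 10000, 400, 10000)) >= 1.96)  # True: now convincing
```

A variation that is genuinely one percentage point better still can’t be separated from chance at a thousand visits per variation, which is why a “winner” declared in week one deserves suspicion.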
I’ve already mentioned testing a brand new feature rather than remixing or redesigning something that already exists. If you’re not seeing positive results, or only small increases, be bolder in what you’re testing.
It’s true that in certain cases there are big uplifts in conversion to be had from small alterations in design or copy, and small increases in conversion rate can have big benefits for websites with big turnovers. But in some cases, especially with smaller websites, adding a new page or feature that gives users more choice or enhances the user experience can be a better way to approach split testing.