Here’s how to conduct a Conversion Rate Optimisation (CRO) test like a real scientist. With this approach, you can assure your client that you’re conducting the right tests in the right way, to get the best return on their investment.
- Look for clues
- Make a hypothesis
- Choose a test type
- Results – draw a conclusion
- Try to disprove the hypothesis
- If you can’t disprove the hypothesis, accept it for the time being
- Repeat, Evolve, Implement
But before we get onto the computery stuff, here’s a bit of (morbid) scientific history to help illustrate why you need a problem solving plan to formulate the best conversion model.
The Problems of Medicine in the 1800s
In the 1800s, medical professionals struggled with three main problems which caused lots of people to die horribly. I learned in my GCSE history class that these were:
- The Problem of Pain
- The Problem of Bleeding
- The Problem of Infection
Infection was particularly tricky to deal with because we didn’t know how it was caused, but a doctor named Semmelweis set out to save the day using a simple scientific method.
Semmelweis worked as a doctor on a maternity ward in the 1800s.
In his ward, an unusually high percentage of new mothers died of what was then called childbed fever. Semmelweis considered many possible explanations for this high death rate.
One was that the fever was caused by doctors’ unclean hands (the doctors often performed autopsies immediately before examining women in labour). At this stage in medicine, doctors tended not to change or wash their clothes and hands between patients.
If childbed fever were caused by doctors’ unclean hands, having doctors wash their hands thoroughly with a strong disinfecting agent before attending to women in labour should lead to lower rates of childbed fever – this was Semmelweis’ hypothesis.
When Semmelweis enforced the washing of hands in his ward, the rate of the fever plummeted; the actual observations matched the predicted results, supporting the theory that the fever was caused by infection. Funnily enough, things changed slightly following this revelation.
So, putting this example into practice, here’s how you can construct the best conversion rate optimisation test and become a conversion scientist.
Look for clues
Semmelweis had it easy in this part of the process, as the problem he was dealing with was quite an obvious one. If you’re planning to run a CRO test, though, let’s assume your client is worried about a lack of form submissions on their site. Who isn’t?
But wait. Close Photoshop.
Let’s not go designing anything until we’ve got a good idea of what’s causing the lack of submissions.
The Problems of Marketing in 2016
Websites struggle with similar problems that cause users to leave without converting:
- The Problem of Ugly (first impressions definitely count)
- The Problem of Empuzzlement
- The Problem of Malfunction
- The Problem of Misrepresentation
- The Problem of Trust
- Many, many more…
There are several things we can do to investigate which problems are holding your website back. All of these methods have their advantages and disadvantages, so it’s recommended you use as many of them as possible in your investigation.
Google Analytics reports
Good old Google Analytics allows you to look back at why form submissions may have dropped. Is it a steady decline or a sudden drop? What changes were made to the site at this time?
Take a look at the website’s traffic sources and ask yourself whether the forms are still as relevant to the new traffic as they were to the old. CRO testing is as much about constant adaptation to the online environment as it is about blowing up your conversions.
Some may be declaring, ‘that’s how evolution works!’ Not exactly – but keep thinking science and we’ll come back to that later.
In-page analytics tells you much of what a heatmap reveals, sometimes more. Use it to figure out what potential customers may be distracted by on their path to conversion.
Is it that they are being misled and clicking away from the form instead of towards it? If so, you may have a typography, layout, or lexical issue to deal with (the problem of empuzzlement).
In-page analytics works using the destination URL of each link, so if there are multiple links to one URL on the page you’re examining the data becomes ambiguous. This is where specific heatmap tracking is useful.
Wouldn’t it be a powerful feat if we marketers knew what our website’s users were thinking?
It may sound fanciful at first, but there are more than enough ways to ask them questions and get direct feedback, including:
- Feedback popups
- Help/live chat popups
- Popups on exit (why are you leaving?)
- Analyse your FAQ pages to see which questions get the most interest (your goal should usually be to make the FAQ redundant)
- User Testing recordings (volunteers record their experience of your site and review it based on a set of tasks that you define)
Make a hypothesis
Think you know what the problem is? Then let’s turn this problem into something we can test.
The best, most precise, tests can be broken down into a single simple statement, or hypothesis. This will take the form of something akin to ‘If’, ‘Then’, ‘Because’.
X – The independent variable is the thing we will change to find out whether it has an effect.
Y – The dependent variable is the thing we want to measure – in the example of the form submissions, conversions are the dependent variable.
Z – Your reasoning should suggest that you’ve solved the problem that brought you to the test, for instance if you identified that your website may be suffering from the problem of ugly, then your reasoning will hopefully be, ‘because our customers find the website easier to look at’.
For example: if the size of the text and buttons is increased, then conversions will also increase, because the text will be easier for users to scan and they will find information more effectively.
Remember, the speed at which users find information is not our dependent variable; it’s the conversions that we’re measuring. We will look at page statistics more closely after we’ve run the test.
Also remember: everything other than these three variables should stay THE SAME in each variation – the pages you are testing, the other calls to action, and anything else that might influence the result.
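To make the If/Then/Because structure concrete, here’s a minimal sketch (the class name and example wording are my own, not from any testing tool) of how a hypothesis can be written down before the test begins:

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    change: str     # X - the independent variable, the thing we alter
    outcome: str    # Y - the dependent variable, the thing we measure
    reasoning: str  # Z - why we expect the change to move the metric

    def statement(self) -> str:
        return f"If {self.change}, then {self.outcome}, because {self.reasoning}."

h = Hypothesis(
    change="the size of the text and buttons is increased",
    outcome="conversions will also increase",
    reasoning="the text will be easier for users to scan",
)
print(h.statement())
```

Writing the hypothesis down in one sentence like this makes it obvious when a test is really several hypotheses smuggled in together.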
Select a test type
In most cases, a standard split test is the simplest way to run a concise and fair test, but a great variation of this is the multivariate test.
A standard split test should distribute traffic equally between a control version and the test variation.
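Your testing software handles this split for you, but as an illustration of the principle, here’s a sketch of deterministic bucketing – hashing a (hypothetical) visitor ID so each visitor always sees the same version, with traffic dividing roughly 50/50:

```python
import hashlib

def assign_variation(visitor_id: str, variations=("control", "variation")):
    """Bucket a visitor deterministically: the same ID always lands in the
    same group, and IDs spread roughly evenly across the variations."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    return variations[int(digest, 16) % len(variations)]

# The same visitor sees the same version on every repeat visit.
assign_variation("visitor-42")
```

Deterministic assignment matters: if a returning visitor saw the control yesterday and the variation today, their behaviour would pollute both groups.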
A multivariate test is slightly different: it allows you to test multiple changes in every possible combination. It’s a good way to test font sizes, messaging, calls to action and design elements like button colours, where the options are not as clean-cut as A and B.
It’s also a good way to make sure everyone in the ‘committee’ gets to test their idea whilst eliminating ‘pairing bias’ (an apparent positive result for a change actually caused by a change that it’s paired with).
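For instance, with hypothetical variants for button colour, headline and font size, a multivariate test enumerates every combination – and the number of variations multiplies quickly:

```python
from itertools import product

button_colours = ["green", "orange"]
headlines = ["Get a free quote", "Talk to an expert"]
font_sizes = ["16px", "18px"]

# Every possible combination becomes its own test variation.
variations = list(product(button_colours, headlines, font_sizes))
len(variations)  # 2 x 2 x 2 = 8 variations competing for the same traffic
```

With eight variations instead of two, each bucket receives a quarter of the traffic it would in a simple A/B test – which is exactly why the warning below about limiting your variations matters.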
Try to use as few variations of ‘X’ as possible in the first instance or you run the risk of diluting your results so much that it takes a long time to see even a tiny pattern emerge in ‘Y’. If your traffic is on the lower side then conduct A/B tests in stages, sending as much traffic as you can through the test and attaining an actionable result as quickly as possible, but through reliable testing.
Improving your site in small increments can often be less time consuming and better for your cash flow than trying to conduct convoluted and overly technical tests.
Get results – draw a conclusion
If your split testing software gives you the ability to create custom goals, USE THEM.
Use them as much as you can! Over-use them.
You can create a custom goal for whatever you like. Ideally, I aim to create a goal for every possible major action that a user can take. Anything you might want to analyse using In-Page Analytics needs a custom goal because In-Page WILL NOT WORK during the testing period.
I use these custom goals not to judge the success per se, but to analyse the user behaviour – different software may cope with this to different degrees of effectiveness, but the important thing is to gather as much data as you can – you can use as many of the aforementioned research methods as you like:
- Feedback popups – track positive and negative responses in separate goals
- Popups on exit
- User Testing recordings
The great thing is you don’t need Google Analytics to get these goals to work.
Hook up with Analytics
Most split testing software should allow you to hook up your tests to your analytics account. This creates a custom variable that you can analyse in your report as the data comes in. Go to:
Audience > Custom > Custom Variables
Select the key (if you defined one) that corresponds to your test. You should now be able to use your website’s standard Goal Conversions to judge which test variation is the victor.
Just be sure to test this before launching the campaign.
How long should I run the test?
This is a question I receive time and time again and clearly there is no definitive answer.
Often, when a test starts, one variation will take an early lead and it’s easy to be fooled, after a short period of testing, into acting upon it as soon as possible.
However, there’s one element of a test that is still subject to chance and that’s the users.
Avoid Diversity Bias
Let’s take a random sample of 50 of your website’s users and split them into two groups of 25. Your sample is random to eliminate bias in your selection, the two groups were divided randomly, so your two groups should represent your audience equally, correct? That’s a fair test, right?
Well not necessarily. Take a look at your two random groups of users… how similar are they?
If you’re working with a diverse audience, let’s say if you’re a large ecommerce store with a variety of stock, then you may have at least 50 different identifiable customer personas that you’ve put through your test. That means you’ve only tested each persona once in this case, and 100% of your personas have only appeared in a single test group. That’s not a fair test, even though it was random. Now that’s what I call diversity bias!
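A quick sketch (the persona labels are made up) shows the problem: with 50 personas and one sampled visitor per persona, a random 25/25 split means every persona ends up in only one arm of the test:

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable
personas = [f"persona_{i}" for i in range(50)]  # 50 distinct customer types

sample = personas[:]   # one visitor per persona
random.shuffle(sample)
group_a, group_b = sample[:25], sample[25:]

# No persona appears in both groups, so each persona is 'tested' only once.
shared = set(group_a) & set(group_b)
len(shared)  # 0
```

The split was perfectly random, yet neither group can represent the whole audience.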
Your sample size must be adequate to represent each likely customer persona involved in the test equally in each test group.
A large ecommerce store will need a much larger sample than a small B2B agency whose audience is far less divided in terms of their end goal. So, how long should you run a CRO test? However long it takes to eliminate the potential diversity bias in your specific audience.
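A rough answer also comes from a standard sample-size calculation for comparing two conversion rates. This sketch uses the textbook two-proportion formula with illustrative numbers, not figures from any real test:

```python
from math import ceil, sqrt
from statistics import NormalDist

def visitors_per_variation(base_rate, relative_lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per group to detect a relative lift in
    conversion rate with a two-sided test at the given significance and power."""
    test_rate = base_rate * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    pooled = (base_rate + test_rate) / 2
    numerator = (z_alpha * sqrt(2 * pooled * (1 - pooled))
                 + z_power * sqrt(base_rate * (1 - base_rate)
                                  + test_rate * (1 - test_rate))) ** 2
    return ceil(numerator / (test_rate - base_rate) ** 2)

# e.g. a 3% base conversion rate, hoping to detect a 20% relative lift
n = visitors_per_variation(0.03, 0.20)
```

The smaller the expected lift or the lower your traffic, the longer the test must run – which also gives your diverse personas time to appear in both groups.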
Try to Disprove your original hypothesis
Draw your conclusion(s)
Let’s imagine you’ve launched your campaign and it ran smoothly for about a month – long enough for you to collect sufficient data to be statistically confident that one variation is the winner.
Now you’re set to draw a conclusion based on the data you’ve collected.
Your analytics account indicates whether your hypothesis was correct or incorrect, and you’ve got a shed load of on-page data which you can use to support your logic. There are several possible outcomes, which I like because it gives me an excuse to make another graphic.
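When judging whether the difference in conversion rate is real or just chance, the standard check is a two-proportion z-test on the raw counts. A minimal sketch with hypothetical numbers:

```python
from math import sqrt
from statistics import NormalDist

def two_sided_p_value(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: p-value for the difference in conversion
    rate between control (A) and variation (B)."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical month of data: 300/10,000 control vs 360/10,000 variation
p = two_sided_p_value(300, 10_000, 360, 10_000)
```

A p-value below 0.05 is the conventional threshold for calling the difference unlikely to be chance; most split-testing tools run a calculation like this behind their ‘confidence’ figure.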
Repeat, Evolve, Implement
If all the pieces of the puzzle fit snugly together then it’s time to think about implementation. But sometimes it doesn’t all go to plan. You may find that some of your data doesn’t add up, or downright contradicts everything you thought you knew.
This is life, but this time we have data on our side and we are going to make the best of it.
If you don’t feel ready to implement the solution you can repeat the test.
For unexpected results, make a new hypothesis about why your data came out the way it did. Make some changes, then look at your new results and draw a new conclusion. Keep going until you’re confident enough in your variation to implement it.
Repeat the test, but apply the principles to a different test page or set of pages. Will the variation work across the whole site or just part of the site? Knowing the answer to this can save a lot of development and testing time. If you make changes to the variations or to the pages make sure you run it as a new test so that you don’t corrupt your original data.
You’ve made it around the board once, but there is always potential for further improvement.
For most companies looking to make a profit, a positive result is a positive result. If this is you, then it’s time to develop that solution for real.
While that’s being worked on, take advantage of the traffic allocation in your split test. Duplicate your test and send 100% of the traffic to the test variation to start benefiting from the improvements immediately whilst waiting for the real thing to go live.
This is how, over time, ‘tried and tested’ solutions are established. Not through trends and web design’s latest fashion fads, but by discovering solutions that are proven to work.
Work with us
Why is your traffic not turning into sales? Whether you feel your site is under-performing, or it’s performing well and you’d like to push it even further, Receptional can help you keep up with your online audience, understand them and deliver what they’re looking for when they come to your site. Get your lab coat on and give us a call today.