The more you understand customer behavior, the better able you are to create engaging experiences and increase the performance of email campaigns. That’s why A/B testing is such a valuable tool. You get real customers to tell you what they prefer by voting with a “click”.
No more time wasted weighing the merits of one font over another, or the best wording for a call-to-action button. Let the numbers point the way!
Wouldn’t it be great if you could go beyond testing for preference and actually collect data on which elements of the experience contributed to the lift in conversion?
A/B testing is an “either/or” situation. If you want to test button color, you need to test your options separately to determine the real effect of each change individually. In our hypothetical test, the standard call-to-action button is red. To see how a change in button color could affect the campaign, you’d send the standard red-button version to one half of your audience and a blue-button version to the other half.
When testing a change in button color, you’d track click-to-open rates rather than simple click rates. Because they measure clicks only among people who actually opened the email, click-to-open rates control for differences in open rates and show more clearly which color option triggers greater activity.
In our example, red was the winner over blue. If you then wanted to see how a green button performed, you’d have to conduct two more tests (green versus red and green versus blue) in addition to the original above, for a total of three.
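To make the comparison concrete, here is a minimal Python sketch of the click-to-open calculation. The campaign numbers are made up (as the post notes, all tests here are hypothetical), so treat this as an illustration of the metric, not real results.

```python
def click_to_open_rate(clicks, opens):
    """Clicks as a share of opens; controls for differing open rates."""
    return clicks / opens

# Hypothetical numbers for the red vs. blue button test.
variants = {
    "red": {"opens": 1200, "clicks": 180},
    "blue": {"opens": 1100, "clicks": 132},
}

for name, stats in variants.items():
    ctor = click_to_open_rate(stats["clicks"], stats["opens"])
    print(f"{name}: {ctor:.1%}")  # red: 15.0%, blue: 12.0%
```

With these invented numbers, red’s click-to-open rate (15%) beats blue’s (12%), matching the hypothetical outcome above.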
For more accurate results, I’d suggest repeating the same tests several times. [Note: all tests in this blog post are hypothetical, so I’ve not provided results.]
For marketers who need to demonstrate ROI, A/B testing clearly has limitations since it can only be used to test for one variable at a time. If we kept our example very simple and emailed only one campaign a month, it would take three months to complete the test! If the deployment schedule is once a week, test results still wouldn’t be ready for four weeks.
A/B testing of more than one item can also introduce unknown variables that could throw your results out the window. How do you know your winner was actually the result of your variable (button color, etc.) and not just the content of your email? Was the day and time you sent the email a factor?
But what if you want to test five other colors? Or three different CTAs? What if you have separate editorial images to go with each CTA color? Clearly, testing all these options one by one would not only be mind-numbing to set up, but take ages to run.
Are you getting dizzy yet? Data scientists do this for a living. They’d repeat the test over and over, and aggregate the data for statistically accurate results.
You shouldn’t need an advanced degree to conduct your own tests and improve your ROI. Enter our hero: multivariate testing.
Don’t let the name put you off. Break “multivariate” into its parts and the meaning becomes clear: multi = many, variate = variables. In other words, it enables you to test multiple things at the same time.
Say someone on the marketing team wants to change the CTA text from “click here” to “buy now” but another person thinks it should actually be “learn more”. Sure! No problem. Your boss wants to try different colors for the button while you’re at it. Why not? What about changing the button placement too? Bring it on!!
We can test all of that at once – AND get accurate results! We’ll also be able to point EXACTLY to the one change that made the most difference.
Say you wanted to test images and button color. The three test colors are red, green and blue. The image choices consist of a cat, a dog and a bird. The control version has a cat image and a red button that says “click here”. The remaining eight pairings of image and color make up the other test versions.
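Enumerating the full grid of test versions is simple. Here’s a short Python sketch using the standard library; the color and image names come straight from the example above.

```python
from itertools import product

colors = ["red", "green", "blue"]
images = ["cat", "dog", "bird"]

# Every image/color pairing, including the cat + red control.
variants = list(product(images, colors))
print(len(variants))  # 9
```

Three colors times three images gives nine combinations in total: the control plus the eight other versions.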
Multivariate testing allows you to look at each element on its own, all at once, without introducing any new or unknown elements (such as time of send, content, date sent) that could otherwise affect results. The customer database would be equally divided among the test samples and a control group to see which is the most popular.
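One simple way to divide the database equally is to shuffle it once and deal recipients out round-robin across the variants and the control. This is a sketch under assumed conditions; the recipient IDs, group names, and seed are placeholders, not part of any real platform’s API.

```python
import random

def assign_groups(recipients, variants, seed=7):
    """Shuffle once, then deal recipients round-robin across variants
    so each group is the same size (within one) and randomly mixed."""
    rng = random.Random(seed)
    shuffled = list(recipients)
    rng.shuffle(shuffled)
    groups = {v: [] for v in variants}
    for i, recipient in enumerate(shuffled):
        groups[variants[i % len(variants)]].append(recipient)
    return groups

groups = assign_groups(range(900), ["control", "dog+blue", "bird+green"])
print({name: len(members) for name, members in groups.items()})
```

Shuffling before dealing keeps each group a random cross-section of the list, which is what lets you attribute any lift to the variant rather than to who happened to receive it.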
You find that people happen to LOVE the image of a dog and click like crazy. And they hate red. So the version with a dog and a red button fails to perform well. By determining which elements your audience prefers, you can uncover a version you hadn’t considered before that’s a real winner: a dog image with the blue button!
The image and button color in our hypothetical test can reveal other opportunities when you take a deeper dive. While the blue button with a dog seems to be the winner overall, an analysis of different customer segments – frequent shoppers, gender, or age range – shows that women ages 40–45 actually prefer the green button with a cat.
Surprised by this result? It didn’t show up on first look because there just weren’t enough women in the database to affect the numbers as a whole! But what if those women click 75% more often on the green CTA? How much revenue might have been lost if you just sent blue CTAs to everyone?
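Slicing results by segment doesn’t require special tooling. Here’s a minimal Python sketch with invented per-segment tallies (purely illustrative) showing how a segment’s winner can differ from the overall winner:

```python
# Invented (segment, variant, opens, clicks) tallies -- illustrative only.
rows = [
    ("women 40-45", "dog+blue",   400,  40),
    ("women 40-45", "cat+green",  400,  70),
    ("all others",  "dog+blue",  4000, 600),
    ("all others",  "cat+green", 4000, 440),
]

# Click-to-open rate per (segment, variant).
ctor = {}
for segment, variant, opens, clicks in rows:
    ctor[(segment, variant)] = clicks / opens

for key in sorted(ctor):
    print(key, f"{ctor[key]:.1%}")
```

In these made-up numbers, dog + blue wins overall because the larger segment prefers it, but women 40–45 click the cat + green version far more often, which is exactly the kind of pattern that disappears when you only look at the aggregate.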
By “slicing and dicing” the findings of multivariate testing according to different customer segments, you can create multiple versions of your email that are fine-tuned to increase conversions. Just think of the clicks and conversions you can generate by employing multivariate tests like these to refine your marketing campaigns.
Multivariate testing is a great data party you don’t want to miss out on. Don’t be shy. Try out new ideas since there’s no limit to the number of variations you can test. You’ll gain valuable knowledge for creating better targeted marketing campaigns that drive greater ROI.
Try multivariate testing yourself and let me know how it goes by tweeting us @yesmail.
I’ll be back shortly with another post on testing to help you further refine your emails for better targeting and results. In the meantime, check out our comprehensive guide for demographic segmentation!