Inside the brains of three conversion experts – 5 questions / 15 answers
14 August 2015 - 08:13 AM
Chris Gibbons, Red Eye International
Job Title: Head of UX/CRO/Personalisation
Hobbies: Badminton, painting, real-world UX and optimisation (although more of an obsession than a hobby!)
Daniel Grainger, TUI
Job title: Lead Online Conversion Analyst
Hobbies: Family days out with my wife and toddler, tennis, gym, city breaks
Tom Capper, Distilled
Job title: Analytics Consultant
Hobbies: I like hearty food, ale, political debate, distance running, Oxford commas, and non-sequiturs.
1. What’s the worst test you’ve ever launched?
If by worst you mean one that resulted in a pretty big drop in conversion rates, a ‘good’ example was a test we carried out on a car insurance quote page (the page you land on from a comparison site or after filling in all your details). We experimented with placing the savings packages/bundles at the top of the page, above all the individual full-priced add-ons, rather than underneath them (the control). The hypothesis was good and there were enough supporting insights from usability testing and eye tracking to suggest that this was worth a try. However, after running the A/B test for two weeks, we were all surprised that the variation (which most of the team loved) actually resulted in 12% fewer add-ons sold and 11% less revenue overall. Wow! Whilst this could have been seen as a bad thing, in fact it was a great lesson in CRO, which highlighted to the client the importance of testing and experimentation. Before RedEye and CRO they would simply have implemented the variation they liked without testing, and potentially lost hundreds of thousands of pounds with this one change. We were then able to take the learnings from this test and implement some very successful follow-on tests, which did result in significant uplift.
A “more information” test I ran some years ago had surprising results. The purpose of the test was to add a “more information” link to a product hero banner, trying various placements in and around the primary order CTA, plus a variant that simply replaced the order CTA. Our theory was that elevated visibility of the product information page would improve conversion, as customers would be able to find the information they needed more easily. Surprisingly, the results showed a low proportion of users actually clicking the new link, and a fall in conversion and revenue per visit across all variants (particularly the variant with the replaced order CTA). The phrase “less is more” was certainly true in this instance!
Some of my first forays into split testing were essentially too hasty. Sometimes they were based on what our own user testing had revealed without taking into account the preferences of existing users, or sometimes they were changes that were too focused on a single metric at the expense of the bigger picture. It’s easy to take a “test everything” attitude, but running experiments with mediocre research and preparation just leads to wasted time and lost revenue.
2. How should a company handle conversion optimisation across multiple channels?
You have to get into a different mind-set. The customer journey doesn’t start and end on your website so your whole team must start thinking about entire user journeys rather than single web pages or emails in isolation. The biggest challenge is getting the organisational structure right and making sure that teams are not siloed – that they instead start to work closely together.
To an extent this will depend on how the company is structured in relation to digital marketing. Does this team sit in the same area as the conversion optimisers (e.g. a broad eCommerce/ Digital team) or does it sit elsewhere (e.g. Marketing)? Conversion optimisation should be handled as an end-to-end approach covering the full length of the customer experience, from digital marketing click to landing to purchase (and even beyond if the organisation allows). Clearly there are multiple channels to deal with, which is where attribution models play their part. Organisations can then seek to utilise attribution segments in their conversion optimisation programme, rather than simply focussing on siloed channels.
This isn’t a problem that necessarily has to be addressed head-on – most of the time if you’re running tests using a tool like Optimizely with a large number of users across all relevant channels, the test should be representative. If you know you have a channel that generates very few visitors but a huge proportion of revenue then perhaps you need to consider excluding it or otherwise adapting your test, but I’ve never encountered a site with that sort of situation.
On the other hand, different channels do have different characteristics, and a lot of sites run dedicated landing pages for certain channels – PPC or email most commonly. In these cases, testing these pages in particular makes a lot of sense, and it could also be worth testing to see whether you’d benefit from channel-specific pages. How you prioritise these comes down to the value of these channels to your business – if 50% of your revenue comes from PPC, you should absolutely be considering PPC-specific conversion rate optimisation.
3. What are the biggest mistakes most optimisers are making today?
1) Not actually doing a lot of testing. We’ve come across several very large companies who have very expensive A/B/MVT tool contracts and loads of traffic, but do very little testing. There are many reasons for this, e.g. they don’t have the support, resource or the right agency to help them; they don’t have a CRO process/strategy; they chose extremely complicated first tests that never really got off the ground, which put them off; they don’t yet have a culture in place to support testing; or, lastly, they just haven’t yet realised the value of CRO.
2) Not including UX or any qualitative research as part of your CRO process. Basing test ideas purely on gut instinct will result in a low success rate and/or low-impact tests.
3) Poor test design. E.g. not designing the experiment in a way that will ensure you learn from it whatever the final result.
4) Testing too many variations, e.g. a multivariate test that will take months to conclude. It’s so much better to have carried out six iterative A/B/n tests in the same time period.
5) Drawing wrong conclusions from tests.
6) Internal teams working in silos i.e. keeping the whole CRO process to a small group of individuals and not letting anyone else in! CRO is for sharing – it should be a fundamental part of the entire digital strategy.
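The point above about multivariate tests taking months to conclude can be made concrete with a quick sample-size calculation. The sketch below is illustrative only – the baseline conversion rate, detectable lift and traffic figures are assumed, not taken from the interview – and uses the standard two-proportion approximation for the number of visitors needed per variant:

```python
import math
from statistics import NormalDist

def visitors_per_variant(baseline, rel_lift, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a relative lift
    in conversion rate with a two-sided two-proportion z-test."""
    p1 = baseline
    p2 = baseline * (1 + rel_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Assumed figures: 5% baseline conversion, 20% relative lift, 1,000 visitors/day.
n = visitors_per_variant(0.05, 0.20)
daily_visitors = 1000
ab_days = 2 * n / daily_visitors   # simple A/B: two variants share the traffic
mvt_days = 8 * n / daily_visitors  # 2x2x2 multivariate: eight combinations
```

With these assumed numbers, each variant needs roughly 8,000 visitors, so the eight-cell multivariate test ties up the site’s traffic for around two months, where the straight A/B test concludes in a little over two weeks – leaving time for several iterative follow-ups.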
By far the most common mistake I see is (still) no use, or misuse, of statistics when reporting. A lack of statistical rigour can undermine a whole conversion optimisation programme. We will all be familiar with claims of ‘1000% uplift’ that undoubtedly misuse statistical significance. Significance itself is not the only tool at our disposal; it’s simply one piece of the jigsaw in ensuring sound business decisions are made.
There are some fairly basic measurement errors that are still extremely common – for example, optimising for conversion rate without paying attention to changes in average session value, or starting a test without a predetermined end date. I’ll be talking about some of these issues in my statistics session at Conversion Conference.
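As a worked illustration of the statistical-rigour point, here is a minimal two-proportion z-test in Python. The visitor and conversion counts are invented for the example; the takeaway is that a headline-grabbing relative uplift can still fail a significance test:

```python
import math
from statistics import NormalDist

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided z-test for a difference in conversion rates (pooled SE).
    Returns (relative uplift of B over A, p-value)."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_b / p_a - 1, p_value

# Invented example: control converts 300/10,000 visitors, variant 345/10,000.
uplift, p = two_proportion_z_test(300, 10_000, 345, 10_000)
# A 15% measured uplift, yet p is around 0.07 -- not significant at 0.05.
```

Fixing the sample size (and hence the end date) before launch, then running a test like this once at the end, also avoids the peeking problem that comes with stopping a test the moment it looks like a winner.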
4. What’s the biggest opportunity for conversion optimisation in 2016?
Personalisation on the web and across all channels. Fundamentally this is about treating your users and customers differently because they have different needs and motivations. So from an optimisation point of view this is where the greatest gains can be made. Ultimately it’s what being user-centred or customer focussed was always supposed to be about! This is why it’s such a massive focus for RedEye.
We can’t overlook the continued growth of touch devices. This movement still hasn’t peaked, and neither has the quality of the touch offering provided by the majority of organisations. Over and above this, one of the biggest opportunities in my opinion is continued research into this area, and understanding how a touch user’s wants, needs and subsequent actions differ from those on other devices. This undoubtedly varies from industry to industry, but in travel we’re seeing vast differences between devices, which reiterates the importance of truly understanding each medium and incorporating these key learnings into our digital strategy.
Extending the scope of A/B testing. This is something that a few larger players are already doing for organic search, including Pinterest. We’re currently working on a platform at Distilled to allow our clients to benefit from easily and simply testing changes to areas like their site architecture, SEO directives and keyword targeting, as well as the more conventional areas of interest for conversion rate optimisation.
5. Which book – whether about conversion or not – has taught you the most about optimisation?
Any of Malcolm Gladwell’s books (e.g. Blink, David & Goliath, Outliers), because they’ve always made me think a little differently about the world. They’ve taught me valuable lessons about not taking things at face value – which is a very useful perspective to take if you’re a CRO or UX practitioner.
Two instructional books I highly recommend to new starters in my team are “Don’t Make Me Think” by Steve Krug and the very quick and easy-to-read e-book “Master the Essentials of Conversion Optimization” by Peep Laja, founder of ConversionXL (http://conversionxl.com/).
The usual suspects: Avinash Kaushik’s “Web Analytics: An Hour a Day” and “Web Analytics 2.0”, along with Ash/Page/Ginty’s “Landing Page Optimization” and Steve Krug’s books on usability. I’ve never actually read any of them cover to cover; I use them more as reference guides, which in my opinion is a better way of managing your education while working. I’ve also found that blogs (e.g. Avinash, eConsultancy, ConversionXL, PRWD, etc.) and networking events are often better resources, as the subject matter stays more current.