by Adam T. Sutton, Reporter

Landing pages typically have one purpose -- to drive conversions. This should make them easy to test and improve. But this isn't exactly the case.
Identifying which landing page elements to test can be daunting. There are potentially thousands of tests for every page -- many of which will have little or no impact. You have to ignore the irrelevant tests and chase the potential winners.
"People are looking for easy answers, and the answers, unfortunately, are not easy," says Lance Loveday, CEO, Closed Loop Marketing.
Loveday's team has utilized a data-driven approach to landing page tests for as long as modern A/B testing software has been available. Below, we outline the steps he suggests for planning tests that can improve a page's performance.
Step #1. Gather relevant data

Data is instrumental in identifying landing page issues. Many marketers choose tests based on hunches and instinct, but you’ll have more success if you do the research.
Data to gather:
- Quantitative data
Obtain as much data as possible on traffic, click patterns, conversion rates, bounce rates and other relevant metrics. Make sure you can segment the data by traffic source to compare the performance of natural search traffic versus email traffic, affiliate traffic versus paid search traffic, etc. (a brief segmentation sketch follows this list).
- Qualitative data
Usability tests might seem cumbersome or expensive, but they are often valuable for diagnosing page problems.
"It almost always yields insights that you can’t get any other way, and which allow you to really get into the minds of users and appreciate concerns that you might not have thought about," Loveday says.
- Expert opinions
Having someone with a history of landing page design and optimization look at your page can also help focus your efforts, especially when identifying problem areas.
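To make the segmentation step concrete, here is a minimal sketch in Python, assuming a hypothetical per-session analytics export named visits.csv with "source", "bounced" and "converted" columns holding 0/1 values. Adapt the column names to whatever your analytics tool actually produces.

```python
# A minimal sketch, assuming a hypothetical per-session analytics export
# named visits.csv with "source", "bounced" and "converted" columns
# (0/1 values). Adapt the names to your analytics tool's export.
import pandas as pd

sessions = pd.read_csv("visits.csv")

by_source = sessions.groupby("source").agg(
    visits=("source", "size"),
    bounce_rate=("bounced", "mean"),        # share of sessions that bounced
    conversion_rate=("converted", "mean"),  # share of sessions that converted
)

# Sort so the weakest segments surface first
print(by_source.sort_values("bounce_rate", ascending=False))
```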
Step #2. Identify areas for improvement

You must review your data and research to find problems, such as underperforming traffic segments, conversion roadblocks or layout problems.
Here are some common indicators:
- High bounce rates
A page’s bounce rate is the percentage of visitors who arrive and immediately leave, or who leave without clicking a link on the page. Comparing bounce rates for different traffic segments can reveal which visitors are finding your page valuable and which are not.
You can compare traffic segments by source, geography and other categories, but one of the most revealing comparisons is often the difference between first-time page visitors and returning page visitors, Loveday says (a comparison sketch follows this list).
"New versus returning is one of the most straightforward usually. If we see the bounce rate is dramatically off or higher than it should be for one segment, that might provide insight that we’re missing an opportunity to reassure a likely concern."
- Irrelevant clicks
Clicking behavior can reveal the type of information visitors are looking for. If you see a high number of clicks on areas that are not relevant to the landing page’s main goal -- such as on navigation links or a header -- it might indicate that traffic to the page is not properly qualified.
- Conversion roadblocks
Usability tests can uncover why visitors feel discomfort, whereas analytics report the results of the discomfort.
For example, a landing page may offer a steep discount on a product. If visitors do not see the discount clearly applied on subsequent screens in the checkout process, they may not be comfortable completing the purchase. A usability study might find users saying, "Hey, where’s the discount?" and an analytics system would report an abandoned order.
"It sounds like common sense, but that type of thing is often overlooked and causes users to have hesitation and stop, and it harms throughput and conversion rates," Loveday says.
- Bad first impressions
Expert reviewers are often good at addressing a landing page’s subjective problems -- such as its look and feel. Although criticism is not as concrete as data analysis, don’t scoff at the idea that your page looks "cluttered" or "spammy."
"There is a lot of research that backs up the importance of making the right first impression," Loveday says. "People begin to respond to a new interface and form judgments about the site behind it and the credibility about the organization very quickly -- in as little as 1/20th of a second. And the judgments they make in that split second ultimately impact the likelihood they’ll transact with that organization."
Step #3. Estimate the test’s potential impact

Testing takes time. Your results have to reach statistical significance to be considered reliable, and you might need to run several tests. Be sure to evaluate the potential impact of any test and concentrate your energies on the best bets.
"We don’t want to oversimplify and pass up larger opportunities for the possibly smaller -- although more obvious -- opportunities, like making a button larger," Loveday says.
For example, on a product-selling landing page, a team must choose whether to test changes that might increase the overall add-to-cart rate, or test changes that might increase the purchase-completion rate for a particular traffic segment.
In this case, the team should target whichever direction would earn the company more revenue.
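A back-of-envelope revenue comparison is often enough to pick a direction. The sketch below works through the choice just described; every figure in it is an illustrative assumption, not data from the article.

```python
# Back-of-envelope comparison of two test directions by expected monthly
# revenue. Every figure below is an illustrative assumption.
monthly_visitors = 50_000
add_to_cart_rate = 0.05   # baseline add-to-cart rate
completion_rate = 0.40    # baseline purchase-completion rate
avg_order_value = 80.0    # dollars

def revenue(atc, completion):
    return monthly_visitors * atc * completion * avg_order_value

baseline = revenue(add_to_cart_rate, completion_rate)
option_a = revenue(add_to_cart_rate * 1.10, completion_rate)  # +10% add-to-cart
option_b = revenue(add_to_cart_rate, completion_rate * 1.05)  # +5% completion

print(f"Option A upside: ${option_a - baseline:,.0f}/month")
print(f"Option B upside: ${option_b - baseline:,.0f}/month")
```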
Step #4. Develop hypotheses on the causes

After your team identifies areas for improvement, ask yourselves why the target is underperforming. For example, if you’re targeting high bounce rates for a particular traffic segment, ask: Why are these bounce rates higher than those of other segments?
Here’s a more detailed hypothetical example:
A team encourages visitors to download a whitepaper from a landing page that asks for a full name, email address and office phone number. The page’s paid search bounce rate is 8%, and its display advertising bounce rate is 23%. The team wants to lower the bounce rate of the display ad traffic.
A suitable hypothesis could be: "Our display advertising is not clearly communicating that we are offering a free whitepaper." Or perhaps: "It is not clearly communicating the topic of our free whitepaper."
The most suitable hypothesis will be specific to your situation. Research the elements surrounding the data you’ve identified and root out the causes. Don’t be afraid to list several hypotheses and select the strongest contender.
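Before acting on a hypothesis, it is also worth confirming that the gap you are targeting is real rather than noise. Below is a minimal two-proportion z-test for the 8% versus 23% bounce rates above; the visit and bounce counts are assumptions chosen to match those rates, since the example gives only the percentages.

```python
# Minimal two-proportion z-test for the bounce-rate gap. The visit and
# bounce counts are assumptions chosen to match the 8% and 23% rates.
from math import sqrt
from statistics import NormalDist

bounces_search, visits_search = 160, 2000    # paid search: 8% bounce
bounces_display, visits_display = 230, 1000  # display ads: 23% bounce

p1 = bounces_search / visits_search
p2 = bounces_display / visits_display
pooled = (bounces_search + bounces_display) / (visits_search + visits_display)
se = sqrt(pooled * (1 - pooled) * (1 / visits_search + 1 / visits_display))

z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
print(f"z = {z:.2f}, p = {p_value:.4g}")
```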
Step #5. Develop hypotheses on the solutions

Once your team thinks it understands the problem, it’s time to consider potential solutions. The solution should be directly tied to your hypothesized cause, and it should be something your team can test.
Using the above whitepaper example, your solutions could be:
- Change display ads to emphasize the free whitepaper
- Change display ads to emphasize the whitepaper's topic
- Advertise to a better-targeted audience
These hypotheses will be the basis for your tests. Also, avoid difficult-to-test long-term solutions such as "improve perceived value of our whitepapers."
Step #6. Run tests, monitor results, and be patient

Your team should use well-established testing software and processes to launch and monitor tests. Running tests poorly -- by not testing variations concurrently, or by not testing enough traffic to achieve statistical significance -- is counterproductive. There is no sense in doing this work only to get unreliable results.
Also, some testing platforms will allow your team to limit the volume of visitors who view a page’s test versions. For example, instead of showing variation A to 50% of visitors and variation B to the other 50%, you can show your team’s usual page to 90% of visitors and the test version to 10% -- or any other ratio.
Using an uneven traffic split is helpful when your team is testing major changes that could impact brand perception or another area of your business. Although the results will take longer to reach statistical significance, the test is less likely to have an immediate negative impact on business.
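To see that tradeoff in rough numbers, the sketch below estimates a per-variation sample size with the classic two-proportion approximation, then compares how long the test runs at a 50/50 versus a 90/10 split. The baseline rate, target rate and daily traffic are illustrative assumptions.

```python
# Rough per-variation sample size for detecting a conversion lift, then
# the days required at a 50/50 versus a 90/10 traffic split. Baseline
# rate, target rate and daily traffic are illustrative assumptions.
from statistics import NormalDist

p1, p2 = 0.05, 0.06          # baseline and hoped-for conversion rates
alpha, power = 0.05, 0.80
z_a = NormalDist().inv_cdf(1 - alpha / 2)
z_b = NormalDist().inv_cdf(power)

# Classic two-proportion sample-size approximation, per variation
n = (z_a + z_b) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2)) / (p2 - p1) ** 2

daily_visitors = 4000
for share in (0.50, 0.10):   # traffic share sent to the test variation
    days = n / (daily_visitors * share)
    print(f"{share:.0%} to variation: ~{n:,.0f} visitors/arm, ~{days:.0f} days")
```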
Useful links related to this article

Members Library -- Custom Landing Pages for PPC: 4 Steps to 88% More Leads, Lower Costs

Members Library -- Master the Art of Multivariate Testing: 7 Lessons from Avis Budget Group

UserTesting.com: Service Loveday's team uses for usability testing
Closed Loop Marketing