by David Kirkpatrick, Reporter
Implementing a testing and optimization program presents two big challenges: creating a testing culture within the company, and maintaining momentum once the program is in place.
Cabela’s, a sports and outdoor product retailer, is an example of success in both of these areas. In 1961, the company began keeping a customer list on postcards, and as its catalog business grew, it almost immediately began testing these direct mail pieces by sending different customers different versions of the catalog.
Fast forward to 2005, when Cabela’s began a testing and optimization program for its digital marketing efforts as well. The program remains in effect to this day, a testament to the company’s strong testing culture as well as an example of perseverance.
Tony Uhlir, Internet Customer Experience Manager, Cabela’s, said, "Starting a testing program is easy, and a lot of times it is started because somebody heard that split testing is a best practice, so we should be doing it."
He continued, "There will be a lot of support to get started. The challenge there is to keep the interest level high because without that you will run into problems in having the ability to pull the resources you need to work on a (testing) project."
Uhlir shared three overarching tactics drawn from what Cabela’s has learned over the last seven years of its digital split testing and optimization program: how to assemble a testing team, utilize analytics, and prioritize resources within the program.
Tactic #1. Put the testing team together
When the program began in 2005, Cabela’s had one employee who managed each test with a project-driven approach. After some time, the company realized that a testing team that included multiple business disciplines, such as usability and design, would be more effective.
"We expanded it to include a broader group of business experts to ensure that we were covering all our bases when we design a test, and have thought about all the things we should be measuring," explained Uhlir.
He said one key element was having dedicated resources to run the testing projects, including a project manager who could make sure everything was done to "get the test off the ground."
Instead of pulling from internal resources, Cabela’s hired a new employee specifically for the role of split testing project manager. The company also hired a technical lead for the testing team.
Part of the technical lead’s position involved working with company statisticians to ensure the tests were set up to be statistically valid. This was accomplished by modeling each test to determine how large a sample size was needed, along with other statistical parameters, to create a valid test.
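Uhlir did not detail the statisticians’ exact model, but a common way to size a conversion-rate split test is a two-proportion power calculation. The sketch below is a minimal illustration of that general approach using only Python’s standard library; the baseline and target conversion rates are hypothetical, not Cabela’s figures.

```python
from statistics import NormalDist

def sample_size_per_variant(p_control, p_variant, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant to detect a change in
    conversion rate from p_control to p_variant with a two-sided z-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # 1.96 for a 5% significance level
    z_beta = z.inv_cdf(power)           # 0.84 for 80% power
    p_bar = (p_control + p_variant) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p_control * (1 - p_control)
                             + p_variant * (1 - p_variant)) ** 0.5) ** 2
    return numerator / (p_control - p_variant) ** 2

# Hypothetical example: detecting a lift from a 3.0% to a 3.3% conversion rate
print(round(sample_size_per_variant(0.030, 0.033)))  # ~53,210 visitors per variant
```

A calculation like this, run before launch, tells the team how long a test must stay live at current traffic levels, which is exactly the kind of input a prioritization review needs.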
Uhlir said the testing process brings different team members into play through three high-level steps:
- Request and Prioritization -- This step involves managing the request process: the project manager works with requestors to give stakeholders enough information to understand the level of effort, risks and potential benefits of the test before a Prioritization Committee review.
Team members involved: project manager, technical lead, Prioritization Committee and program sponsors who are informed about decisions and progress
- Test Development -- This step involves three stages with different team members in each.
Test design: usability group, content designers, technical lead and project manager to coordinate documentation and communication
Defining measurements: statisticians for modeling and sample estimates
Build and quality assure test: technical lead, QA team and business test group
- Execute Test -- Exactly what it sounds like: the actual deployment of the test.
Communication: project manager keeps appropriate groups informed
Monitor test progress: project manager and technical lead
Post-test analysis: project manager, technical lead, analytics teams and statistician team
Summary development and results delivery: project manager, statisticians, usability group
Uhlir said it is important that the core team is dedicated to the program. For example, Cabela’s has a graphics person designated for the testing program, so the project manager doesn’t have to go out and find an available graphic designer when creating a new test. That team member is already in place.
Tactic #2. Think about analytics … both online and off
Analyzing results and taking action on that analysis is the heart of a testing and optimization program, and how that analysis is handled will be unique to each testing effort.
Cabela’s combines Web analytic tools with internal analytic capabilities, according to Uhlir. The marketing team collects Web analytics and takes advantage of software tools, but the in-house analytics team provides the final results.
Because the company works in multiple channels -- digital, inbound phone sales, in-store retail sales and catalog orders -- Uhlir said it was important for the team to be able to match data across all of Cabela’s channels.
"We can not only test what impact (the tested element) had on our online channel, but we can also do some analysis on what impact it had on our offline channels as well," he stated.
The team pulls in Web analytics, and then uses its customer identification ability to match that data to the house file. That in turn provides visibility into what that customer may have been doing through outside channels, such as retail stores or call centers.
With this ability, the team tied Web analytics to back-end transactional data.
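Cabela’s did not describe its internal data pipeline, but the matching step can be illustrated with a minimal, hypothetical sketch: join the online test-exposure records to back-end transactions on a shared customer key, then summarize results by variant and channel. All table and column names below are assumptions for illustration.

```python
import pandas as pd

# Hypothetical extract: one row per identified visitor in the split test
web_visits = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "test_variant": ["A", "B", "A"],
})

# Hypothetical house file transactions spanning all channels
house_file_orders = pd.DataFrame({
    "customer_id": [101, 101, 103],
    "channel": ["retail_store", "web", "call_center"],
    "order_total": [84.50, 129.99, 42.00],
})

# Match online test exposure to transactions on the shared customer key,
# then compare revenue by variant across every channel
matched = web_visits.merge(house_file_orders, on="customer_id", how="left")
print(matched.groupby(["test_variant", "channel"])["order_total"].sum())
```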
He added that when the digital testing program began in 2005, the team did not have the ability to match data from the online channel with the offline channels.
This data integration between the different channels was a byproduct of creating a more cohesive data ecosystem, but it also impacts the split test program.
"By integrating offline data with our Web analytics, we have essentially made all the capabilities of our back-end analytics available for measuring in our split tests," stated Uhlir. "We are not limited to only measuring at the website; we can expand that to other variables."
These cross-channel metrics are used at Cabela’s for both reporting and for split testing.
Uhlir provided several examples of how the team leveraged this cross-channel information in digital split testing:
- Add alternate payments -- required measuring the impact on new-to-file rates (a metric Cabela’s uses to gauge how many new customers are added to the customer file versus purchases from existing customers; see the sketch after this list), and how many Visa cardholders were using the new payment methods instead of the Cabela’s branded card
- Measure the impact of different remarketing advertisements -- required the inclusion of offline store and call center sales and conversion
- Measure the impact of offline marketing -- which of two print catalog versions performed better
- Measure the response and quality of new credit card applications with different introductory offers
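The article defines new-to-file rate only loosely. Assuming it means the share of test-period buyers who were not already on the customer file, the computation might look like this toy sketch (all IDs hypothetical):

```python
# Hypothetical illustration of a new-to-file rate
existing_file = {101, 102, 103}          # customer IDs already on the house file
test_buyers = [101, 204, 205, 103, 206]  # buyers exposed to the test variant

new_to_file = [c for c in test_buyers if c not in existing_file]
print(f"new-to-file rate: {len(new_to_file) / len(test_buyers):.0%}")  # 60%
```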
Cabela’s has learned to build a standard around the metrics it tracks and uses. Uhlir added that this standard helps streamline the testing and data collection process and prevents the team from having to "start from scratch" for every test.
Tactic #3. Create a process to prioritize test ideas
Because requests for specific tests are open to anyone at Cabela’s with an idea, an important part of the process is reviewing and prioritizing the tests to run.
"The request process is open to anyone within the company who has an idea," said Uhlir. "They can submit their request to our split test project manager, and it goes on a list as something to consider."
From there the request goes to a small team, including the project manager, that makes recommendations about what tests might be most valuable to pursue first.
At that point, a broader group with representatives from different business units discusses the details of the refined list, including:
- Benefits
- Risks
- Potential complications
"There is what we call a ‘core team’ that is responsible for weeding the ideas into enough detail to put in front of a little bit bigger group that can then understand how each test fits in with the other requests that are out there," Uhlir said.
He added that once you start asking people for ideas, they "come out of the woodwork."
When starting a testing and optimization program, it is important to prioritize tests that are not overly complex and won’t take a month to build out, Uhlir suggested.
He said, "We wanted to get in, prove out the system, validate things to make sure everything was working, and build confidence within the team of the fact that this is working."
Uhlir continued, "If you are just starting now, you definitely want to make sure that you work through some small projects to get everybody on board, that the results are real. And what we found in doing something overly complex is it essentially adds a lot of risk in doing something incorrectly, which then becomes a ‘killing field’ for the results."
Once the testing foundation is built and the metric standards are set, that is the point where Uhlir recommends adding more complex, longer-running tests.
Another consideration when prioritizing tests is to ensure the test is worth running and will result in some sort of action based on the outcome, and not just testing for testing’s sake. Uhlir stated that it is important for the longevity of the testing and optimization program for the ongoing effort to show its return on investment.
Without the ability to prove a return on testing, the entire program is at risk of losing organizational support, or even being defunded.
Uhlir will be presenting "Top Three Tips for Testing Success" at the upcoming MarketingSherpa Optimization Summit 2012 in Denver, June 11-14.

Sources

Cabela’s

Related Resources

Marketing Research Chart: In-house expertise challenges to landing page optimization
Landing Page Testing and Optimization: How Intuit grew revenue per visitor 23%
Landing Page Optimization: How to start optimization testing and get executive support
Website Optimization: Testing program leads to 638% increase in new accounts
Conversion Rate Optimization: Minor changes reduce cost per conversion 52.9%
Marketing Research: Top email elements to test
Form Optimization: 3 case studies to help convince your boss (and Sales) to reduce form fields