MarketingSherpa Video Archive

Web Optimization: How one company implements an entire testing strategy every day

Daniel Burstein, MECLABS, and Ryan Hutchings, VacationRoost



At Web Optimization Summit 2014, recently held at The TimesCenter in New York City, marketers gathered from around the world to learn from transferable case studies about how to tackle conversion rate optimization.

Many of the speakers were brand-side marketers whose daily responsibilities include improving marketing efforts by testing and optimizing the customer experience. One of those marketers was VacationRoost's Ryan Hutchings. As Director of Marketing, Hutchings presented his story to a live Summit audience for a session that doubled as a MarketingSherpa webinar.

Hutchings' presentation set out to answer the question, "How can marketers systematically run tests to achieve consistent lifts?"

"Even with random testing, it doesn't matter how many you do. [The tests] don't make sense all together," he explained to the Summit audience.

Hutchings shared how VacationRoost relies on two testing strategies to continually determine what works best for its audience: small tests and large tests.

For small tests, Hutchings and the team at VacationRoost utilize third-party software to test PPC and SEO landing pages, calls-to-action, headlines and images.

For large tests, Hutchings uses A/B split testing on search results, shopping carts and products and performs manual reporting.

Watch this webinar replay to learn the benefits of continual testing and how you can apply Hutchings' efforts to your own situation.

Download the slides to this presentation

Related Resources

Web Optimization: VacationRoost implements 2 testing methodologies to boost total conversion rates by 12%

Web Optimization: Can you repeat your test results?

Web Optimization: How AARP Services boosted renewals by increasing usability

Web Optimization: Ancestry.com improves conversion 20% by reducing choice barriers

Video Transcription

Burstein: We're going to be talking about the nuts and bolts of how you actually test. How do you test programmatically? And to do that, please join me in welcoming Ryan Hutchings, the Director of Marketing for VacationRoost. Thanks for joining us today, Ryan.

Just so you know, he came in from Salt Lake, he was supposed to come in last night, wasn't going to make it in time for this talk, had to rebook flights, did everything he could to be here, so we appreciate that. And in Salt Lake, I know you're not just a tester, you run a search engine marketing group, too?

Hutchings: I do. Yeah, a nonprofit organization in Utah. We organized all of the online marketers, or we made an effort to. So, yeah, we meet once a month for little mini events and stuff like that, too.

Burstein: So if you're interested in search engine marketing, you can talk to Ryan about that topic as well. But tell us a little bit about VacationRoost.

Hutchings: So at VacationRoost, we are a vacation rental wholesaler. We don't own properties or anything.

The easiest way that I can explain it is we're kind of like an Expedia in the vacation rental space, except we actually layer on customer service in kind of a customized concierge experience to booking a vacation rental.
We are an aggregation of a bunch of smaller companies. That's how the company grew over time.

And one of the more important things is to realize that we have a lot of different websites, and Web properties, and experiences for a lot of different customers. So that allows us to run a lot of experiments in different areas and sometimes it's all at the same time, sometimes we segment it out.

For a marketer, it's ideal, because I have this whole entire playground to essentially do whatever I want with, and we have a lot of control in my company as far as the marketing team goes, so we can pretty much do whatever we want every day, and it's great and nobody cares, unless things go down.

Burstein: I wouldn't say that no one cares because you're getting some pretty impressive results, right?

Hutchings: Right, that's all we show them: these types of things.

Burstein: So this really leads us to a fundamental question. Look at the results on the screen, getting a lot of lifts, running hundreds of tests.

How can marketers systematically run tests to achieve consistent lifts? That's what we're going to be talking about today. How do you set up a program of testing and optimization? Not just running one test here or there, but actually testing to learn to serve your customer better.

Hutchings: So I'll start by saying that for the entire testing process, my marketing team, we have a team of six, but it's really me and one other individual. His name is Colby Gilmore, and he would be a good reference, too. At the end of this, if you have questions, I can give you his information.

But he's a very analytical thinker and he creates these flowcharts for us. So you need somebody that maybe has this type of mindset to help you set a really good foundation for the process. I agree with all of this, but then I look at this and then I just go, OK, great. There is a lot going on there. I just skip past it, and I go to something like this.

Burstein: So tell us what this means. How do you support this iterative testing?

Hutchings: So we fully adopted this conversion heuristic. And then we said, "Well, if I come to a conference like this," which I have, "how can I make this pretty much the main portion of what marketing does in everything? How can I make testing a regular thing that's always going on, so that it is essentially my marketing effort, what we spend our time on?"

And so you have these stages that we have come up with that you have to hit every time you do a test. And when we talk about hitting it every time, it can be spending five minutes on each of these things or maybe for a larger test, it's spending multiple days, but you go through this whole process every time for any level test that we do.

Burstein: And we are going to walk through Ryan's process step by step, but I wanted you to pay special attention to this slide, because we are going to talk about two major challenges that we hear a lot from marketers when they're trying to test and set up a real process for testing. And this is the first one and the problem is technology. How do you overcome that, Ryan?

Hutchings: So when I started into this a few years back, when I really wanted to get into testing, I actually watched the MECLABS webinars a lot. I've been watching them for like six years. And I'd come up with these great landing page ideas, but half of them didn't apply to my company, and then I would come to a conference like this or other things, and eventually I got to the point of, "Alright, I'm going to start running tests."

And you can use third-party software like Google Optimizer or something like that, but then you're always kind of just one-off poking at pages or processes, and it wasn't really a full program, per se. It was just random testing.

Even with the random testing, it doesn't matter how many you do; they didn't make sense all together. And then the other problem that I'd run into, which I think a lot of companies do, too, is you have these third-party tools, and it's like a single-page tool. And so you get into it, and you're like, "Oh, let's optimize my website," and then you realize I can only optimize one page at a time, and you're like, "That would take me forever. I can't optimize product pages one page at a time in a way that's going to help me, or search result pages, or things on a large scale."

Then you start realizing there are two tracks to that. If I want to go at mass scale, I need some big solution, but I still want to do some one-off landing pages at the same time, and so that's how it kind of split into what we're looking at here. I came up with these two big buckets as a company.

You have to understand that whenever we come up with an idea, and we're open to every idea, it gets sorted into one of these paths, either that small test bucket or that large test bucket.

Like every single test and every idea that anybody has goes into one of these paths that we have in the company, and then you just define where it sits in the funnel and that's how you bucket it. And then, depending on the bucket, that's what determines what type of testing program we're going to eventually use to run the test.

Burstein: So how many people in here have a full program of testing, a real process set up? OK, so a good amount of you. So take a look at how Ryan did this. We're going to walk you through both of these types of tests, and I think it might help you improve your own processes or start your own.

So let's first look at this small test track and here are the steps you take. First you decide what to test, and this is critical, because there are so many things we can test. How do you prioritize? Resources are limited.

Hutchings: Right. They are super limited and just so that everybody knows, you can see here, we are going to start getting into some details. My style of presenting is a shotgun approach of just massive data and information, so it's going to be less motivational speaking and more of just stuff on the screen and you just take it.

So what we end up getting here is, here's a spreadsheet, right? We keep a manual spreadsheet in Excel that keeps track of every request for testing that comes in, and this is from everybody. The marketing team comes up with ideas, and it's also the CEO or CFO walking into your room and saying, "Hey, I was on the website last night and this button was orange. Wouldn't it be better in green?" You smile and say, "Yeah, sure," and then you've got to decide what to do with that, because you know they're going to ask you two months later, "I said green. What happened to green?"

So we log it all, and it all goes into here. This employee Colby, he's very organized, and so it allows us to keep track of it all, and then we also bucket in those columns.

There are certain areas of the website each test would apply to, and then where we are highlighting there, that confidence level where you see the 90% and 40%, we assign internally within marketing, which is essentially me and Colby. We look at it and say, we're pretty confident that if we did a test here we could get a fast gain. These other ones, we're not so sure about. That actually helps us prioritize which tests would be quick hits and which ones we might have to spend more time thinking about. At least it gives us a starting point.

Burstein: So once you've decided what to test, you have to focus on a conversion goal, so a question we often get is what should my bounce rate be? What should my clickthrough …

Hutchings: Right.

Burstein: What have you found with bounce rates?

Hutchings: So bounce rate is, again, different depending on the page, but as a general rule of thumb, I have this gauge, and anything over 50% is like on death row and we need to fix it. Our target is always to get to 30% or lower as far as bounce rate goes. And that includes all types of pages. It includes landing pages from paid campaigns. It includes organic pages, homepages, search pages, product pages, everything. So anything that bubbles above that 40% to 50% range is on alert for us.

Remember, we are still in the small testing mode here, so we are really still talking about individual pages at this point. Bounce rate is a good example. That's going to be a good starting point for a lot of people when you talk about individual landing page optimization. The first thing that bubbles up for a small individual page test is usually either the bounce rate or maybe a clickthrough rate, and that's it. Just focus on one thing.

Burstein: Let's talk about clickthrough. What do you focus on for clickthrough?

Hutchings: So for clickthrough rate, we find that we want at least a 60% clickthrough rate from a page to the next step, whatever that next step is. It could just be going into search results and browsing products. It can be viewing the actual product itself. It can be anything, but for that step conversion, we can get at least 60% on anything that's performing well.

Burstein: So once you've decided about the test, you've identified a conversion metric, you create a hypothesis. How do you create this hypothesis?

Hutchings: So everything is based on a heuristic, and that's all we use, for little individual pages and anything big, too, which we'll get into. Everything is based off of that, so whenever somebody comes up with an idea, we say, well, if you want to change the button, or the color, or an image, or the title, or whatever you're going to change on the page, it needs to fall into something within that conversion heuristic, and it needs to be justified by it. So if somebody wants to do a button that says this or that, or change the size of something, it has to draw back to this.

That's just a basic question most of the time. It's like, well, if you want to change that, are you going to impact the friction, are you going to decrease the anxiety, are you going to bolster the value proposition? And as long as they can answer that question, or it's justified by one of those things in the equation, then I feel like we do have a hypothesis.
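The equation the speakers keep coming back to is the MECLABS conversion sequence heuristic, commonly written as:

```latex
C = 4m + 3v + 2(i - f) - 2a
```

Here C is the probability of conversion, m the visitor's motivation, v the clarity of the value proposition, i the incentive to take action, f the friction in the process and a the anxiety about taking it. It is a thought tool rather than a literal calculation; the coefficients only indicate the relative weight of each factor.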

Burstein: And then by testing that way it actually helps you learn about the customer, right?

Hutchings: Yes.

Burstein: I mean you're not just making random changes.

Hutchings: Right. It's not so random at that point. Because a lot of things will get struck down if you can't answer it with the equation. That's what we use with everything, small and big.

Burstein: Then you actually get in and design the test. So let's take a look at one of your tests here. This is one of your PPC landing page tests where, as we talked about, you had to use your conversion heuristic. Tell us about some of the changes you're making here.

Hutchings: So this is, for instance, a Breckenridge ski vacation, going to Breckenridge Resort in Colorado. People would come in looking to find lodging for a ski vacation.

This would be the landing page they would see to book their lodging. And you can see these are totally different pages as you are looking at them. We look at it and say, OK, what do we want to impact, and we've identified a bunch of things, as you can see there, like the value proposition of our best rate, and maybe our salespeople, destination experts is what we call them, to help you plan a vacation.

And some of those things you don't see on the original page. Our hypothesis was simply: if we present that to the consumer, will they click through better or will the bounce rate decrease, one of those two things.

Burstein: Yeah, it looks like you're also trying to decrease friction and decrease anxiety with some of those supporting elements. And as we talk about the last step, how you analyze those results, let's take a look at those results. It's pretty nice.

Hutchings: Yeah, that one was pretty nice.

Burstein: Someone's got to pay attention to that one.

Hutchings: Right. That one went straight to the CEO. Amazing, 427%. But with individual landing pages, we did a massive redesign with all of these new things on the page, and a lot of times that's what we do with these landing pages. At least in our organization, it's less about swapping out one single element, even though you can still see big gains that way. We want to see something like this on a single page.

Otherwise, you start running into: is it worth my time to see five more people click through on a single landing page because I got a 5% lift? I want big change, and I want to see it like that.

I feel like you can do that only if you have, again, that heuristic guiding all of the elements that you're going to change on the page. Because if you don't have it, it becomes super random and you can't even isolate what your methodology was in the first place.

Burstein: And then once you've actually seen the results, you validate them as well, right?

Hutchings: Yeah. So you can see here, third-party tool, I know you told me not to say who it is, but this is Optimizely.

Burstein: We're vendor-agnostic at this event. We're not promoting anyone.

Hutchings: I know. You're sweating right now. I told you I shoot from the hip. Listen, there's third-party tools like Optimizely and Google has one. Don't use Google. But reality is when you talk about single landing page tests, you want to use one of these third-party tools that will do all of the testing for you for a single page, because you can plop this in, run a single page.

It spits out the validation, statistical significance even, however you want it, tracks it all, and then you can just run a bunch of these at the same time, so every single small test that we do just validates through the tool.

So I don't have to spend any more time running through the technical documentation or anything like that. Literally, it's a few clicks of a button for anyone on the marketing team and it goes through like it should.

Burstein: So as I told you in the beginning, there are a lot of things that you can learn from Ryan, but really two core things that we get questions about all of the time from marketers. One is the technology, which he's walking you through now.

The other thing is how do you justify this to the CEO, to a client, to shareholders? How do you even justify what you are doing to yourself? Because testing is one of the many, many things that you can do. You've got this great ROI template, so could you walk us through how you calculate the ROI on these tests and determine if they're worthwhile?

Hutchings: Right. And so I think that this is the key that really turned the tables for us, as far as making this into an entirely new process, but also something that everyone supports, because I can get numbers out of this, especially dollar amounts at the end of the day.

So it took a little bit of time, but you have to build a spreadsheet and you have to be very intimately knowledgeable with all the metrics and levers that impact sales in your organization. And so you, as a marketer, if you don't know what those numbers are and those levers are, it's going to be impossible to build out an ROI model that's going to even be fairly accurate, that you feel confident in.

So here you see what we basically built out and spent about maybe a week on these models, these templates. My problem was I wanted something where I could just easily put in, "Here's what my gains would be maybe on a test, and I want a dollar amount spit out at the other end."

That's all I need. I need either sales, and what we actually do is contribution margin dollars, which the CFO loves, but you have to have all of these things that you assume and that you know will happen if you get X number of people through the funnel.

So I have a key with all of the levers that have an impact once somebody comes into our sales funnel, and these assumptions, and then it will spit out this, and you should be able to do that within your organization. If you can't, it's worth spending the time doing this upfront so that at the end of the day, you have those numbers for you.
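VacationRoost's actual spreadsheet isn't public, but a minimal sketch of this kind of ROI model might look like the following. All of the lever names and numbers are hypothetical placeholders; the idea is simply that a lift estimate goes in and a dollar amount comes out.

```python
# Minimal sketch of a test-ROI model like the spreadsheet Hutchings describes.
# All numbers and lever names here are hypothetical placeholders.

def projected_gain(monthly_visitors: float,
                   baseline_conversion: float,
                   expected_lift: float,
                   avg_booking_value: float,
                   contribution_margin: float) -> float:
    """Return the extra contribution-margin dollars per month if the test wins."""
    baseline_bookings = monthly_visitors * baseline_conversion
    lifted_bookings = baseline_bookings * (1 + expected_lift)
    extra_bookings = lifted_bookings - baseline_bookings
    return extra_bookings * avg_booking_value * contribution_margin

# Example: 50,000 visitors/month, 1.7% baseline conversion, a 10% expected lift,
# $1,200 average booking, 25% contribution margin.
print(projected_gain(50_000, 0.017, 0.10, 1_200, 0.25))  # -> 25500.0
```

The same model runs in both directions: before a test, plug in the expected lift to justify the work; after validation, plug in the measured lift to report the dollars.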

Burstein: And this is predictive in some sense based upon the test, but then you can go back and you verify.

Hutchings: So it's predictive. I'll say I want at least 10%, which is what we usually do for small tests like bounce rate or clickthrough rate, and maybe up to 30% or 40%. That's what we do at the beginning, and then depending on where it validates, I'll plug it back into the tool and see what whatever we validated actually means dollar-wise at the end of the day. Yeah, it's certainly predictive, but it certainly helps a lot in justifying projects.

Burstein: Excellent. So now we are going to get into the large tests. Large tests tend to take more resources, so how do you get those resources to run those large tests?

Hutchings: So the ROI template was essential in doing that. Again, once you have an ROI template, you can look and say, "Look, if I were to increase the click-through rate from my cart to the purchase page, it would mean how much in dollars to the company?" And so you have an ROI template that would spit that out, and then it's pretty easy to stack five or six of those together and see what that means for your company.

And the large tests take development resources, so now we're out of the small platform that the marketing team has control over and back into, well, how do I do this as an organization, a big huge test across dynamic pages and products and things like that?

Burstein: You've got to work with IT. So let's take a look at now the process for those large tests. You have to decide, much like with the small tests, on what to optimize. You have that same spreadsheet there. But it gets a little different, because this is a look at your overall funnel. Do you want to tell us how that helps you decide what to test?

Hutchings: Yeah, so we created this funnel for my company, and the nice part is, again, I don't care about sharing the data. This is really how it is for the entire company. You look at the funnel and somebody comes in on the landing page, they see search results, there are properties that they look at. If they want to book, they go to the cart, and then they check out at the billing and they purchase.

What's really important for your ROI sheets, but also for understanding what your impact is going to be, is understanding, at each step of that funnel, what the conversion rate becomes. You see here, if somebody starts on a landing page and doesn't get to some of the other stuff that I want them to, that starting point has a 1.7% conversion rate at that level.

But as soon as I can get somebody down deeper, their conversion rate is changing. You know what value it is for a single visitor if you can get them further and further down in the funnel. So that's where it becomes a lot more valuable to you to decide where should I test, should it be at the bottom, should it be at the top, should it be landing pages, should it be other large pages? That's kind of prioritizing a lot of stuff for us.
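To make that funnel math concrete, here is a small sketch of the per-visitor value calculation he is describing. Only the 1.7% landing-page conversion rate comes from the talk; the deeper-funnel rates and the average booking value are hypothetical numbers for illustration.

```python
# Value of a visitor at each funnel stage: conversion-to-sale from that stage
# times average booking value. Only the 1.7% landing-page figure comes from
# the talk; the deeper-funnel rates and $1,200 booking value are hypothetical.
AVG_BOOKING = 1_200

stage_conversion_to_sale = {
    "landing_page": 0.017,     # from the presentation
    "search_results": 0.03,    # hypothetical
    "product_page": 0.06,      # hypothetical
    "cart": 0.25,              # hypothetical
}

for stage, rate in stage_conversion_to_sale.items():
    print(f"{stage}: ${rate * AVG_BOOKING:.2f} per visitor")
# The deeper a visitor gets, the more each one is worth, which is why
# bottom-of-funnel tests can justify heavier development resources.
```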

Burstein: And you can see VacationRoost's conversion rate starting at the top of the funnel there is about 1.7%, and this is also a question we often get: what is a good conversion rate? For the past nine months, MarketingSherpa has been working on an e-commerce benchmark study that was funded by eBay's Magento unit. We just released it on Monday, and here's one of the charts I wanted to share with you.

Looking at overall e-commerce rates, you can see they skew into that 1%, 3%, 5% range. It's very rare to get above that when you're talking about overall e-commerce conversion rates to a final sale, a final purchase, or in your case, I think it's getting that person to actually call, or get very close to the …

Hutchings: Right, yeah. I mean obviously those people are lying that are on the upper half.

Burstein: Yeah, that's part of it, too. So if you see your conversion rate skewed to like a 1.7% …

Hutchings: Yeah, if there's like one person that's 100% there …

Burstein: That's where most people are. Yeah, well, I think Tim gets your reaction.

Hutchings: They were two for two or something?

Burstein: Yeah, maybe. Alright. Then working with IT, how do you collaborate and work with IT? What do you need from IT? What variables?

Hutchings: So, I mean, there are a lot of companies that have robust testing platforms. For those that don't, essentially what I did was present IT the problem. I need to be able to say, "I want to run a test on all of our product pages and change the 'Book Now' button from green to blue. If I want to do that, what would that take?" What we came up with was we just developed our own. So you kind of have to, at this point, develop your own testing platform essentially, and it works on the server side.

So on our end, it's cookie-based. They come in, and every visitor that comes into our sites gets sent into a bucket. Either they are in the control group or they are in the test group. That actually happens 100% of the time now.

So even if we're not running a test, technically somebody is a control. It's 100% of the people, but it gets set upon entry on the server side, and then they visit the site, and their cookie duration matches our other cookie durations, which is anywhere from 60 to 100 days, depending on what we're doing, and then they are in that mode forever. So they will always see a test version or a control version of the site, and it doesn't matter where on the site the test or control is, but that's how they get set.

So we set that as a variable. It's in your custom variable section, if you look at your code, and then you can know that it's working and test it that way. Then we also tied in our back-end reporting system, so we have some reports that generate sales reports, lead reports in our CRM systems, and we spent the time development-wise, which took a few months, to tie those two together as well.

So you have all of this stuff happening on the website, but then I also want to be able to run a report on the back end that says, oh, for the control and test versions of the online lead form test, how many leads and bookings and sales did I get from each of those buckets, to be able to tie all of that back to an ROI and not just have it on the front end.
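VacationRoost's platform is proprietary and wasn't shown in code, but a minimal sketch of the server-side, cookie-based assignment Hutchings describes could look like this. The cookie name, the 50/50 split and the 60-day duration are assumptions for illustration.

```python
# Sketch of server-side control/test bucketing via a long-lived cookie.
# Framework-agnostic; the cookie name and 50/50 split are assumptions.
import random

COOKIE_NAME = "ab_bucket"
COOKIE_MAX_AGE = 60 * 24 * 60 * 60  # 60 days, within the 60-100 day range mentioned

def assign_bucket(request_cookies: dict) -> tuple[str, bool]:
    """Return (bucket, is_new). Every visitor gets a bucket, even with no test live."""
    existing = request_cookies.get(COOKIE_NAME)
    if existing in ("control", "test"):
        return existing, False          # sticky: they stay in that mode
    bucket = "test" if random.random() < 0.5 else "control"
    return bucket, True                 # caller sets Set-Cookie with COOKIE_MAX_AGE

# The bucket is then exposed as a custom variable to analytics and logged
# alongside leads and bookings in the CRM so back-end reports can compare buckets.
```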

Burstein: So talk to us about working with your IT department, because many marketers struggle. They get tripped up there, getting those IT resources; they can barely get stuff out the door. From what we've seen, you have such executional excellence. Did you really have to plan ahead on some of your campaigns to make sure that lead time is built into every project that you have?

Hutchings: Yeah, and when we set up one of these large projects right now, because we spent the time to set up the system, it takes about a week. So when we come up with the test, that's a week to launch and get it out the door with our IT resources.

So once we deliver it to them, it takes about that long. The easiest part is the technical parts, because that doesn't take very long at this point; it's more the developing of the designs that we give them. But about a week is the turnaround time for us.

Burstein: OK. So you have to identify that target conversion goal again, and this is where your funnel comes in.

Hutchings: The funnel comes in again. So those deeper-funnel tests are the large ones, again, that you have to have this custom platform for. You can see at every level of the funnel, those are the typical conversion aspects that you are trying to impact, and you have to have those dev resources for everything from search results down, as far as our e-commerce goes, because you have to have a large platform to do that.

Burstein: And your goal, like you said, is a 10% to 30% lift minimum. It's not worthwhile to do less than that, right?

Hutchings: Yeah, exactly.

Burstein: And then we can see your overall results: there's a steady rise in bookings and sales.

Hutchings: And this is since we've implemented this type of methodology. It's important to know, we actually always have dual tracks running.

That small and large test stuff you can run simultaneously a lot of the time. So we'll maybe have a large test running on a product page, for instance, and then I can still be optimizing individual landing pages, like PPC landing pages or email landing pages, at the same time. That allows us to get a lot of velocity, which impacts sales and conversion rates a lot faster than just one-offing everything all of the time.

You can have IT working on one big project, but you as a marketing team can be doing these little landing pages two, three times a week, depending on your volume.

Burstein: Keep them going. So again, you have to create that hypothesis, set up that validation sheet. So you are estimating how long it's going to take for that test to validate.

Hutchings: Right. So MECLABS has its own stuff that it provides that does this, but these are tools that you can actually just Google. If you Google it, there are people who have A/B test validation templates already set up for Excel.

So if you just do that you can download these Excel sheets, and what they'll do is allow you to basically stick in any number, and it will spit out the number of days that you can expect a test to run for, given your level of confidence and how much traffic you have coming into your website.

So we just create spreadsheets with these tools already set to go, and when I'm ready to do a test, I just plug in the traffic that I estimate, and it will spit out, well, you need to run the test for 40 days or 10 days, depending on the volume and stuff like that. It's all free stuff that you can find online.
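Those downloadable validation sheets are generally built on the standard two-proportion sample-size formula. A minimal sketch of the same calculation, with hypothetical traffic and lift numbers, might be:

```python
# Sketch of the duration math behind those downloadable A/B validation sheets.
# Standard two-proportion sample-size formula; all inputs are hypothetical.
from statistics import NormalDist

def days_to_validate(daily_visitors: float, baseline_rate: float,
                     expected_lift: float, alpha: float = 0.05,
                     power: float = 0.80) -> float:
    """Estimate days for a 50/50 A/B test to reach significance."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + expected_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    n_per_arm = ((z_alpha + z_beta) ** 2 *
                 (p1 * (1 - p1) + p2 * (1 - p2))) / (p2 - p1) ** 2
    return 2 * n_per_arm / daily_visitors

# Example: 2,000 visitors/day, 1.7% baseline, looking for a 30% lift.
print(round(days_to_validate(2_000, 0.017, 0.30)))  # roughly 12 days
```

Note that doubling the expected lift cuts the required duration by roughly a factor of four, which is one reason Hutchings aims for 10% to 30% lifts rather than tiny single-element changes.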

Burstein: Let's take a look at one of those other tests. The first one we saw a 427% lift. This one you added some security seals to it.

Hutchings: Yeah, this one was a rough one. So you'll see here in a minute, this is a lead form. You know, fill out a web form and submit to us. We knew this would have a big impact if we could get a gain on it. So I thought it would be interesting to show that this is one that we actually failed a lot on.

Burstein: You shortened the form.

Hutchings: Shortened the form, like everybody says in these conferences. We're like, "Yeah!" you know. No.

Burstein: Gave them the option to call.

Hutchings: We did that, and that one was based off of an actual MECLABS webinar that I watched where somebody did this layout. I'm like, done, you know. No, that didn't do it either; that failed. We added all kinds of things and went back and forth, and we failed four times. We did four tests that didn't work.

Burstein: But the next step, as you see, the next step is overall design and then you ran this test where you made many changes.

Hutchings: Yeah, and at this point, it was kind of a hodgepodge of everything. I had almost given up, because at some point, you put all of this effort into a lot of tests, and if they don't validate, you spend less and less time, the more tests you do, and at this point, you're going like, "Crap. Let's just, whatever, throw another one up."

And it worked. But every time, even when you get to that point, it's still based off of the conversion heuristic. Everything that we did was still based off of, "What am I trying to impact and improve or decrease in the form?" But it took five tries to do something like that.

Burstein: There is a bigger lesson here. One, Ryan is a very kind guy to come up here. It would be very frustrating if you just came up here and saw everyone had 400% lifts all day, and then you get back to work and you try these and you have some of these fails. He's very kind to come up here and share, "Hey, it ain't easy down in the trenches."

Hutchings: No. We had a lot of money riding on this one. We promised a lot, and it was a little rough there for a while, failing four times, and you're just like, "This sucks."

Burstein: The other key point is we often get asked, "Should I just run single-factor A/B tests, because really that's the way I know that one factor is the only thing that changed it?" If you're only running single-factor A/B tests all of the time, it can be hard to make an impact, right?

Hutchings: Yeah, like I said before, we actually don't do a lot of single-factor tests because, A, we don't have the volume where I feel it would pick up fast enough and, B, as long as we're following that heuristic, we find it works. Every time we do it, it's almost a radical redesign based off of the heuristic, and it works.

So I think that is the key if you're going to do that all of the time. You've got to still stay pretty focused on a strict methodology.

Burstein: So after this session, when you go out to the wall there and you see all of the different chances for optimization, if you're nervous about putting up your page, just know you're among friends; none of us are perfect. If Ryan is able to share his failed tests from this stage, feel free to share in small groups.

Hutchings: Yeah, it's like golf sometimes. You're like, "Why do I play this game? This is horrible." And you get like one good shot, and you're like, "Love this game, the best thing ever."

Burstein: A good walk ruined, right?

Hutchings: You're like, "I'm going out for the PGA."

Burstein: So finally, you have to analyze those results, and overall, you have a spreadsheet you use to analyze.

Hutchings: Yeah, so every day we're doing a big test, we log how many visits we get into the test on both versions and how many conversions in the control and the test group, and throw up a chart. Again, this is just a manually created spreadsheet and template. But we keep this template for every single test that we do.

We do manually log it every day. There is no easy way around that. It's like every day we get in there and plug in the numbers and see where it's at. And then that will automatically go to the other sheet, that validation portion that you saw earlier, that will tell us as soon as it becomes statistically significant.
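The check that validation sheet performs on the cumulative daily totals is typically a two-proportion z-test. A minimal sketch, with hypothetical visit and conversion counts, might be:

```python
# Sketch of the significance check a validation sheet performs on the
# manually logged daily totals. All counts below are hypothetical.
from math import sqrt
from statistics import NormalDist

def is_significant(control_visits: int, control_conversions: int,
                   test_visits: int, test_conversions: int,
                   alpha: float = 0.05) -> bool:
    """Two-proportion z-test on cumulative counts from the daily log."""
    p1 = control_conversions / control_visits
    p2 = test_conversions / test_visits
    pooled = (control_conversions + test_conversions) / (control_visits + test_visits)
    se = sqrt(pooled * (1 - pooled) * (1 / control_visits + 1 / test_visits))
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_value < alpha

# Example with hypothetical cumulative totals after a few weeks of logging:
print(is_significant(9_800, 167, 9_750, 214))  # True at the 95% level
```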

Burstein: And when Ryan is talking about logging every day, if you're unfamiliar with validity, you should look into that. Because as you can see, the conversion rates are changing every day, and it helps with validity to understand how the conversion rate changes over time as well. It's not just the whole aggregate result you get; you validate over time as well.

Hutchings: Yeah. So that sheet will automatically spit it out based off of the graph in the previous sheet that we created.

Burstein: You're logging all of these results. As we see, this is how your overall spreadsheet looks with the results, but this is what it really is here, the tracking through the funnel, right?

Hutchings: Right. So every test will, again, either fall into a large or a small test, and then within those, it falls into a piece of the funnel. So at the end of the day, again, we log all of these manually, and then we create a quick pivot table and say I've done 20 property detail tests and five search results tests, or whatever it ends up being, so we can at least see where we're focusing our time in the funnel.

Burstein: And lastly, you have the same ROI template as you do for the small test, which is important for the large test.

Hutchings: Yeah.

Burstein: What are your top takeaways for other marketers that they can do this as well?

Hutchings: I think the ROI spreadsheet is one, having that template beforehand built out so then literally all you have to do is plug in numbers and it spits out dollar amounts at the end. That's like key number one.

Your reporting system should be tied in. It takes a lot of work to get somebody from dev or IT who can make sure those numbers all work together at some point.

And then that funnel is also important to understand, because I think a lot of people get tripped up at the end of the day on, if I decrease the bounce rate on a single landing page, what does that mean for me? Before, I would just say, "I improved the bounce rate from 30% to 25%," and you'd be like, "OK," and that's as far as it ever got. And that's really not going to help justify a lot of time.

Burstein: We have a question right up here.

Audience Member 1: You mentioned having two people sort of leading this testing. Are you each in charge of a set of sites, since you have multiple sites, or do you kind of both do all of them?

Hutchings: We do it all. So with two of us, it's me and this other gentleman, we do all of the sites all of the time, and we're prioritizing that. We either say we want to test all of the sites with one big thing, or take one site and just do one thing on it, but we determine that, just us two. So I think another takeaway is that it's doable with a small number of people. It's just a lot of time for somebody, but it's worth it.

Burstein: Any other questions for Ryan? Do we have one up there?

Audience Member 2: Hello. Earlier on in the presentation, you mentioned that you decided to build an internal testing tool for the more complicated large tests rather than using the system, and I will not mention the name of the system. What was the reason for building it internally? I know that the system you're using enables you to test dynamic pages. It lets you do the things that you built the tool for.

Hutchings: Yes, some of it does, but some of the complexities of our code base and some of the pages that we have didn't flow into the tool very well; it would break. And I realized pretty quickly that even with these third-party tools, the complex pages, because of the code on the page as well as the dynamic elements, just wouldn't work.

We spent all of this time trying to make these changes to pages that were so complex, with a third-party tool that couldn't handle all of the JavaScript and the special elements, that you're just spinning your wheels a lot of the time and trying to send emails to their help desk or whatever, and it just wasn't worth it.

Then when we internally developed, the other benefit is it's way easier to tie into your sales system, for instance. Because we have a lot of offline sales that happen, so it's not 100% online. We have people calling us, and emailing us, and also booking online. So to have that tie-in was also essential, so it was worth it to invest on that level.

Audience Member 2: That makes sense. And so the spreadsheet that you showed, the spreadsheet where you track all of the results, some of that data that you track in the spreadsheet is also a duplicate of what you can see in the testing platform, in the tools. So do you maintain the spreadsheets for both types of tests?

Hutchings: Just the large. Yeah, the small tests we don't do the whole manual spreadsheet thing because the third-party tool will just spit out what we need for there and then I'll just plug it into the ROI sheet.

Audience Member 2: OK. That makes sense. And the last question, I think. So you have the complicated tests, where basically you test funnels, and it's probably safe to assume that some of the people who use those isolated landing pages will end up in the funnel eventually. And so you may have a funnel test running at any particular point in time, but then you and your team are also launching landing page tests.

Hutchings: Right.

Audience Member 2: How do you isolate the two, I guess, so that the funnel test, do you know what I'm asking?

Hutchings: Yeah, I get what you're saying. So if we're doing a large test, most of the time, that's always at the bottom of the funnel. And the bottom of the funnel requires all of that splitting and stuff like that.

And all of our smaller tests are at the very top, on the single landing pages. So for me, I can still get people on a single landing page testing a version of that landing page and still have them be tagged as a control or test for a piece further down the funnel. To me, it doesn't matter what happens once they get past that landing page, because they've already executed one version and they are into another set of tests, but it doesn't interrupt the flow of what they would have done in the first place.

Burstein: We have time for just one last question over there.

Hutchings: I can hear you. If you yell, I'll just repeat it.

Audience Member 3: Do you have a dedicated IT for the testing or how does IT prioritize the testing versus all of the continuous development that's going on?

Hutchings: Yeah, that's a struggle. We don't. I'm not going to lie, that sucks, too. Because what we end up getting is still competing for the IT and dev resources. We're not a large enough company where we have like five IT people dedicated to this. We have an IT dev department, and there are like six of them. It's the whole company's priorities, so I'm always battling it out with other areas to get that resource.

The only thing is that the ROI sheet at the end of the day has been vital for every single one of those. So when you go into those meetings and you're talking about doing something, and everyone always poo-poos on marketing, you're just kind of like, "Well, no one else can bring dollars to the table like I can, because nobody else will spend the time." So that's where those sheets come in handy for getting some of those resources: saying, "This will get this many dollars, so unless somebody else has something, sit down."

Burstein: So Ryan will also be at the cocktail hour tonight if you have further questions for him. I had the pleasure of working with him on his presentation the last few months. It's not just A/B testing; he's deeply knowledgeable about search engine marketing and lots of other topics. So please feel free to come and talk to him. Thank you, Ryan.

Hutchings: Thanks.