In this webinar, you'll hear from Amelia Showalter, who headed the email and digital analytics teams for President Barack Obama's 2012 presidential campaign.
Showalter discussed with Daniel Burstein, Director of Editorial Content, MECLABS, how the team maintained a breakneck testing schedule for its massive email program, offering an inside look at campaign headquarters and detailing the thrilling successes and informative failures Obama for America encountered at the cutting edge of digital politics. The campaign raised more than $500 million online from 4.5 million donors.
Regarding big data, Showalter explained that the team at Obama for America used "that data to make an experience more personal, because when you're talking to someone in-person, you remember things about them, and so I think that's where the data can actually make things more personal, not less."
In this webinar, you'll learn:
- Why subject lines mattered so much, and the back stories behind some of the most memorable ones
- Why the prettiest email isn't always the best email
- How a free bumper sticker offer can pay for itself many times over
- The most important takeaway from all those tests and trials
Download the slides to this presentation

Related Resources

- Email Testing: How the Obama campaign generated approximately $500 million in donations from email marketing
- Optimization Summit 2013 Wrap-up: Top 5 takeaways for testing websites, pay-per-click ads and email
- Email Marketing: 77% of marketers use website registration pages to build email lists
Video Transcription
Burstein: Hello and welcome to a MarketingSherpa webinar. Today we have an especially interesting story for you. If you have attended MarketingSherpa webinars before, you know what we do: we bring in marketers from all across the world, B2B, B2C, all different types of marketers, bring them here, and ask them your questions. Well, we have a really surprising marketer today, because when MarketingSherpa looked all across the world for our Optimization Summit, trying to find some of the most interesting optimization studies, we found one in a very surprising place.
As you look at this title, and we talk about how A-B testing generated more than $500 million in donations, there are very few campaigns that are quite that big, especially when we talk about donations. But this is one that every one of the callers has probably been exposed to: the presidential reelection campaign, and we're getting the inside story today. Now, I also want to warn you, hey, we know, it's a political story, but for just today, let's take off our Democrat hats or our Republican hats or our Independent hats and see how we can learn from a fellow marketer, and that fellow marketer, if I can call you that Amelia, is Amelia Showalter. She was the Director of Digital Analytics for Obama for America. Thanks for joining us today, Amelia.
Showalter: Oh, of course.
Burstein: I’m Daniel Burstein, the director of editorial content for MECLABS. I’m in our Jacksonville Beach, Fla. office right now, and Amelia is joining us from San Francisco, where incidentally we’ll be in a few months for Lead Gen Summit. How is San Francisco today?
Showalter: It's great, I just flew in this morning and it's blue skies and sunny as can be.
Burstein: Well hopefully that's a nice omen for this webinar, blue skies and as sunny as can be. And I want to invite you to participate in this webinar too. We are asking your questions to Amelia; we got many of your questions before this webinar and pre-populated the slides with them. We have an entire slide deck of Amelia's keynote from Optimization Summit 2013.
We're not following any order; we're going to hop around to all different slides based on whatever questions you ask, so we can answer as many of your questions as possible. You can ask those questions through the ReadyTalk platform, or you can use #SherpaWebinar to ask your questions, and also to impart your own advice on A-B testing and email. Through #SherpaWebinar I will also be sharing a few other pieces that we've created with Amelia Showalter, along with Toby Fallsgraff, another member of her team. We have some case studies, we have some video, some information on their talk. You can see those resources right here, but we'll be including those on the hashtag as well.
So with that, let’s jump right into it, and Amelia, so you had a pretty darn big challenge, right, when you started with the Obama campaign. In 2008, you guys raised $750 million. You knew that would not be enough for 2012, so we know that A-B testing of email was clearly important, looking in hindsight. We have a question here from Tulane, who’s a consultant. Tulane asks, “What strategy changes have you made from the beginning of the first campaign until today, as a result of the changes in the digital landscape?”
Of course, today the campaign's over, but I want to ask you, you know, when you started in 2012, you know, the digital landscape had changed a lot since 2008, so what were some of the factors that you kept in mind, and why did you choose to do what you did?
Showalter: Sure, well, I mean a lot of what we did was actually just building on what we did in 2008. I mean, certainly in 2008 there was some level of A-B testing and segmentation, but really what we did, we took it all to a new level. You know, I think a classic example is in 2008, when we were asking people for money in an email, you know, people would get these sort of personalized asks. Some people would get asked for $5, and some people would get asked for $50, and so they would have, you know, maybe four or five different buckets like that in 2008.
But in 2012, we actually, we worked on a very complex formula to determine the best ask for each person, and basically that ended up leading to hundreds of buckets for people at all the different levels. So, I mean, it wasn't as if it was a different concept from 2008, but we just sort of took everything to the next level.
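For marketers who want to picture the mechanics, here is a minimal Python sketch of how donation history might be mapped to a personalized ask amount. The bucket values, the 20% "stretch" factor and the function names are purely illustrative assumptions; the campaign never published its actual formula, which Showalter describes only as very complex.

```python
# Illustrative sketch only: the campaign's real ask formula was never published.
# This shows one simple way to turn donation history into a personalized ask,
# snapped to discrete "buckets" so each donor sees a sensible suggested amount.

ASK_BUCKETS = [3, 5, 10, 15, 25, 35, 50, 75, 100, 250, 500, 1000]

def suggested_ask(highest_previous_gift: float, gift_count: int) -> int:
    """Nudge past givers slightly above their largest previous gift."""
    if gift_count == 0:
        return ASK_BUCKETS[0]               # new prospects get the lowest ask
    target = highest_previous_gift * 1.2    # hypothetical 20% stretch factor
    for bucket in ASK_BUCKETS:              # snap to the nearest bucket at or above target
        if bucket >= target:
            return bucket
    return ASK_BUCKETS[-1]

# Example: a donor whose largest gift was $40 would be asked for $50.
print(suggested_ask(40, gift_count=3))      # -> 50
```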
Burstein: So it sounds to me — segmentation. You guys just blew segmentation out of the water and took it to a whole new level.
Showalter: Yeah, I mean particularly in the ask, you know, in segmenting people based on their donation history, we did a lot of that. Although I will say, you know, we also, people thought that the campaign did a huge amount of segmentation, and based on demographics or other sort of, you know, creepy variables that we know about you, and that’s actually not the case.
Typically, you know, when we tested messages, there were many different versions of the same email, we would send them out to many different small pieces, randomized pieces of our list, and then we would send that winning message out to everyone, and we would find that the message that would win, would win among all different demographic groups.
So we actually didn't micro-target our email program all that much. We did do some little tweaks based on people’s past behavior, like how they’d been involved in the campaign, if they had donated before, if they had volunteered, you know things like that, but in terms of segmentation, it actually wasn't this, you know, crazy, uber micro-targeted [inaudible 05:41] that people think we did.
Burstein: Well that’s interesting. I think a lot of people think that because direct mail for politics seems to be very segmented. I know, like I said ... I’m here in Jacksonville Beach, Fla. It’s a swing state, I get a lot of direct mail from political candidates, and I can definitely see how they tie into certain of my interests. So, why did you choose not to micro-target and segment?
Showalter: Well, I mean we looked at the data, and it just, you know what we found is that, you know, a lot of the variables that we had for some people didn’t matter as much as finding that good universal message that works for everyone. I think the major thing, you know, my background actually is in micro-targeting, and before I did digital work, I was doing micro-targeting models.
But digital, you know, we’re just communicating with our supporters. These are people who have already signed up to receive our emails or have donated before or are interested in the campaign, and so it was a very different group. It wasn't, we weren't using our email program or our website to persuade in the same way that direct mail is used to persuade voters.
Burstein: OK, well, let’s kind of start at the end if you will, and look at the results of this campaign, and so, obviously as we know, for the ultimate KPI, congratulations, you were successful, the president was reelected, but let’s take a look at some of the results. Can you tell us about some of the results from the campaign?
Showalter: Yeah, so, we raised more than half a billion dollars in online donations. That was about half of the campaign's total, so, you know, we knew that 2008's $750 million total wasn't going to be enough. The 2012 campaign was more than a billion dollars, and about half of that was just from the digital department, from our efforts.
Burstein: And I believe when we were talking at Optimization Summit, you said maybe $200 million could be directly attributed to the improvements from A-B testing, is that correct?
Showalter: Yeah, that's just a really rough estimate. I mean, one thing is that when we were working on the campaign, we were actually working so hard to run all those tests that we didn't always keep perfect track of exactly, you know, what results were long term. It's hard to calculate this stuff out when we want to put all our resources into running more tests, so we don't actually have a perfect estimate of how much extra revenue was due to our testing, but I think that $200 million is a fairly reasonable estimate.
Burstein: Well, speaking of measurements, we have a question here from Jennifer, she is a senior manager. She wants to know your thoughts on measurement. What is the best metric? Is it clickthrough rates, is it conversions, what do you think?
Showalter: Well, for ads we always went directly to donations, or at least if we were looking at donation emails. You know, we might look at the opens and clicks, but that was never how we made our decision. You know, usually we’d send out between 12 and 18 different variations of an email before sending the winner to the remainder of the list, and we were just looking at donations.
I think it’s nice to know about clicks and opens, but we want to go directly to the end goal, which for a lot of our emails was donations. And you know, we used the same method if we were trying to, for instance, get people to volunteer. We would go directly to volunteer sign-ups. So we would have a sign-up page, and rather than just sitting back and looking at how many people click through to that sign-up page, we actually would look at how many people filled out the form and committed to volunteer.
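Here is a minimal sketch of that decision rule, assuming you track sends and donation revenue per variant yourself. The variant names and numbers are invented for illustration; the point is simply that the winner is picked on revenue per recipient (or sign-ups per recipient), not on opens or clicks.

```python
# Minimal sketch: pick the winning variant on the end goal, not on opens or clicks.
# The counts below are placeholders, not real campaign figures.

variants = {
    "A": {"sent": 10_000, "donations": 210, "revenue": 8_400.0},
    "B": {"sent": 10_000, "donations": 185, "revenue": 9_950.0},
}

def revenue_per_recipient(stats: dict) -> float:
    return stats["revenue"] / stats["sent"]

winner = max(variants, key=lambda v: revenue_per_recipient(variants[v]))
print(winner, revenue_per_recipient(variants[winner]))
# -> "B 0.995": B wins on revenue even though A produced more individual donations.
```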
Burstein: Let’s take a look at some of those tests now and so, in some of those tests you had some small, incremental gains, one or two percent gains, and in this example a five percent gain, you know, those are always good. We had a question about how you tested and who you tested. Patricia, a senior director of development wants to know if there was any pre-qualification before establishing the a/b segment. Did you rule out the very bottom, for example? Or was it totally random?
Showalter: Well, for the Web, it was really just whoever was coming through to that particular page. For email, we did actually eventually stop running tests to our non-donors, people who had never donated before, only because they usually wouldn't respond quickly enough. Obviously, in any given email, some people who had never donated before would then convert and then become donors, but, you know, they might not necessarily respond quickly enough for us to make a decision, so there were some people that we cut out of testing.
I mean another thing, it’s funny that I’m on the west coast right now, I’m actually originally from the west coast, anyhow, cutting out for early morning tests, we would sometimes cut out the people who were on the West Coast from those test groups, because if we were sending something out at 7:00 a.m. Chicago time, that’s 5:00 a.m. west coast time, and if we have to make a decision within an hour, you know, the people that we’re sending to on the West coast aren't necessarily going to be awake. And so, if they're not helping us make a decision, we might as well not send them all these different variations, most of which are going to be, you know, middling, and will save them just for the winning message.
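A rough sketch of that kind of test-universe filtering might look like the following. The field names (is_donor, timezone) and the 9 a.m. cutoff are hypothetical placeholders, not the campaign's actual rules; the idea is simply to limit the test cells to people who can respond before the decision deadline.

```python
# Hedged sketch: trim the test universe to people who can respond in time.
from datetime import time

def eligible_for_test(person: dict, send_time_ct: time) -> bool:
    """Keep prior donors, and drop Pacific-time recipients for very early sends."""
    if not person["is_donor"]:
        return False                      # non-donors responded too slowly to call a winner
    if send_time_ct < time(9, 0) and person["timezone"] == "US/Pacific":
        return False                      # 7 a.m. Chicago is 5 a.m. on the West Coast
    return True

audience = [
    {"email": "a@example.com", "is_donor": True,  "timezone": "US/Central"},
    {"email": "b@example.com", "is_donor": True,  "timezone": "US/Pacific"},
    {"email": "c@example.com", "is_donor": False, "timezone": "US/Eastern"},
]
test_universe = [p for p in audience if eligible_for_test(p, time(7, 0))]
print([p["email"] for p in test_universe])   # only a@example.com qualifies for a 7 a.m. test
```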
Burstein: OK, another question about how you tested. So if we look at another test you did, it's interesting, because you tested almost everything, and here you can see on the screen some of the unsubscribe language that was tested. And we had a question about how unsubscribes affected A-B testing. Kim, who's a senior marketing manager, wants to know, "Please include any insights as to how inbox delivery versus spambox plays a role in a/b testing and winner selection." So I wonder, when you looked at your results, did you look at any metrics like inbox placement or delivery ratings or anything like that?
Showalter: Not so much, I mean unsubscribes would be part of it, or bounces, it would be a metric we’d look at, but it wasn't, I think a campaign has less reason to be concerned about some of these things because it’s a finite venture. You know, our campaign was going to be over, no matter which way the election went.
And so, you know, looking at things like unsubscribes, we want to prevent unsubscribes whenever possible, but it’s maybe a little bit less of a major metric for a campaign than it would be for other organizations. And then in terms of spam, you know, one thing that was very helpful about our switch to testing mostly to donors, people who had already donated before, was that they were much less likely to flag us as spam, and they were our very best supporters, so some of that maybe wasn't always intentional, but it did seem to help.
Burstein: So, as we talked about, you tested many different elements. In this example you tested, I believe, the name that was sent to, and also the amount they donated. Some of these are some really advanced tactics. Elizabeth, who’s an executive director, wants to know, where the best place to start is, you know, using baby steps. Where would you advise someone to start with a/b testing?
Showalter: Well, I do think that, you know, you do want to take baby steps, and just dividing your full list into two pieces is clearly the easiest thing to do. So obviously when you divide your list into two pieces, you're not going to have any list left over to send the winning version to. So what you want to do is, start with things that will be useful on the next email that you send, for instance.
So, rather than testing things like drafts and subject lines, you know, which are sort of more ephemeral, and aren't necessarily going to produce useful information for the next one, maybe you want to test the formatting of your message. You know, on the Obama campaign, we found that cleaner emails were much better than highly formatted, graphics-heavy emails. You know, occasionally we would send out graphics, and at times we'd send out animated GIFs, but in general we found that just a plain white background, you know, not a lot of this extra formatting, was better.
And I actually think that, you know, maybe people have started to tune out the sort of highly formatted emails, and so that might be a good place for people to start. For one because it’s an interesting thing to test, but also because it’s something where if you just divide your list into group A and group B, and test your usual email that might have a colored background, you know, and special fonts and graphics and stuff versus a much plainer email, you’re going to learn something for the next time around.
Burstein: Excellent, and one other thing you learned, and I think this is one of the most interesting tests, because this is something we get the most questions about, is how much email you should send. I know you even did a little testing on when you should send. So let me ask a few questions we had from our audience about this right now. Maybe you can enlighten us about some of the things you learned.
Justine, a director of development asked, "Does it matter what time of the day emails get sent out? Or what day of the week? What is the maximum of emails an organization should send out in a month, and what’s the minimum?" Katherine wants to know, "Can we send too many emails," and Patricia, the always intriguing time of day, day of the week issues. So I think you did some testing to try to get to the heart of both of these questions, right?
Showalter: Yeah, so we did a little bit of time of day testing, and just never found it very conclusive. We basically found that sending really late at night or really early in the morning is a bad idea, but other than that, we just didn't get much out of it. You know, in fact we tried sending to people who had donated at very specific times of day, and only those times of day, we tried sending emails to them at their preferred time of day, and that didn’t actually help increase donations.
I mean, we did test all these things, and my main conclusion is that time of day testing should be pretty low on your priority list. I mean, if you have a finite amount of time and staff resources, there are just other, more interesting things to test than what time of day to send, is my opinion.
In terms of day of the week, you know, we didn't really test that because we were sending pretty much all the time, you know, we sent emails pretty much every day, so our calendar was much more determined by the political calendar, so there wasn't much of a, we weren't really in a position to say, well, okay, let’s try sending this email on Tuesday to half the people, and then let’s send an email to the other half on Thursday. It just wasn't something that we could do.
But what we did do is this, as you see on the screen: we did a longitudinal test that we called the "More Email Experiment," where we selected a group of people to receive additional emails, and it just turned out that sending more fundraising emails got us more donations. We also had more people unsubscribe, but it wasn't as if it went out of control; it's not as if, you know, by sending twice as much email you'll get four times as many unsubscribes. And basically, what we determined is that at least for our campaign, and this may not be true for everyone, for our campaign in the months that we had left at that point, it was better to just send a lot more email, and I think it really helped us to the tune of $20 million to $30 million in revenue.
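To make the shape of that longitudinal comparison concrete, here is a minimal sketch with invented numbers. It simply compares revenue per person and unsubscribe rate between a "more email" group and a control group; the campaign's actual figures and analysis were more involved than this.

```python
# Illustrative "more email" holdout comparison; all numbers are made up.
groups = {
    "control":    {"people": 100_000, "revenue": 250_000.0, "unsubscribes": 400},
    "more_email": {"people": 100_000, "revenue": 310_000.0, "unsubscribes": 650},
}

for name, g in groups.items():
    print(name,
          f"revenue/person = ${g['revenue'] / g['people']:.2f}",
          f"unsub rate = {g['unsubscribes'] / g['people']:.2%}")

# Decision rule: extrapolate the per-person revenue lift over the rest of the list
# and the remaining weeks, and weigh it against the extra unsubscribes.
lift = (groups["more_email"]["revenue"] - groups["control"]["revenue"]) / 100_000
print(f"lift per person = ${lift:.2f}")
```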
Burstein: There is no upper limit to how many emails people want from President Obama, and I love that your team put together this shirt, "Four more sends," based on that.
Showalter: Right, right, because, you know, four more years was a rallying cry of the Obama campaign, and so when we determined it was better to send more emails, "Four more sends" became our battle cry.
Burstein: Beautiful, so let's get into subject lines a little, and we have a question here from Jamie, who's in operations. We're going to look at some of these tests, but could you talk to me at a high level about any grouping you can do of your subject lines? What worked and what didn't with the subject lines of your emails? So I know we're going to talk about some specific subject lines, but are there any types of subject lines that tended to work better than others?
Showalter: Well, we tended to find that shorter subject lines worked better, and that less formal, more personal subject lines worked better. I mean, you know, in this particular case you've got up on the screen, the subject line "Hey" had been doing really well. We used that a bunch of times, from the President. This is the only time that "Hey" actually lost, and it lost to "Name," meaning the person's first name was inserted as the subject line. And that, I guess, you know, felt more personal.
Now I think this is maybe overused and I’m not actually going to recommend this, but putting a colon at the end of the subject line, no matter what it is, tends to get people to open it. I kind of feel like that is going to get played out pretty soon, because everyone is doing that. So there’s a few little tricks.
We did for a while, we tried a few of the special characters, little, you know, icons for airplanes or sunshine or something like that, and it sort of works, but it doesn’t work every time, and even though shorter subject lines tend to work, sometimes longer ones would work better. Basically, what it meant is, you know, when we tested these things out and got a sense that something was working, it’s not as if we would shift entirely to short, informal subject lines. We would still test out, we might just change the mix, so if we were testing out ten different subject lines, maybe we’d have a few more of them be shorter and informal, but we’d still keep other possibilities in the mix, just in case things changed.
Burstein: That's interesting. I was just going to ask you as a joke if you tested special characters, but my hat's off to you that you literally tested everything. So there's a question here from Jeff, a content marketing specialist. "Is there any such thing as over-testing? I read the Obama campaign would test three subject lines, if you have a big enough sample, why not test 10?" And so I just want to tell people what they're looking at right now. This is a test we're about to look at where you tested three different subject lines across six different versions of the email, for a total of 18 treatments. So, to Jeff's question, I'm sure you had a massive list. I don't know if you can share how big, but how would you decide the limit on how much you would test?
Showalter: You know, I wish I had the perfect answer. I mean, I think a lot of it was that, you know, we had a certain number of drafts and subject lines that could get written in the amount of time that we had, and we had a huge team of email writers that were writing all the time, so there was an actual upper limit to the amount of content we could produce. You know, it is a good question. You want to test lots of different options, and we certainly found that we weren't very good at predicting which option would win, so it was good to try a lot of different things, but you also want to have enough list left over to send the winner to.
So, I don't have a great answer for that. I think it’s sort of like, we sort of, I don’t know, came to a good, it just sort of felt like the right amount, and that ended up being about 20 percent of our list, but then that 20 percent would get broken into many small pieces, and for all the different versions.
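Here is a minimal sketch of that kind of split, assuming a plain Python list of addresses. The 20 percent test share and the 18 cells mirror the rough figures mentioned in this webinar, not an exact campaign specification.

```python
# Sketch: hold out ~20% of the list for testing, dealt into equal cells,
# with the remaining ~80% reserved for the winning version.
import random

def split_for_test(addresses: list, test_share: float = 0.20, n_cells: int = 18):
    shuffled = addresses[:]
    random.shuffle(shuffled)
    n_test = int(len(shuffled) * test_share)
    test_pool, holdout = shuffled[:n_test], shuffled[n_test:]
    cells = [test_pool[i::n_cells] for i in range(n_cells)]   # one cell per draft/subject combo
    return cells, holdout

cells, holdout = split_for_test([f"user{i}@example.com" for i in range(1000)])
print(len(cells), len(cells[0]), len(holdout))   # 18 cells of 11-12 addresses, 800 held for the winner
```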
Burstein: OK, as we said, for this example there are six versions and three subject lines. I'm not going to read each version, but I'm going to briefly click through each one so it shows up in the YouTube replay, and you can also view the slides here if you want to see each version in full. Then we're going to show the results, and we're just going to show you the winners, so you get a sense of what the overall winner was.
But first, I did want to thank our sponsor of this webinar, Act-On. I'm going to show the results in just one moment, but let me tell you that Act-On Software is the leading provider of cloud-based integrated marketing automation software. It was recently named to Forbes' America's Most Promising Companies, a list of 100 U.S.-based, privately held, high-growth companies with bright futures. Companies of all sizes turn to Act-On to execute multichannel online demand generation and lead nurturing campaigns by automating critical marketing tasks and providing rich analytics and reports in real time. Act-On's 1,400-plus customers range in size from small and midsized businesses to departments of large enterprises across all major industry verticals, including technology, manufacturing, healthcare and finance. And you can learn more at act-on.com.
Also, I do want to let you know that Amelia was a keynote speaker at Optimization Summit 2013. We are just announcing for the first time, and you are the first audience to hear this: Email Summit 2014, one of the biggest email events in the industry, will be in Las Vegas at the Aria Hotel, February 17-20, 2014. Save the date, we do hope you join us for that.
And now, as promised, here are the results. So, the winning treatment, I believe, generated $2.2 million in additional revenue compared with sending the worst draft, right? I mean, this is a pretty significant difference.
Showalter: Right, and even if it was just average, even if we just chose one at random, you know, it could be a pretty big difference.
Burstein: But for this one, the winning treatment, just so you know, and you can again look through them in more detail, used the recipient's name as the subject line, so it was personalized, and it began, "This is my last campaign, and I'm ready to give it all I've got." So, I don't know if you guys created a hypothesis in the beginning and then at the end of the test said, "Here's why this one won," or if you were just moving very fast: push this winner out and move on to the next test?
Showalter: No, I mean we learned pretty quickly to not overthink things. I mean, some of this really is ephemeral, you know, and we would have draft language that did really well one day, and we would test out something really, really similar a week or two later, and it might not perform well at all. There’s something just sort of ephemeral about it. And, you know, there were some things that we kept testing, and they would win frequently. You know, for most of the summer of 2012, the message that we were going to be outspent by the Republican side, that was very effective. But eventually, that stopped being effective, and so it’s good that we kept testing against other things.
Burstein: So, speaking of ephemeral, your subject lines became so popular that this meme got going where people added the subject lines to pictures of Ryan Gosling, right?
Showalter: Yeah, there’s a Tumblr that was started with our subject lines with Ryan Gosling photos.
Burstein: So, we wanted to know, sure, the president, he’s kind of known as somewhat of a hip guy, one of the hipper presidents of the United States, and so, yeah, a subject line like "Hey" might work for him, but Luke Thorpe, who’s actually our A/V director here, he’s managing this call right now, he mentioned in one of our meetings, "Well, would 'hey' work for us?"
And so, we had to know, would a subject line like "Hey" work for a company like us, a bunch of marketing researchers who are anything but hip like the President, and I certainly can't play basketball like him. So we tested it, we tried it out. So, Amelia, this is the little surprise that I wanted to tell you about. Amelia and I prepped for this call, but I didn't want to share this beforehand; I want to get her reaction to some of our own humble tests. So here's our first test: a typical subject line, "A live webinar about how A-B testing generated $500 million in donations," with the first line, "Come see an inside look at how this happened." The second subject line was "Hey," and we said, "Tune in this Wednesday to see why we used the 'Hey' subject line."
Now, let me tell you, Amelia, my own hypothesis before this: when Luke had that idea, I wanted to do it because I thought it wouldn't work, so then, live on the webinar, I could give you a hard time and say, "Hey, your subject lines aren't working," as a bit of a joke. But the bigger lesson for the audience is this: when we talk about a specific subject line or a specific test that worked for Amelia and the Obama campaign, or any specific marketer, we're really not saying, "Hey, do this specific thing in the same way." We're trying to show you, and hopefully we've done that in today's webinar, how Amelia and her team learned what was working and what wasn't, and the process they used to push that out into their emails, so you can do the same for yourself.
So again, my thought was that this would not work at all for our audience, and that I could make that case and give you a hard time, Amelia, but it worked really, really well.
We did two experiments, and you can see the numbers on the screen there: we got a 33 percent lift in open rate in one, a 50 percent lift in open rate in the other, and an over 200 percent increase in clickthrough in one, all at a 99 percent level of confidence, from just a quick little experiment.
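For readers curious how a lift like that clears a 99 percent confidence bar, here is a standard two-proportion z-test sketch with placeholder counts. This is the textbook calculation, not necessarily the exact methodology used for the experiment above.

```python
# Two-proportion z-test on open rates; the counts below are placeholders.
from math import sqrt, erf

def two_proportion_z(opens_a, sent_a, opens_b, sent_b):
    p_a, p_b = opens_a / sent_a, opens_b / sent_b
    pooled = (opens_a + opens_b) / (sent_a + sent_b)
    se = sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided p from the normal CDF
    return z, p_value

z, p = two_proportion_z(opens_a=1200, sent_a=10_000, opens_b=1600, sent_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")   # p < 0.01 would clear a 99 percent confidence bar
```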
So, while my original lesson was, you know, you’ve got to test, because the exact things Amelia is telling you won’t work for you, well, maybe they will. But I guess the other, bigger lesson I really wanted to make is that you can’t guess at what will work, because that’s why we test, right? I think you had that challenge too, Amelia, right?
Showalter: Sure, oh yeah, sure. I mean, there were things that I sort of, you know, thought, I have a great idea, I'm sure this is going to work, I'm going to be the hero because I've thought of something brilliant, and then my idea totally failed. And that was fine, I'm not so egotistical that I can't take the hit to my ego, but you know, hopefully we would learn from it and say, all right, why don't we do the opposite of what I just did? And let's try that out instead.
Burstein: And so we have a question, I think it ties in well. David wanted to know how many staff members managed the actual email building and sending. So do you want to tell us a little bit about your staff and, you know, not who was better or who was worse at guessing which emails would work out, but, no seriously, who was kind of involved in building this?
Showalter: Yeah, yeah, I've been seeing questions in the chat about staffing. So there were 15 analysts on my team, and they had various skill sets. It was a lot of people with a stats background, or a database background, not necessarily people who had worked in politics before, or even in marketing. We had a couple of people who had done marketing before, but really I was just looking for smart, quantitative people, and we did a lot of on-the-job training.
Now on Toby's team, he was the head of the email team, and they had 18 email writers and 4 people doing social media, and I think having all those different voices was really good because, again, we tried out 4 to 6 different drafts for every national email, and to get that kind of variety, I think you have to have different people with different voices.
And those are just two teams within the digital department. I mean, the digital department itself had over 200 people by the end of it, and some of those people were doing online ads, you know, creating our website, doing our video and YouTube and all of that kind of stuff. So we had just a huge, huge team, just in the digital department, and it was really great to have that many people on hand.
Burstein: So we’ve got about two minutes remaining, let’s take a look at a few of your top lessons here. I mean, one, you said, I mean, for you, as I think we’ve definitely reiterated many times in this webinar, testing clearly wins. I mean, these are some impressive numbers beyond the money raised, you helped recruit two million volunteers?
Showalter: Yeah, I mean, it was the same exact procedure, the same testing methods.
Burstein: Yeah, and you also mentioned the use of data. You were not so worried about using a lot of data with your audience. Why is that? You said big data does not equal big brother.
Showalter: Yeah, I mean I think that this is an issue where you want to make things personal, you want to use the data that you have from people, and again, these were our supporters, these are people who had signed up for our email list, they want to be involved. And so, you know, we would do experiments sometimes to look at how to personalize things. For instance, if we sent out an email to people asking for money, and we had a group that had already donated recently, maybe we'd drop a little extra line in there just to say, "Thanks for the recent donation." Sort of acknowledging it, or acknowledging that they had volunteered, or that they had signed Barack's birthday card last year, and would they like to sign it again this year?
So using that data will actually make an experience more personal. Because, you know, when you are talking to someone in person, you remember things about them. And so I think that's where the data can actually make things more personal, not less.
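A minimal sketch of that kind of conditional personalization might look like this. The field names and the copy are hypothetical, not the campaign's actual templates; the point is that a small piece of known history changes one line of the email.

```python
# Sketch: prepend an acknowledgment line based on what we already know about the supporter.
def personal_intro(supporter: dict) -> str:
    if supporter.get("donated_recently"):
        return "Thanks for your recent donation -- it means a lot."
    if supporter.get("volunteered"):
        return "Thank you for volunteering with us."
    return ""                                   # no extra line for everyone else

body = "Here's where the campaign stands this week..."
supporter = {"first_name": "Alex", "donated_recently": True}
email_text = "\n\n".join(filter(None, [personal_intro(supporter), body]))
print(email_text)
```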
Burstein: And speaking of personalization, we had a question here from Michelle: "Which test provided the most significant difference, for example subject line, layout, call to action, placement, etc.?" And I might be guessing wrong now, but I've just got one minute left so I want to ask you this: I think the biggest test was personalization, right? That had the biggest impact on what you did, tying it to the personal amounts that donors gave and those sorts of things, right?
Showalter: Yeah, I mean it sort of depends on your definition of impact. You know, probably running that more email experiment, you know, just being able to change our policy to send more email, that may have had the biggest impact in terms of sheer dollars. But yeah, personalization sometimes would have a huge percentage increase. So you know, among the people that we were doing that personalization to, I think in this particular case, the slide that we're looking at right now, I think that actually doubled the donation rate, which was pretty unusual. Not all the personalization had that great of an impact, but it was, you know, pretty amazing.
Burstein: Well, this has certainly been a pretty amazing case study Amelia, thank you so much for joining us today, I know you’ve been traveling all over the country, so it was really hard to be able to fit this into your schedule. Thank you very much for taking the time.
Showalter: Of course, thank you very much.
Burstein: Thank you everyone for tuning in. We hope to see you at Email Summit 2014, also, if you can, when we close out this webinar, there will be a survey. Please fill out the survey and let us know what we can do to improve these webinars for you. Thank you.