OREANDA-NEWS. April 18, 2016. Every email marketer knows testing is critical to success, yet not every email marketer knows how or what to test…or what to do with the results. Based on what I hear around the ClickMail offices, I’d say most marketers struggle just to test subject lines. And subject lines alone are not enough. Period.

Several years ago, Morgan Stewart published a list of 101 things to test and his list is as relevant today as it was back then. (Although I suspect he could easily do an updated version because rapidly evolving email technology means we have many new variables to test.)

So why do so many marketers limit their testing to subject lines? I suspect it’s due to a lack of knowledge about the why, what and how of email testing. And there is plenty of room for error: we regularly see marketers making the same common testing mistakes.

In this post, then, we offer an email testing primer to help you move beyond the simple subject line into testing that gives you far more data for refining and optimizing your email marketing.

The “why” of email testing

First there’s the question of “why,” meaning why are you testing? Your obvious answer might be, “to increase ROI, of course,” but that’s not specific enough to give you any guidelines for setting up your testing. You need to dig a little deeper into the “why.” And how do you do that? You start at the end.

Figure out what the “end” is by talking about specifics. Have conversations with your team. Move beyond simplistic statements such as “increase open rates” to figure out what you want testing to do for you. Ask yourself and your team questions like: What behavior do you want to affect among your subscribers? What is it you want to know? What are you going to do with the information you uncover? How can you get more actionable data from your testing? How will you use your test results, and what application will they have?

This kind of digging deeper can be very liberating, helping you to switch your focus from the testing itself to the results you want to get.

The “what” of email testing

Next is the question of what you’re going to test. Through these conversations, you might realize that your testing needs to help you solve a problem. Your email program might be suffering from something like:

  •       Low deliverability
  •       Low open rates
  •       Low click-through rates
  •       Low click-to-open rates
  •       Low conversions
  •       High spam complaints
  •       High unsubscribes

Maybe you’re not solving a problem, but simply looking for ways to improve (and there are always ways to improve email marketing). You can take the list above and flip it from negative (low) to positive (improve); each of these goals maps to a measurable rate (see the sketch after the list). In your discussions with your team, you might decide you want to:

  •       Improve deliverability
  •       Improve open rates
  •       Improve click-through rates
  •       Improve click-to-open rates
  •       Improve conversions
  •       Reduce spam complaints
  •       Reduce unsubscribes
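
Every item on these lists corresponds to a rate you can compute from the raw counts your ESP reports. Here is a minimal sketch of those calculations in Python; the function and field names are hypothetical stand-ins for whatever your ESP exports, and note that conventions vary (some teams compute click-through rate against sends rather than deliveries).

    # Minimal sketch: the core email rates, computed from per-campaign counts.
    # Field names are hypothetical; substitute whatever your ESP exports.
    def email_metrics(sent, delivered, opens, clicks, conversions,
                      complaints, unsubscribes):
        """Return each rate as a fraction of the appropriate base."""
        return {
            "deliverability":   delivered / sent,
            "open_rate":        opens / delivered,
            "click_through":    clicks / delivered,   # some teams divide by sent
            "click_to_open":    clicks / opens,       # CTOR: clicks among openers
            "conversion_rate":  conversions / delivered,
            "complaint_rate":   complaints / delivered,
            "unsubscribe_rate": unsubscribes / delivered,
        }

    # Example with made-up numbers:
    print(email_metrics(sent=10_000, delivered=9_600, opens=2_400, clicks=480,
                        conversions=96, complaints=5, unsubscribes=20))

Tracked per campaign, these rates become the baselines you’ll test against.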

Or you might have other specific behaviors you want to change or results you want to see. Determine what they are. 

The “how” of email testing

Ironically, many marketers have yet to master email testing, and it will only get more complex now that interactive emails are a reality. Today’s emails can have carousels, tabs, rollover navigation bars, accordion navigation in mobile emails…boom! All of a sudden you have all this other stuff to test too. It’s both awesome and crippling.

Yet that’s not the real challenge, at least not yet. The real challenge right now, and one that must be overcome before you even start thinking about testing anything complex, is staying on task. And this is the “how” of email testing that you’ll want to master first.

In our experience at ClickMail, we have watched marketers go from the simplistic subject line test to the other extreme: testing multiple factors all at once and ending up overwhelmed and confused, unable to tell which variant is causing which effect.

To avoid this trap of too much testing, we suggest you put together a learning agenda to keep you on task. Be very clear in determining what it is you want to learn, and test only for that. Once you’ve learned that lesson, you can move on to the next.

For example, consider a clothing company that wants to test whether gender-specific images perform better (i.e., sending men emails showing male models and women emails showing female models). At the same time, this company wants to test geographically specific subject lines, such as “Find Brand Z Jeans in San Francisco.”

If they test both the image and the subject line at the same time, how will they know the effect of either variant? Instead, they should have an agenda. First, they want to know whether gender-matched images impact the effectiveness of their emails (and we’ll say more about “effectiveness” below). So they test for that, learn, and adopt the results as a best practice. Next on their “learning agenda,” they want to know whether geographically specific subject lines make a difference. So they test for that, learn and adapt. And then their learning agenda will have a third question to answer, and so on.
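
To make the one-variable discipline concrete, here is a minimal sketch of how such a split might be implemented, assuming a subscriber ID and a gender field are available (both hypothetical). Hash-based assignment is a common way to get a stable 50/50 split: the same subscriber always lands in the same cell, and each named test varies exactly one factor.

    import hashlib

    def assign_variant(subscriber_id: str, test_name: str) -> str:
        """Deterministic 50/50 split; a subscriber always gets the same cell."""
        digest = hashlib.sha256(f"{test_name}:{subscriber_id}".encode()).hexdigest()
        return "A" if int(digest, 16) % 2 == 0 else "B"

    def pick_image(subscriber_id: str, gender: str) -> str:
        # Test 1 on the agenda: does gender-matched imagery lift results?
        # The subject line (and everything else) stays constant for both cells.
        if assign_variant(subscriber_id, "gender-matched-imagery") == "A":
            return "male_models.jpg" if gender == "M" else "female_models.jpg"
        return "standard_models.jpg"  # control: same creative for everyone

    print(pick_image("subscriber-12345", "F"))  # cell depends on the hash

Only once the imagery question is settled would a second test, keyed to a new test name, vary the subject line.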

Getting confident about testing does not have to equal getting crazy about testing.

What does success look like?

As you determine why you’re testing and what you’re going to test for, make sure you know what success looks like. This is the effectiveness mentioned above, and it should probably be part of the learning agenda you put together so you’ll know how to recognize your results.

Quite often, the so-called success metrics marketers look for are based on global standards that have little relevance to a particular industry or market. Your definition of success, therefore, should not be based on these kinds of standards, but on your own. Go back to the lists of potential problems and improvements above. How are you going to measure for each of those? Against your historical performance, that’s how. Is your open rate higher than before? Then that’s something to note and to benchmark.
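
As a trivial illustration of benchmarking against yourself rather than against global standards, assuming you keep a history of past campaign rates (all numbers here are made up):

    # Judge success against your own history, not industry averages.
    history = [0.21, 0.19, 0.22, 0.20, 0.23]   # open rates from recent sends
    baseline = sum(history) / len(history)     # your historical benchmark

    current = 0.26                             # this campaign's open rate
    if current > baseline:
        print(f"Open rate {current:.0%} beats your {baseline:.0%} baseline.")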

You must also keep in mind that results are not always clear-cut. For that reason, be sure you are paying attention to the big picture and tracking results beyond just the one factor you’re testing. For example, let’s say subject line A outperforms subject line B, resulting in a higher open rate. Don’t assume that makes subject line A the winner. Take a look at the other metrics. What happened to the click-through rate? Did it go down? Then your subject line might have tricked people into opening an email whose message didn’t deliver on the promise.
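
One way to catch this trap is to run the same significance check on more than one metric. Below is a minimal sketch using a standard two-proportion z-test (the counts are invented for illustration): subject line A “wins” on opens but loses on clicks, exactly the pattern of a misleading subject line.

    from math import sqrt, erf

    def z_test(success_a, n_a, success_b, n_b):
        """Two-sided two-proportion z-test; returns (z, p_value)."""
        p_a, p_b = success_a / n_a, success_b / n_b
        pooled = (success_a + success_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Opens: A looks like the clear winner (26% vs 22%)...
    print(z_test(1_300, 5_000, 1_100, 5_000))
    # ...but on clicks A trails (3.6% vs 4.8%), so the "win" didn't convert.
    print(z_test(180, 5_000, 240, 5_000))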

An email is made up of many small parts that add up to a whole, but those parts should be tested one at a time so you’ll know which change caused a different result, whether good or bad. Getting a handle on the why, what and how of email testing can help you ask the right questions, focus your scope, and know when you’ve got a winner.

About The Author:

Marco Marini has been at the forefront of email marketing since its inception as a channel. Prior to co-founding ClickMail, Marco developed pioneering email campaigns for CyberSource, eHealthInsurance, DoveBid and IBM Canada while holding key marketing roles in those organizations. ClickMail is a part of the Salesforce Marketing Cloud Partner Community in both the Channel and HubExchange programs.

Want more on the State of Marketing in 2016? Download the free report!