A successful nonprofit email has many parts: a subject line that draws attention, a message that sparks passion, a layout that’s easy to navigate, and a call to action that inspires.
At each of these points, you make choices:
- What subject line will you use?
- What message will you share?
- What suggestion will you make for next steps?
- What will the email look like?
And at each of these points, your supporters also have choices. They can decide:
- to open the email or not
- to keep reading or drift away
- to click on “donate” or close the email
What makes them say “yes” or “no”? Why do some emails perform better than others? What drives engagement?
You can try to guess, but you’ll get much better insights if you experiment. A/B (aka “split”) testing is a tool you can use to create the best version of your email, based on data instead of hunches.
As you learn what your audience responds to, you’ll be able to craft more and more relevant communications for them.
What Is A/B (Split) Testing?
A/B or “split” testing is a controlled, randomized experiment that shows how changes in your content or layout affect engagement. It’s a way to find out which of two versions of a piece of communication is more effective.
To conduct an A/B test, you create two versions of the content and send each to its own randomized subset of your audience. The version with higher engagement is then sent to the rest of your audience.
Let’s say you want to test your email’s subject line. Start with your original, which will function as your control:
Subject Line A: Give a Kid a Book!
Then try something else:
Subject Line B: Give the lifelong gift of literacy!
Now, we’re set to test one specific thing: Do people respond more to the idea of giving a single child a book, or to the bigger goal of literacy?
Next, use your email platform tools to send the emails to a randomized (that’s important) subset of your audience.
Then, look at your results. Which version performs better? That’s the one to send to the bigger audience.
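The workflow above can be sketched in a few lines of Python. This is only an illustration, not any real email platform’s API: the supporter list, the `ab_test_split` helper, and the 10% test-group size are all assumptions.

```python
import random

def ab_test_split(audience, test_fraction=0.1, seed=None):
    """Randomly split off two equal-sized test groups from the audience.

    Returns (group_a, group_b, holdout). The holdout later receives
    whichever version performs better.
    """
    rng = random.Random(seed)
    shuffled = audience[:]
    rng.shuffle(shuffled)  # randomization guards against selection bias
    test_size = int(len(shuffled) * test_fraction)
    group_a = shuffled[:test_size]
    group_b = shuffled[test_size:2 * test_size]
    holdout = shuffled[2 * test_size:]
    return group_a, group_b, holdout

# Example: out of 1,000 supporters, 10% get Version A, 10% get Version B,
# and the remaining 80% wait for the winning version.
audience = [f"supporter_{i}" for i in range(1000)]
group_a, group_b, holdout = ab_test_split(audience, test_fraction=0.1, seed=42)
# len(group_a) == 100, len(group_b) == 100, len(holdout) == 800
```

In practice your email platform handles this split for you; the point is that the groups are chosen at random, not by sign-up date, name, or any other pattern.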
What Questions Can Be Answered With A/B Testing?
There are a lot of elements that contribute to an email’s success, and you could ask many different questions. As a tool, A/B testing is best at answering specific “this or that” questions, like:
“Do more people open an email with their name in the subject line OR without?”
“Do people click more often on a red button OR a blue one?”
“Do more people make gifts when the donate button is on the top of the email OR the bottom?”
“Do more people make gifts when we tell a story about a cat OR a dog?”
It’s not good for answering questions like:
“Why doesn’t this email perform, in general?”
“Do more people click on this email or this completely different one, in which I’ve changed every element?”
How Do I Conduct an A/B Test?
Many email platforms have options for A/B testing, including Virtuous. Make sure your A/B testing tool allows you to select your sample size and executes randomization.
Watch Senior Product Trainer Rachel Specht conduct an A/B test in Virtuous in this video from the Responsive Nonprofit Summit. (The Virtuous demo portion begins at 10:48, but don’t miss Rachel’s great overview of A/B testing at the top.)
Why Should I A/B Test My Nonprofit Emails?
You can A/B test more than email, but email is a great place to start, since it’s flexible, relatively inexpensive, and you can get results and make changes quickly. Once you’re confident split testing your emails, you can try testing other kinds of communication.
Testing is about listening and learning, both important parts of The Responsive Framework. Responsive fundraising relies on listening to your supporters before trying to connect with them or make a suggestion of a next best step.
There’s a lot of noise out there, and it’s increasingly challenging to cut through it to attract your supporters’ attention. Irrelevant or uninteresting messages simply won’t make it. You need to use all the tools in your toolbox to make sure your important messages actually get to the people who are most interested in them.
Take a look at your email response rates. What signals might your supporters be sending? Are they clicking on links? Unsubscribing from your list? The way supporters respond to your emails is important data.
Rachel Specht, Senior Product Trainer at Virtuous, says, “[Nonprofits] all have different missions and groups of supporters that want to support your missions. Knowing what specifically is going to reach them is going to be important.”
When you regularly A/B test your emails, you:
- Get more insights into what drives engagement
- Aren’t risking much, since the majority of your audience will receive the most effective email
- Are able to listen more carefully to your supporters so you can be more responsive
Want to be more responsive? Get your free copy of the Responsive Fundraising Playbook.
A/B Testing Best Practices for Nonprofit Emails
1. Only test one thing at a time.
A/B testing is not for comparing more than two variants; keep it simple for the clearest results. Introducing multiple changes takes you from A/B testing into multivariate testing, which is a different beast. If you try to test a subject line and your messaging at the same time, for example, it will be difficult to know what supporters were responding to.
Test one thing on one outcome. “I’m testing one thing, because I want to be sure that the success of that test is due to that factor,” says Rachel. “I don’t want to introduce those confounding variables.”
2. Randomization is crucial
It’s very important that your subset is randomized. “If you don’t have a tool that executes randomization…you’re introducing selection bias. You’re not going to have results that you can draw a solid conclusion from,” says Rachel.
3. Know why you’re testing
Why are you conducting the test? What one, specific thing are you trying to figure out? You need a hypothesis in order to conduct an experiment, and reasons for choosing the variables you’re comparing.
Again, start with listening. If you’ve noticed that messages that focus on one area of your work have performed well in the past, see what happens when you reference it in the subject line. If you’re seeing that people open your emails, but don’t click on anything, see what happens if you make your emails much shorter.
4. Look for statistical significance
You’ll need a large enough sample size to know that the results aren’t just chance. “If I’m sending Version A out to five people, that might not be a great representation of my audience. You want to shoot for at least a hundred people,” says Rachel.
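If you want to check the numbers yourself, one standard way to test whether the gap between two open (or click) rates is bigger than chance is a two-proportion z-test. This is a generic statistical sketch, not a feature of any particular email platform, and the counts below are invented for illustration.

```python
import math

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference between two rates (e.g. open rates)."""
    rate_a = successes_a / n_a
    rate_b = successes_b / n_b
    # Pooled rate under the null hypothesis that there is no real difference
    pooled = (successes_a + successes_b) / (n_a + n_b)
    std_err = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_a - rate_b) / std_err
    # Two-sided p-value from the standard normal distribution
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Version A: 25 opens out of 100 sent; Version B: 42 opens out of 100 sent
z, p = two_proportion_z_test(25, 100, 42, 100)
# A p-value below 0.05 suggests the difference is unlikely to be chance
```

With tiny samples (five people per group), almost no gap will reach significance, which is exactly why Rachel recommends at least a hundred people per version.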
5. Choose a reasonable time range
How long will you wait between the test send and the winning send? Consider the day of the week and how long the test email has been sitting in inboxes.
6. Choose a success metric
How will you know people are engaging? Decide on the metric you’ll use to measure success. Is it email opens (which are a little trickier to decode these days), clicks, or gifts made?
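Whichever metric you choose, compute it the same way for both versions. A quick sketch with made-up numbers:

```python
def engagement_rates(sent, opens, clicks, gifts):
    """Common email success metrics, each as a fraction of emails sent."""
    return {
        "open_rate": opens / sent,
        "click_rate": clicks / sent,
        "gift_rate": gifts / sent,
    }

print(engagement_rates(sent=100, opens=40, clicks=12, gifts=3))
# {'open_rate': 0.4, 'click_rate': 0.12, 'gift_rate': 0.03}
```

Pick the metric that matches your goal before you send: if the test is about subject lines, open rate is the natural choice; if it’s about the donate button, clicks or gifts are.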
Keep Testing, Keep Listening
As you test, you’ll gain more and more insights about what drives engagement with your particular audience. That will enable you to make your content more relevant, building even more trust and interest. As you continue to refine your emails, you’ll get better and better at connecting with your audience.