Direct Mail Testing Is Worth the Time and Resources

by Elena Veatch


Measure Your Impact with Direct Mail Testing

Direct mail testing can strengthen your voter contact efforts in the long run by revealing what works and what doesn’t through concrete data. If you want to glean insights from your program, setting up an experiment is worth the time and resources, no matter the size or scope of your campaign. The worst-case scenario from testing is that your results are inconclusive—that’s a pretty great outcome when the alternative to testing is learning nothing at all. So, let’s talk about why you should test, what your options are, and what you should consider in building a data-driven campaign.

Why should I invest in direct mail testing?
There’s a long list of reasons to make room in your political or advocacy budget for direct mail testing and other experiments. On a warm-and-fuzzy level, learning is an end worth pursuing in itself. If you’re in the business of politics and policy, you (hopefully) care about people. Deepening your understanding of how they think and why they will or won’t support your cause will make you a better advocate for the change you want to see in our world—even if you don’t like the answers you get from an experiment.

Learning for the sake of learning aside, being curious makes you a better political practitioner. How can you expect to change hearts and minds if you’re only committed to building programs informed by your preconceptions? Gathering concrete data by gauging responses to your mail and digital programs will shape your strategy in the long run and make you a more effective arbiter of what works and what doesn’t in persuading or mobilizing voters.

Direct mail testing is a great way to definitively determine the success of your program. Being able to show donors the return on their investment is invaluable in paving the way for funding for future projects. Telling a story about your program is important, even if the numbers gleaned by an experiment aren’t pretty. At the end of the day, understanding (or at least having a theory about) why something worked or failed makes for stronger future programs. 

What are my options?
There are a couple of options when it comes to setting up direct mail testing. While larger campaigns will generally have the luxury of flexibility and resources, smaller campaigns can set up effective, meaningful tests on a limited budget as well. Working with an academic to set up tests can yield more thorough results, but don’t let perfect be the enemy of good—there’s plenty you can do in-house and on a smaller scale to glean insights (even if they’re anecdotal).

  • Experiment-Informed Programs (EIPs): You can test your program among a small slice of your universe, measuring its success before rolling it out to the full audience. This option can’t be an afterthought—it requires planning and time. If Election Day is ten days away, you likely won’t be able to layer this in.
  • Control Group Experiments: You can designate a portion of your universe as a control group that does NOT receive communications. This way, you can measure the difference in responses/behaviors between those contacted by your campaign (the treatment group) and those who were not (the control group) to see if your communications moved the needle. While you need a decent sample size to generate statistically significant results (think in the realm of 40,000 households), that doesn’t mean a smaller test isn’t worth conducting. You can still learn from the results (interpreting them with a grain of salt), even if you can’t reach conventional thresholds of statistical confidence.
  • A/B Tests: Another option that can be far simpler to run in-house (particularly if you’re doing digital work, from paid ads to emails) is an A/B test. This requires randomly splitting your list of targets into two groups, so that one receives Treatment A (e.g. creative with a GOTV message tied to an environmental issue) and the other receives Treatment B (e.g. creative with a standard social pressure GOTV message). Whichever treatment performs better gives you a sense of the more effective approach to take.
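The random split an A/B test depends on can be done with a few lines of code. Here’s a minimal sketch in Python; the function name and the list of voter IDs are illustrative, and a fixed seed is used so the same split can be reproduced later when you match responses back to groups:

```python
import random

def ab_split(targets, seed=42):
    """Randomly split a target list into two equal-sized groups
    (Treatment A, Treatment B). A fixed seed keeps the split reproducible."""
    rng = random.Random(seed)
    shuffled = list(targets)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

# Example: split a hypothetical list of voter file IDs
group_a, group_b = ab_split(range(10_000))
```

The important part is that assignment is random, not based on geography, alphabetical order, or anything else that could correlate with how people respond.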

What else should I consider?
So, assuming you’re sold on testing, where should you start and what else should you consider?

  • Start early. The earlier you start thinking about where and how testing fits into your direct mail and digital programs, the more insightful your experiment will be. Make testing a part of the planning agenda from the outset, in every conversation about budgets, goals, targets, and tactics.
  • Have a clear goal. You should have a clear sense of what question you’re looking to answer with your experiment. Do you want to figure out the best way to get conservative women to change their views on abortion? Are you looking to engage communities of color in Michigan to increase turnout in a particular election? Is your goal to find the email subject line that leads to the highest number of donations to your campaign?
  • Get buy-in. Make sure your whole team is on board with the goal of your test, and get that buy-in early so you can move into the planning and execution phase.
  • Strive for actionable results. Testing lets you explore a lot of fascinating facets of policy, politics, and psychology, but make sure you’re investing in information that will be actionable for your campaign or organization. Are you seeking information that will be helpful to you in the short- or long-term? If not, is there a way to reframe your goal?
  • Overestimate your testing budget. While there are plenty of options for building effective experiments on a smaller scale, it’s always better to aim high with your testing budget than to fall short. Be aware of the sample size you need to achieve the quality of results you’re anticipating. If a survey is involved, know that this will increase your costs.
  • There’s nothing wrong with baby steps. Don’t be afraid to start small if you’re new to the world of direct mail testing. Any experiment is better than no experiment—just be realistic about the deliverables and the potential limitations of the insights you glean. 
  • Share learnings. If you’re open to it, collaborating with other organizations and campaigns can be a great way to pool resources to get to the bottom of questions you all want to answer.
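When budgeting, it helps to estimate the sample size you’d need before committing to a test. A rough sketch, using the standard two-proportion approximation at 95% confidence and 80% power (the function name and example numbers are illustrative, not a prescribed methodology):

```python
import math

def required_sample_size(p_control, p_treatment,
                         z_alpha=1.96, z_power=0.8416):
    """Approximate per-group sample size needed to detect a difference
    between two proportions (e.g. turnout rates) at 95% confidence
    (z_alpha) and 80% power (z_power)."""
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    effect = p_treatment - p_control
    return math.ceil((z_alpha + z_power) ** 2 * variance / effect ** 2)

# Detecting a 1-point turnout lift off a 50% baseline:
n = required_sample_size(0.50, 0.51)
```

At a 50% baseline, detecting a 1-point lift requires on the order of 40,000 people per group, while a 2-point lift needs roughly a quarter of that, which is why the size of the effect you expect should drive your budget conversation.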

Do you have more questions about testing? Our team is always happy to chat.