Develop a testing strategy that supports your business goals
When I work with our marketing clients as an Adobe Professional Services consultant, I find they are often excited that so many Adobe applications make it easy to execute A/B testing and even multivariate testing. But they often ask for guidance on what they should test, or, more importantly, why they should test. What I tell them is to align testing with their business initiatives so that they're testing the things that will most effectively move the needle in measurable ways.
I first advise our Adobe clients to tackle testing strategy by progressing from goal > problem > cause > solution > tactic. MECLABS Researcher Lauren Pitchford said it best when she said, “We don’t test ideas. We test solutions to problems.”
1. Start with a goal, not a tactic
It may sound obvious, but the first step in knowing what to test is to understand what problem you're trying to solve and what data you need to gather about it. Don't jump straight to "I need to A/B test this form." Instead, think in terms of the business objective you're pursuing. For instance, it could be growing the number of new leads or improving engagement with existing customers. Those objectives present different problems to solve, and that means they'll test very differently. When I work with Adobe clients on testing, we spend time talking through their KPIs to give us starting points for testing.
2. Map out one problem and all of its possible causes
Before you can develop a successful test solution, you need to understand the problem. What is getting between you and your goal? There may be several possible problems, so focus on one at a time; the others give you additional testing paths to follow in the future. For example, if a shortfall in new leads is the problem you're trying to solve, the "free trial" form on your web page might be just one of the culprits. Another might be the online advertising driving traffic to that form, or it might be an offline tactic that isn't about forms at all. Those are very different causes to pursue, and each one may have multiple testable solutions.
3. Brainstorm multiple solutions per cause
Again, let's take a look at your free trial form. Say you've studied your web data and learned that only 30% of the people who come to your website ever reach the page the form is on, and only 50% of those who do fill it out. That suggests two different causes of the problem: visitors can't find the form, or they aren't compelled to fill it out. Let's put a pin in the "can't find it" cause (that's another path we can pursue later) and focus on the lack of form fills. There might be multiple issues here: too many fields on the form, visitors not understanding that the trial is free, or a page layout that keeps them from noticing the form. Each may require a tactically different test to find the best solution.
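To see why it pays to name each cause separately, it helps to put the funnel into numbers. Here is a back-of-envelope sketch using the rates from the example above (the visitor count and variable names are hypothetical, purely for illustration):

```python
# Back-of-envelope funnel for the free-trial form example.
# The 30% and 50% rates come from the scenario above; the
# 10,000-visitor volume is an assumed figure for illustration.

site_visitors = 10_000      # assumed traffic for the period
reach_rate = 0.30           # 30% reach the page the form is on
fill_rate = 0.50            # 50% of those fill out the form

on_form_page = site_visitors * reach_rate   # 3,000 ever see the form
form_fills = on_form_page * fill_rate       # 1,500 sign-ups today

# The two causes are two different levers. A 10-point lift in either
# rate translates into a concrete number of extra sign-ups to weigh.
fills_if_easier_to_find = site_visitors * (reach_rate + 0.10) * fill_rate
fills_if_more_compelling = on_form_page * (fill_rate + 0.10)

print(form_fills)                # 1500.0
print(fills_if_easier_to_find)   # 2000.0
print(fills_if_more_compelling)  # 1800.0
```

Putting even rough numbers on each cause like this helps you prioritize which testing path to pursue first.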
4. Once you’ve got solutions, you can think tactically
Pick one of the testable solutions to start. In this instance, we've decided to tackle the fact that it isn't obvious the trial is actually free when a visitor is presented with the free trial form. Even here, there are multiple solutions to test. Do you need to change the form itself to make it more obvious that the trial is free, or do you need to change the content around the form? In this case, we'll select making changes to the form. Tactically, you now have (at least) two possible tests: change the button text to "Register for your free trial" (and you'll probably want to test multiple variations of that button text), or add the words "free trial" to the title of the form itself.
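Whichever tactic you choose, the mechanics of judging the test are the same: compare conversion rates between variants and check that the difference is bigger than chance alone would explain. Tools like Adobe Target handle this for you, but as a minimal sketch, a standard two-proportion z-test looks like this (the conversion counts and the function name are hypothetical):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF, via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: original button vs. "Register for your free trial"
z, p = two_proportion_z(conv_a=150, n_a=1000,   # control: 15.0% fill rate
                        conv_b=180, n_b=1000)   # variant: 18.0% fill rate
```

With these made-up numbers, p comes out around 0.07: suggestive, but not significant at the conventional 0.05 level, so you would keep the test running before declaring a winner.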
The result is a robust testing path targeting a very specific business goal. Notice how even the problems, causes, and solutions we brainstormed but didn’t immediately pursue can give us additional paths to follow up on later.
5. Not every goal has to be lofty
In fact, if you’re just beginning an organizational testing strategy, your early goals should logically focus on reaching and engaging your audience before converting and closing them. I compare it to an onion. Your outside layers may be focused on inbox placement or web page visits, and then, as you’re able to successfully reach your audiences, you begin to peel away the next layers — what engages them, what converts them, and then what retains them. (And then we can turn around and start testing earlier tactics again because what proves effective can change over time.)
6. Focus on the right metrics
If you're testing engagement with a call to action in an email, the open rate is not the most important metric; click-through is. If you're testing conversion, you're interested in whether the form was filled out, and probably much less interested in time spent on the page. Think through what you're testing. If you're running a subject line test for an email, it isn't fair to make the subject line carry the weight of click-through. The subject line's primary job is to get the recipient to look at the email, so it drives open rates. The subject line may be great even when the email body isn't compelling. Keep your metrics tightly focused on just what you're testing, and don't make any one element carry more weight than it should.
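As a concrete illustration (with hypothetical numbers), these three common email metrics answer three different questions, and only one of them is the right scorecard for any given test:

```python
# Hypothetical results from one email send, for illustration only.
delivered = 20_000
opens = 4_400
clicks = 660

# What a subject-line test should be judged on: did people open?
open_rate = opens / delivered            # 0.22

# Overall campaign performance, blending subject line and body.
click_through_rate = clicks / delivered  # 0.033

# What a body or call-to-action test should be judged on: of those
# who opened, did they click?
click_to_open_rate = clicks / opens      # 0.15
```

A strong subject line over a weak body shows up as a high open rate paired with a low click-to-open rate; judging each element by its own metric tells you which one to test next.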
7. Don’t stop at testing — govern and enable
I've helped clients create governance for strategizing and prioritizing testing paths on a regular basis, templates for centralized documentation of test results, and processes for communicating those results throughout the organization. Make sure your marketers know where to find the latest test results and can apply proven findings to their next campaign. For example, if someone has already proven that event invitation subject lines are most effective when they include the event's location and date, the next marketer is one step ahead simply by incorporating that learning into their email. Don't let your testing results become a data warehouse that nobody visits.
8. Be a testing evangelist in your organization
Don't let your testing just be a "thing you do" that nobody knows about. Sometimes stakeholders don't want to take the time to test and want to jump straight to results (or want to try something you've already proven through testing will not achieve the desired result). Be a proactive testing evangelist, using the data you're gathering and the results you've achieved to show the value and importance of testing. For instance, if a rigorous series of tests on subject lines, send times, and from names has increased the inbox placement of your emails by an average of 18%, which, let's say, means an additional 26,000 eyeballs per send, that's news you want people to know about. Testing results don't have to be interesting "just" for the marketers. Your executives appreciate that increased brand exposure, engagement, conversion, and retention translate to increased revenue. Tell the story of how the testing you're doing relates directly to those results. In Adobe Professional Services, we've often helped our clients develop communication strategies for sharing the results of their efforts internally, whether for a testing strategy or other initiatives.
Ultimately, while Adobe’s applications make doing the testing easy, the biggest effort in effective marketing testing happens outside the software. Testing in a vacuum or testing just to test isn’t an effective way to drive actual business results. By establishing a well-governed, strategy-driven process for testing, leveraging the results to improve, and sharing those results within the organization, you can make an impact and know that you’re making data-driven decisions that drive success.
Looking to decide on the right testing strategy? Set up time with an Adobe expert to establish your strategy-driven testing process. For more, explore how Adobe Professional Services can help.