Moving Beyond Buzzwords to Proving ROI with Adobe Mix Modeler

[Nils Engel] We have an action-packed agenda for you guys today. Thank you for getting up bright and early on a Wednesday morning at Summit. I know that's kind of rough except for the East Coasters. It's a lot easier for you guys.

Okay. Our agenda for today. We're going to talk about H&R Block's Road to Acquiring and Using Adobe Mix Modeler.

So the topics we're going to cover today are, number one, how H&R Block was doing measurement before they acquired Adobe Mix Modeler. Then we'll pop into a demo that shows how we bring data into Adobe Mix Modeler. From there, Kristen will talk about convincing the organization that AMM, Adobe Mix Modeler, is a must have. Then we'll pop into a demo around how you build models and what the output of that looks like. Then Kristen will cover within H&R Block what the change management looked like to acquire and use Adobe Mix Modeler. And then we'll pop into a scenario planning demo.

Before we do that, though, let's just quickly introduce ourselves. So my name is Nils Engel. I'm a Managing Principal Solutions Consultant, which is a mouthful to say every time. I've been with Adobe for about 19 years now and focusing on measurement the whole time based in New York. [Kristen Harris] And I'm Kristen Harris. I'm a Director of Marketing Data Strategy and Optimization at H&R Block. I have been there about five and a half years now, but only the last year in marketing. Before that, I was in our Data Science and Analytics team leading Enterprise Analytics. So a data science background came in really valuable for this AMM move.

So one thing I think everyone in this room would agree with Kristen is that taxes are hard. Yes. But being that it's tax season, just know that H&R Block can help make filing easier.

But, Nils, also measuring marketing performance is really hard. Yeah. Well, Adobe Mix Modeler can help there. We can help you with your media measurement, make it easier.

Let's talk about how we can help you make a return on your data. All right. So we're going to go into H&R Block's previous methodology for measuring our marketing performance.

So we had a media partner that helped us with our MMM modeling. So that was an external third-party partner that would come in and we would run it once a year. So if you think about taxes, they're an extremely seasonal business.

Most people know tax day is April 15 and you get your W-2 sometime in January. So a majority of our business occurs from January 1 through April 15.

And because we're only running this once a year, we would then look at the performance of a full season after that season is over. And how did we do that? Because it was a third-party, we would start manually collecting all of our different data sources at the end of tax season. We'd have SFTP of spreadsheets. We'd only do it once a year so no one was ever saving their code. There'd have to be slight adjustments to these file pulls and we'd pull about 50 spreadsheets, drop them and send them over. That would take us about two months just to get the data together.

On top of that, because we're pulling spreadsheets with manually written code, we would go back and forth with our third-party partner QAing the data because a file wouldn't have been sent all the way through, or something along those lines.

And then after that, once we felt good about the data, our third-party partner would take that and build the model. It was great, and they would provide us the results via static PowerPoint decks and Excel files. So we get the MROI curves, we get the model output, we get the channel performance, and all of it is just in Excels and PowerPoints. And from there, we took it and did a crosswalk to Last Touch Attribution. I just said that we did our Media Mix Models once a year, but all of our season occurs from January to April. And so we had to create a crosswalk from something that we could measure continually in season, which for us was our Adobe Analytics Last Touch Attribution, and crosswalk that back to these MMM results to try and do some form of measurement and flighting so that we could do a plan for tax season.

Because of how critical that is to our spend plans, it would take us about two months to derive this crosswalk and how we're going to track in season.

And then finally in Excel every day throughout season, we're looking and making adjustments to our flighting, to our plans based on this crosswalk. But at the end of the day what you're seeing is Last Touch Attribution day in and day out and so we typically heavily skewed towards search because it's the data that you see.

So altogether, getting MMM to work for us took six months every year, and we'd only be running it once; we couldn't have different components of it throughout tax season. And there were a lot of accuracy limitations because things could change throughout tax season that we didn't see. It was really slow. There were lots of process inefficiencies and we missed opportunities to scale.

And also if we think about it, I just told you a lot of this is in PowerPoint and Excel. So how do you actually tell the story altogether of your media measurement? So we had these MMM results from partners that we'd go and share for the summer and do a big end of season readout in terms of performance. Internally, we were looking at LTA results, but we also view through attribution through things like Google's CM360, right? We had additional multi-touch looks that wouldn't have included our full business though, right? So that was everything through Google. We also had platform data, right? We have our walled garden data. So we are taking four or five different inputs, aggregating some of those through basic math, talking to our marketers, seeing how they felt, and across all of those, we are creating our plans, activating the data and telling the story.

So I don't think this story is too unfamiliar for a lot of places but it wasn't efficient and it took a long time.

So let's talk through what AMM has done to help us improve efficiency. So to kick it off, we did an initial test of AMM using all of those spreadsheets that we sent to our third-party partner. We just did a manual load of that to see if we could recreate the results in Adobe Mix Modeler. That's AMM, sorry.

And what we found is that we could do it. It was a really easy process. And so we had two years of data, we had taxes in '23, taxes in '24, and we felt good about the process in AMM. It was an easy lift, it wasn't huge, the model accuracy was good, we're getting similar results as we expect.

But then how do we make it so that that six-month process and those manual feeds aren't still holding us back? So the next step is automating the feeds. We have Adobe AEP, we're starting to send our media data into AEP, and those nightly performance feeds from our media data are now being pushed to Adobe Mix Modeler. So when data is being updated continually, people are looking at it, right? So the QA process is happening day in and day out because people are seeing our media numbers now. We're looking at marketing performance. And then from there, you might need a data scientist to really set up your models, make sure you feel good about it, but then we can have our marketing analysts also look at model output. There are indicators for model drift. And so instead of having our most technical resources watching this model, the UI enables us to have the marketing analysts that are working with our media partners day in and day out look and understand what's going on. We also have visualization tools, right? And so you're able to take things that were previously held in Excel and PowerPoint and distribute them via Adobe tools that people were used to using. We were a big Adobe Analytics shop at the time, and so our senior leadership was looking at Adobe Analytics reports, and now we can start feeding this data into platforms they're already going into, data that they are already used to seeing, with words that they know how to filter themselves.

Also instead of doing Excel with all these manual plans and flighting, we can do ad hoc plan creation. Right? We can run scenarios in the tool. So you're getting out of Excel, you're getting out of manual processes, and then you are able to easily track in flight performance and do a daily review.

So like I said, that was the previous methodology and this is what changes also with AMM.

Instead of someone taking in their brain all of the knowledge from all of our different sources, we're able to apply a lot more attributes to AMM and utilize transfer learning, which helps us understand those final MMM results. So one of the things we did in the past is we'd run tests, right? And we'd have lift from those tests, but we wouldn't be applying those into our MMM models. Or some external data factors, right? Taxes are highly driven by changes in regulation, and by things like non-farm employment. There are external factors that impact tax. So all of those things now are also going into the model, and we're using transfer learning instead of just relying on marketers' beliefs. We still are able to include those and make adjustments to the model, but really that transfer learning is generating something that we can believe in with our final MMM results, and then those results can be seen in a platform that our leadership feels comfortable using through CJA.

Great. So Kristen talked about her old methodology of loading in Excel spreadsheets and a bunch of files and putting it on SFTP, a very manual process. With Adobe Experience Platform, which is what Adobe Mix Modeler sits on top of, we provide a number of different ways to bring that data in a very automated way. So instead of having those manual feeds, it's just an automated process where that data just flows into the Adobe Experience Platform. Once that data is in Adobe Experience Platform, we then have the ability to what we call harmonize the data. And that's a way of coming up with a common nomenclature across all of those different data sources that you're using so that an impression from one data source represents an impression from another data source. You're joining that all together. And so what I'm going to do here is quickly just show you in the interface what this looks like. I was going to do a live demo, but we've had internet problems in the past. So I'm just going to give you screenshots. But what we're looking at here is part of the data harmonization process within Adobe Mix Modeler. The first thing you do is you define the fields of data that you're going to be working on. This is going to be the data that's going to be brought into Adobe Mix Modeler. And you're just specifying what are those fields that you're going to be building your models off of. And you map it to the data that's within Adobe Mix Modeler. You can see I have a bunch of fields named on the left here. And as I want to add new fields, I just click on the Add Field and define them. Once I've defined my fields within Adobe Mix Modeler, I then have the ability to define marketing touchpoints. These are going to be the touchpoints that are going to be used within the model. So you're pre-setting up all of the different marketing touchpoints. This might be affiliate impressions. This might be display impressions and clicks.
All of that data gets defined as part of this harmonization process. As I have a need to add new marketing touchpoints, maybe new channels, I click on the Add Marketing TouchPoint. And I can define what those marketing touchpoints are based on that list of fields, those defined fields that I set up previously. First thing I would do with a new marketing touchpoint is give it a name, define what in the data represents those marketing touchpoints, what the touchpoint volume represents, in this case it's impressions, and then what the touchpoint spend represents, if there is spend with that marketing touchpoint. In this case, it's cost. So I'm defining that based on those fields that I set up in the first step. Once I've defined my marketing touchpoints, I can then define my conversions. What are the goals that I'm trying to drive for as I'm building my model? So I can set up any number of touchpoint conversions within Adobe Mix Modeler. You can see I have a list of them here. If I want to add a new conversion goal, I can set that up. Click Add Conversion. It's a similar process as setting up a marketing touchpoint. I'm now setting up a conversion goal. In this case, I'm looking to drive point of sale orders. And I define within those fields that I defined previously what represents a point of sale order, what's the metric that represents a conversion in that data, and what's the metric that represents the revenue in that data.
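To make the harmonization idea concrete, here is a minimal Python sketch of mapping differently named source fields onto one common nomenclature, so an "impression" means the same thing from every source. The feed names, field names, and numbers are all invented for illustration; this shows the concept, not Adobe Experience Platform's actual mechanism.

```python
# Illustrative field harmonization: each media source names its columns
# differently; a per-source mapping renames them to a common schema.
FIELD_MAP = {
    "dsp_feed":    {"imps": "impressions", "media_cost": "spend"},
    "search_feed": {"impr": "impressions", "cost_usd":   "spend"},
}

def harmonize(source: str, row: dict) -> dict:
    """Rename one source row's fields to the common nomenclature."""
    mapping = FIELD_MAP[source]
    return {mapping.get(k, k): v for k, v in row.items()}

# Two rows from two differently shaped feeds, joined under one schema.
rows = [
    harmonize("dsp_feed",    {"imps": 1000, "media_cost": 12.5}),
    harmonize("search_feed", {"impr": 400,  "cost_usd":   9.0}),
]
total_impressions = sum(r["impressions"] for r in rows)  # 1400
```

Once every source speaks the same schema, touchpoints and conversions can be defined against those common field names instead of against each feed's quirks.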

Click Create. That's now been defined. So this setup process is very easy, as you can see within the interface. You're defining what's the fields of data, what are the marketing touchpoints, what are the conversions. Once that's been set up, you can go into our overview view here that allows us to understand, is the data there that I expect to be there? This is a data validation step. You can slice into any of the different data sources you have, the different channels to make sure that what you expect to be there is there. And then once you've done your data validation, you can start building models. Before we cover that, Kristen's going to talk about who needed to be convinced within H&R Block that Adobe Mix Modeler was the way to go. Yeah. So I would start with me, right? Because as a data scientist by background, there's a large part of me that's like, we'll just build our own models in Python, and it's going to be so much easier and so much better.

Having a tool to do this really reduced a lot of that lift and also a lot of your dependence on your data science team, who could then go and build additional models that have additional ROI instead of continually babysitting what we had. So once I got on board, then it's how do you convince everybody else? And I would say, one of the great things about our whole Adobe journey is that it's been a partnership between IT and marketing. Marketing never ran at this alone because they couldn't, right? So I think we've heard a few horror stories where marketing was trying to implement this very big, very powerful platform. At H&R Block, we never had that. So our IT team has partnered with us every step of the way. We work together to groom the backlog. We do prioritization together. My IT partners are here today. We wouldn't be successful without them. And so I think for us, if you don't have that IT buy-in to get these feeds running continually, have them monitored in a true ETL process, make sure that they're hitting their SLAs, you're going to be missing a big part of the value of getting to run your models more frequently because you're going to be watching all the data accuracy all the time. But we're able to do that because of our very close partnership with IT. And so that's a big one. Make sure you have your IT partners. The other one for us is executive leadership. The initial purchase for us for AEP was led by our CIO and our CMO, so they have that executive leadership. But as more senior leadership is brought in, it's about getting them to buy in to the platform, getting them to understand the value of all the Adobe products so that we can continue to get ROI, continue to grow, continue to improve personalization. And then for the AMM piece, it's like, "Hey, you made all of this investment. Let's actually measure it," right? And so AMM was really like the cherry on top to convince them.

It was never like the initial driving factor, but to actually measure marketing has been a complaint of everyone, right? Marketing is hard to measure. This is going to give us a tool to do it more frequently. For us, we work with agency partners. We are measuring their performance with marketing measurement. And so a big thing is starting to have these discussions with our agency partners. What is AMM? How is it different from our old process? What are our expectations of them for using this output? And how do we integrate it into planning and flighting? And then our media team, right? We want to make sure that they feel comfortable with the new measurement process, that they are going into the tool and they feel good about reporting too. So for us, it's these four pillars are what are going to make AMM a success for us internally at H&R Block.

And another reason why this wasn't as hard to sell as you might think is because at H&R Block we're quickly losing signals for our old measurement framework. Right, so with cookie deprecation I said we'd used CM360 right there. We used to have a huge media tagging framework and we still have that across some of those channels, but at the same time, data privacy is critical, right? We want to share less data, we want to send less data out, we want to keep more things internal. And so being able to do full measurement end-to-end in house is also going to be a really big benefit for us. And not having to send any hashed email or anything like that for the measurement components.

And so, yeah, right now, and I think one of the big things about what everyone's experiencing with cookie deprecation, on average, only 35% of media spend can be measured in MTA. So we're just losing a lot of signals, and we want to solve for that. AMM gave us an option for how to do that.

And finally, I think people were really tired of hearing the same story out of marketing.

We talk so much in season about Last Touch Attribution, like how paid search is performing, and then all of a sudden after we run AMM, we're giving them these huge channel summaries that say, "Hey, we probably spent a little too much in paid search." And so hearing all of that over and over again, it became like a cycle at Block where we talked all about LTA in season, then you get this big AMM readout to summarize the season.

It was just too fragmented. And it was hard for people to believe because it felt like the story was changing, because if you don't understand the marketing measurement framework, your story really is changing.

So what MMM is allowing or what AMM is allowing us to do is in season tell the same story that we're going to give at the end of season with our summary views. So it's going to be building upon a story that we're telling them in January, February, March, April, with a final review in May as opposed to feeling like we're pulling the rug out from under them in terms of channel performance and switching things up. And so having that MTA with the MMM running them more continually is giving us a leg up in the storytelling and also improving our ability to optimize across channel in season.

So Kristen just talked a bunch about the methodology that Adobe Mix Modeler uses and the different types of models that are being used, the MTA, the MMM. Let's pop into AMM, Adobe Mix Modeler, and check out how we actually build a model within the interface. So for starters, we talked previously about how we get the data into Adobe Mix Modeler. Once that data is in there, we can start building these models. And one thing that's really powerful that Kristen just talked about is that they were working off of one model for a season, right? What we're talking about here within Adobe Mix Modeler is the ability to, in a very ad hoc way, build any number of models that you want to based on different conversion goals, based on different marketing spends, different inputs into the models, until you get your model looking the way you want to. And so on the left hand rail is an example of a bunch of models that have been built. The ability to build models is as simple as clicking on Open Model Canvas within the interface. From there, you would define a name for your model. And then you start defining the inputs that will be used for that model. And this all comes back to that data harmonization that we did previously, where we defined what a marketing touchpoint is. We defined what a conversion is based on that harmonized data. And so the first thing we're going to do here is we're going to define what represents a conversion for this model. What are we trying to model against, right? What types of conversions? In this case, I'm going to choose total orders. But I can pick from any of those conversions that I set up as part of that harmonization process. Once I've defined that, I can then define which marketing touchpoints I want to include to give credit for those conversions. I've selected a bunch of them at the bottom here. But these are all of the marketing touchpoints that I defined, again, as part of that harmonization, that initial step.

Once I've defined those, I can then also define what I want to represent as my eligible data population. So do I want to run a model based on all of the data? Or do I only want to do it based on maybe a specific region, like North America? Or maybe I want to do it based on a specific brand? Any of the data points that have been brought in as part of that data harmonization process can be used to filter down what data is going to be included in the model. Once you've defined that, you can then define what factors you want to include in the model. And we allow you to bring in both external factors as well as internal factors. External factors would be things like the S&P 500, the consumer price index, employment rates, right? All of that stuff can be considered external factors. Internal factors would be things like employee headcount. Maybe you opened up a new branch. Could also be a promotion that you're having. All of that can be included within these internal and external factors. Once those have been defined, you can then also define whether or not you want the model to use spend share. Spend share is great for when you have sparsity in your data. So H&R Block is a great example where they have huge spikes and then no activity. So spend share can help with managing those gaps in the data.

We can also turn on multi-touch attribution. So the models within Adobe Mix Modeler can be purely MMM if we want, or we can include MTA, multi-touch attribution, as well. If you have that event-level data that says this person interacted with this campaign and then converted, we can bring that in and offer an MTA piece of it, which all ties back to that transfer learning capability that Kristen talked about earlier, where all these inputs that we're putting into the model are brought into the model. And then through the transfer learning, we're determining what inputs have the most impact on the model. Basically, it's making the modeling smarter. Another input is prior knowledge. So anything that you have that is prior knowledge. So you may have done a lift test in a particular geo, and you may have some results from that. You may just have strong opinions from people within the organization. You can apply them as prior knowledge within here. And again, those are a piece of the data points that are put into the model that transfer learning will help to apply into the model and be used as part of the output.

Once you've defined all of those inputs, you can specify whether or not you want this to be a one-time run of your model. And then you can go and rerun it on any cadence. Or you can specify a schedule, all right? So do I want to schedule the training of the model and the scoring of the model? And if I do, what cadence do I want that to occur on? Lastly, our best practice is typically two years of historical data. Some clients have more. Some clients have less. So you can define what your historical data is that's going to be used to train the model on.

Once we've done that, we can run the model. The model takes a couple hours to run. And then you get an output of that data. The output looks something like this. So this is what we call our model insights view. And this allows us to start understanding the performance of the marketing that you included in that model. So if we start in the left hand rail, we can start understanding the contribution of your marketing above base. So baseline is basically what would have occurred if we turned all marketing off, right? So what conversions would we expect? And that's represented by that light blue bar within there. Above that is non-spend marketing. So what's the impact of my non-spend marketing? And then lastly, my spend marketing. So I can see the influence of those different types of marketing above base. It also shows you where there might be spikes in your data. I'm not sure if this is what your data would look like, Kristen, but-- Yes. I mean, if we were in the middle of tax season, we might have been able to show some H&R Block data. But yes, this is all fake.
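As a toy illustration of that contribution decomposition, total conversions can be split into a baseline (what would happen with marketing off), a non-spend marketing contribution, and a spend marketing contribution. The numbers below are invented, and this simple additive split is just a conceptual sketch, not Adobe's actual model math.

```python
# Toy decomposition: total conversions = base + non-spend + spend marketing.
daily = [
    {"base": 500, "non_spend": 120, "spend": 380},
    {"base": 520, "non_spend": 110, "spend": 640},  # an in-season spike
]
for d in daily:
    d["total"] = d["base"] + d["non_spend"] + d["spend"]
    # Share of conversions attributable to marketing, above base.
    d["incremental_pct"] = round(100 * (d["total"] - d["base"]) / d["total"], 1)
```

Reading the decomposition this way is what lets you say "marketing drove X% of conversions above base" for any day or time range you pick.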

Great. And so we also have a view into contribution by channel. So you can understand, for a particular time range that you're looking at, what's the incremental lift that you're getting based on each marketing channel that's being included in the model here. I can see what base is influencing. And then I can understand everything beyond that in that single view. I can also see my return on investment by marketing channel. So I can actually see which channels are driving the highest return on investment. For example, within here, I can see that Brand Bing is driving the highest return on investment, which then brings over the marginal response curve. To me, I think of this as the point of or the law of diminishing returns, right? Where can I spend to until I start to lose money, right? And so this marginal response curve shows you that. And then that star within there represents the marginal breakeven point, where for every dollar you're spending, you're starting to gain less than $1 back from that. And so that gives you that sweet spot from a marketing spend perspective. I can see that for my overall paid media, so all of my spend. But I can also drill into whatever marketing channel I want. So for example, if I were to look at the left rail again, Brand Bing is killing it, right? But I might drill into Brand Bing over here. And it might show that I can only spend a certain amount before I'm going to start losing on it, right? So a really nice way of being able to start understanding where your sweet spot is from a spend perspective.
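The marginal breakeven idea can be sketched with a simple diminishing-returns curve. This assumes an exponential-saturation response, which is only an illustrative stand-in for whatever curve the model actually fits; the ceiling A and rate k are made-up parameters. The breakeven spend is where one more dollar returns exactly one dollar.

```python
import math

# Hypothetical diminishing-returns curve: revenue(s) = A * (1 - exp(-k * s)).
# Its derivative, A * k * exp(-k * s), is the marginal return per dollar.
A, k = 500_000.0, 1e-5  # invented saturation ceiling and decay rate

def marginal_return(spend: float) -> float:
    """Revenue gained by the next dollar of spend at this spend level."""
    return A * k * math.exp(-k * spend)

# Breakeven: solve marginal_return(s) == 1  =>  s* = ln(A * k) / k.
breakeven = math.log(A * k) / k
assert abs(marginal_return(breakeven) - 1.0) < 1e-9
```

Below `breakeven` every extra dollar returns more than a dollar; above it, less. That star on the curve in the interface marks the same kind of point.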

We also have our multi-touch attribution view as well. And so this is giving us a view, if you've turned on the multi-touch attribution piece, a view into the incremental impact of the marketing that you're applying. And so we have a number of different models that we support within multi-touch attribution. We have a number of weighted models. But the model that most clients are interested in is our algorithmic incremental view. This is going to show you the incremental lift that you're getting based on those marketing channels that you're spending in. I can see within this view here how many total conversions I have and then how many incremental conversions I had as a result of my marketing. I can see that trended over time. Total conversions, incremental conversions trended over time. I can break that down by channel to start understanding, again, what are those incremental lifts by channel within there? And this is where I can start getting a lot more granular, where I can drill right into the different campaigns within here as well. So I can see at a campaign level which ones are performing best. And that's really powerful when the model may be telling you to spend more on email: well, which specific email campaigns are performing? I can see that very easily within this view.

We also have a view into like a starter, player, closer view. This is allowing us to understand where in a journey a marketing channel is performing best. Is it best from an acquisition perspective? Or is it best from a closing perspective? And I can see that very easily within here at a channel level. I can also see my top converting paths. And our clients typically use this to understand what's that next best marketing touch to hit somebody with based on where they are in the journey.

Another powerful input-- Sorry, output of the model is our factors view. So those internal and external factors that you may have included in the model, we can now start understanding the influence that they have on driving a base or driving conversions. And that's powerful in being able to now start understanding, "Hey, as the consumer price index goes up," for example, "I start to see an increase in my conversions based on this particular model here," right? But it allows you to now start understanding, as these things happen, how can we adjust for them? Now if you provide these types of results to anyone in your organization, anyone in the data science organization, they're going to want to make sure that the model is valid. And so we have a Diagnostics tab within here as well that allows you to start understanding, what did the model predict would happen? And then how did it compare to what actually happened? So how close is that model to actually predicting reality? As part of that, we see it trended, but we also provide a number of scores back. R-squared, MAPE, RMSE, these are things that are going to be important to your data scientists within the organization.
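For readers who want the definitions behind those scores, here is a minimal sketch computing R-squared, MAPE, and RMSE from predicted versus actual conversions. The numbers are toy values, not H&R Block data.

```python
# Standard goodness-of-fit metrics for predicted vs. actual values.
actual    = [100.0, 120.0, 90.0, 110.0]
predicted = [ 98.0, 125.0, 88.0, 112.0]

n = len(actual)
mean_a = sum(actual) / n
ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))  # residual SS
ss_tot = sum((a - mean_a) ** 2 for a in actual)                # total SS

r_squared = 1 - ss_res / ss_tot                                     # fit quality
mape = 100 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / n  # % error
rmse = (ss_res / n) ** 0.5                                          # typical error size
```

Roughly: R-squared near 1 means the model explains most of the variation, MAPE says how far off it is in percentage terms, and RMSE says how far off it is in the units of the conversions themselves.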

Lastly, as you build your models, over time, they can start to drift. So basically, what's being predicted isn't quite matching up with what's actually happening. At that point, it's time to rescore. And so we just actually announced at Summit this year that we have a drift detection report within here. So if we start to see a gap between actuals and predicted, we'll show you an alert that basically says, "Hey, it's time to retrain the model because you're starting to drift on the model outputs." And that gives you a nice view into the error, hey-- Not the error, but the message as well as where that drift is being detected and how far that drift is against what's expected.
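Conceptually, a drift check like the one described can be sketched as comparing recent actuals against what the model predicted and flagging retraining when the gap grows. The MAPE-based check and the 15% threshold below are assumptions for illustration, not Adobe Mix Modeler's actual detection logic.

```python
# Hedged sketch of drift detection: alert when recent prediction error
# (mean absolute percentage error) exceeds a chosen threshold.
def drift_alert(actual, predicted, threshold_pct=15.0):
    """Return True when MAPE over the window exceeds the threshold."""
    errors = [abs(a - p) / a * 100 for a, p in zip(actual, predicted)]
    mape = sum(errors) / len(errors)
    return mape > threshold_pct

assert drift_alert([100, 100, 100], [98, 101, 99]) is False   # tracking well
assert drift_alert([100, 100, 100], [70, 130, 60]) is True    # time to retrain
```

The point is the workflow, not the formula: predictions and actuals are compared continually, and a crossing of some error threshold is what triggers the "retrain the model" message.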

So based on all of this, I'd say, Kristen, this is probably close to as easy as filing your own taxes with H&R Block, huh? I mean, April 15 is coming up, everyone.

Let's talk about how do you actually make this transformation, right? So we have our nice green block. It's working. It's efficient. It's just not where we need to be. This new shiny option of beautiful visualizations, of retraining the model, like everything we just walked through. I think when you see a demo like this you're like, obviously, yes. But how do you actually make that change management? And for us, a big thing that we're having to do with the move to AMM is tell everyone that all the numbers they've been seeing for the past five years are changing. We've really built this narrative around our marketing story. We've used LTA. We've been teaching them how to look at CM360 data. We're going to restate all history. So we're basically telling them, "I'm changing your measurement framework. The numbers you're used to seeing are not going to be the numbers that you're seeing. We're going to restate history and then we're going to start measuring the performance of restated history." Which is a scary process to take people through, but a necessary one. And for us, it's communicating that early: do you want better marketing measurement? Then we can't keep doing the same thing that we used to do.

We'll still have views into LTA, but our goal is for people to look at the new AMM results compared to prior year, because we're going to have prior year, and then really understand and start judging performance. And so getting that message out, "Hey, you're going to think these numbers are wrong because they're going to look a lot different than what you're used to seeing," is going to be a big piece of this change management. And another piece is making sure that people understand, in the tool, you can go in and start seeing these channels for yourself, start understanding what is modeled measurement, right? If you're used to seeing LTA, how all of a sudden do we have all this additional stuff? So really teaching people why modeling is really necessary nowadays for marketing. How you're going to get better measurement through doing a data science model.

And so those are some of the components we're starting to talk through. I would also say that, at least at H&R Block, there's a wide variety of marketing knowledge. Some people understand this totally; other people ask, "What is a media mix model?" So getting that conversation across to everyone who cares about how marketing is performing (because marketing spends plenty of money), to a place where we feel good about this before next tax season, is our biggest journey this summer. How do we teach people these metrics? How do we teach people how to use them and how to see them? And how do we get them comfortable? Another big thing is we want people seeing marketing performance. Historically, marketing has sat on its own a little bit in the reporting space. We were not in some of the same enterprise reports as other teams because of all the nuances in our measurement. But now we have Customer Journey Analytics, where we can present a lot of this data, and what we want to do is add transparency to marketing. We want to start showing these numbers. For us, that's another big piece of the change: showing that we're being transparent with our performance just like everybody else at the company. If you have ideas, we understand, but this is how we measure, and this is how you see it. You too can understand and see marketing performance, and we're being as transparent as possible. So the day-in, day-out exposure of our marketing performance has been another big piece of change management for us.

Great. Thank you, Kristen. So in my last demo, I showed you how to build models. Now suppose we've found a model that makes sense: we like it, we like its results, we like its diagnostics. The next thing we want to do is build a scenario plan. Basically, we have a certain amount of money to spend. How should we allocate it across those different budgets, and what kind of return on investment can we expect? It's the same type of interface for building plans within Adobe Mix Modeler: you can build as many plans as you want, on the fly. You can see I have a bunch of them on the left-hand rail. Creating a new plan is as easy as clicking Create Plan. From there, we define a name for that plan and specify which model we want to build the plan off of. So if you found a model you like, you would pick that model.

And then what's the budget? Your budget could be an overall budget for a particular time period, or you might have specific flights you want to add within it. You might say, "Hey, I have three different flights within my marketing initiative, and I have different spend amounts for each one. So I can define the time periods for those flights as well as the spend amount for each one." I'll get a view of what my spend looks like across those time periods. When I click Next, it's going to start building the scenario plan for me. But it asks me a question first: do you want to build a plan based purely on the data you've provided, or do you have some inputs, basically some constraints? You might have a commitment to a particular channel where you have to spend a certain amount of money, and the plan needs to know that, right? If I select that option, it then lets me break down, by flight, what my commitments are by channel. So within a particular flight, I can specify my commitments for certain channels. For example, in this case I have affiliates, where I have to spend at least $300,000 and have a max of $1.5 million. Now, I might set a lower max; I can adjust those on the fly. But these all become inputs into the plan. Once I've defined that, I can build the plan. From there, I get a plan result, which defines how much I should be spending in each of those different marketing channels. As part of the output, you can compare plans against each other. So for example, on the left-hand rail, I have a plan where I'm spending $486 million, and on the right-hand rail, a plan where I'm spending $442 million.
From here, I can see a very quick comparison to start understanding the differences between those plans. I can very easily see that I'm spending less with the plan on the right than the plan on the left; the green shows me I'm spending close to $40 million more on the plan on the left. I can see, for each plan, what the breakdown of spend should look like across the different channels, and what the forecasted return on investment looks like. So although I'm spending more on the plan on the left, you can see I actually get a higher ROI from the plan on the right. And I can see that very easily within here. If I scroll down, I can see the forecasted revenue for each plan. Now I can see the plan on the left is actually giving me a higher total forecasted revenue. So this is where there's a balancing act. What's the goal of the organization? Are you trying to be as efficient with your spending as possible? If so, you might go with the plan on the right. If you're trying to drive a specific revenue goal, you might be okay with a lower return on investment as long as you're hitting that goal, so you'd go with the plan on the left. These are the types of decisions you'd make off of that. Now, once you have a plan in flight, we also provide a plan performance view, so you can start understanding the performance of that plan in flight. You can check it at any time. For example, in this case, I can see I've spent $400 million from a budgeting perspective, but I'm actually under-pacing: I've spent less than the plan told me I should be spending, and as a result, my revenue is lower than the plan had forecasted. I can adjust that spend and try to get back up to what the plan recommended, to reach the goal the plan was shooting for.
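To make the constrained-plan idea concrete, here is a minimal, purely illustrative sketch of budget allocation under per-channel commitments. This is not Adobe Mix Modeler's actual optimizer: the channel names, response coefficients, and square-root diminishing-returns curve are all assumptions for illustration (only the $300,000 minimum / $1.5 million maximum on affiliates echoes the demo). Each channel starts at its committed minimum, and the remaining budget is assigned greedily to whichever channel offers the highest marginal return, respecting each channel's cap.

```python
import math

def allocate_budget(total_budget, channels, step=10_000):
    """channels: {name: {"min": float, "max": float, "coef": float}}"""
    # Start every channel at its committed minimum spend.
    alloc = {name: c["min"] for name, c in channels.items()}
    remaining = total_budget - sum(alloc.values())
    if remaining < 0:
        raise ValueError("channel commitments exceed the total budget")

    def marginal_return(name):
        c, spend = channels[name], alloc[name]
        if spend + step > c["max"]:
            return float("-inf")  # channel is at its cap
        # Incremental revenue of the next step under revenue = coef * sqrt(spend).
        return c["coef"] * (math.sqrt(spend + step) - math.sqrt(spend))

    while remaining >= step:
        best = max(channels, key=marginal_return)
        if marginal_return(best) == float("-inf"):
            break  # every channel is capped; leftover budget stays unallocated
        alloc[best] += step
        remaining -= step
    return alloc

# Hypothetical channels; the affiliates min/max mirrors the demo's commitment,
# everything else (channels, coefficients, total budget) is made up.
channels = {
    "affiliates":  {"min": 300_000, "max": 1_500_000,  "coef": 80.0},
    "paid_search": {"min": 0,       "max": 5_000_000,  "coef": 120.0},
    "tv":          {"min": 0,       "max": 10_000_000, "coef": 60.0},
}
plan = allocate_budget(3_000_000, channels)
```

The point of the sketch is the shape of the inputs, not the algorithm: a total budget, plus per-channel minimum commitments and maximum caps, is exactly what the plan-builder asks for before it optimizes.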

So hopefully, what you're seeing here is the ease of getting data into Adobe Mix Modeler, the ability to build models on the fly as you need to, and then being able to build these plans and measure their performance in flight. Yeah. And at H&R Block, we'll occasionally have requests around incremental marketing asks, right? So we want a little bit more money for marketing spend. Right now, figuring out what that incremental increase in spend would drive (in our case, returns) is a very long, very manual process. So being able to put that increase into these plans and flight them out through the tool saves a lot of time for us.
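The incremental-ask question reduces to reading a response curve at two points. Here is a hypothetical back-of-the-envelope version, again assuming a square-root diminishing-returns curve (revenue = coef * sqrt(spend)); none of these numbers or the curve come from the AMM product.

```python
import math

def incremental_revenue(coef, current_spend, extra_spend):
    """Forecasted extra revenue from adding extra_spend on top of current_spend,
    under the assumed response curve revenue = coef * sqrt(spend)."""
    return coef * (math.sqrt(current_spend + extra_spend) - math.sqrt(current_spend))

# E.g. a channel currently at $1M spend, being asked about $440k more:
lift = incremental_revenue(coef=100.0, current_spend=1_000_000, extra_spend=440_000)
# The same $440k added to an already-large budget would yield less lift,
# which is what makes diminishing returns central to these decisions.
```

Doing this manually per channel per scenario is exactly the slow process described above; a scenario planner just evaluates fitted curves like this across all channels at once.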

So now that we've done this, there are just a few lookouts I want to share. Learn from me. These are a couple of things I wanted to leave you with as you consider setting up your own AMM instance and kicking this off.

So one is get your agency partners bought in. Talk to them early, talk to them often. They often help you run your marketing. If you don't have an agency, you can obviously skip this point. But for us, that meant being transparent with them: telling them where we want to go, how we want to measure, and getting their buy-in to make the changes the model recommends. Because what you don't want is to be rolling out a whole new measurement framework to your enterprise while fighting with your agency about what those numbers say. So you want that buy-in early.

The next one for us is get executive support. When you're changing something this large in your organization, make sure your executives understand what's happening and that they're supportive of those changes. For us, the big two were IT and marketing. I would say it's really hard to run AEP by yourself, so make sure you have IT to support you, and make sure your marketing executive can speak to the changes and the benefits of these new tools.

And finally, make sure you have the money for it. It's not a free platform. There are resources needed to stand it up and resources to make sure you're telling the right story, so to be successful, plan to have a budget. Don't plan to do it for free or with your current head count. There is a learning curve getting into the tool; you have to learn it. I mean, Nils makes it look super easy, and it is once you're in it a few times, but getting it stood up with the right partner, and having a budget so you can move quickly and show value, is something I'd emphasize. That way you don't have to go back every year, right? Have a big plan. Get it budgeted.

And guys, it's March 19. Do you know how many days that means until your taxes are due? All right, just making sure. So doing your taxes is obviously better at H&R Block than anywhere else you could file. But I would also say AMM makes marketing measurement and planning better than most of the other tools we've looked at and seen. Maybe I'll say that for you guys too. - Right. - Yeah. That's the goal. Great. That concludes our session. Hopefully, you saw value in this.

And if you liked what you saw here, we have four more sessions today on Adobe Mix Modeler that might be of interest to you. We're going to go really deep into how Adobe uses Adobe Mix Modeler and how we actually built the product based on Adobe's own marketing. That's the one from 2:30 to 3:30: Marketing Mix Modeling at Adobe, Learn to Predict the Future Like We Did. So if you want to go deep, that's where to go.

Thanks, everyone. We're here to answer questions if anybody has any. - All right. - Yeah. Yeah.

Thank you. [Music]

In-Person On-Demand Session

Moving Beyond Buzzwords to Proving ROI with Adobe Mix Modeler - S413

Speakers

  • Nils Engel, Managing Principal, Expert Solution Consultant, Adobe
  • Kristen Harris, Director of Marketing Data Strategy and Optimization, H&R Block


About the Session

Now more than ever, brands must show how their marketing operations are profit centers rather than cost centers. However, proving ROI for marketing initiatives is challenging due to issues like signal loss, data fragmentation, and delays in accessing results. To address the pressure to show business value, brands need a modern approach to mixed granularity modeling and attribution. Enter Adobe Mix Modeler! Mix Modeler helps brands strategically plan their marketing mix, optimize campaign spending in real time, and make data-driven recommendations with confidence — all thanks to AI-powered technology that returns accurate results faster.

Key takeaways:

  • Learn how to leverage mixed granularity data to enhance unified MMM and MTA models unique to Adobe
  • Discover how the Mix Modeler AI/ML technology accelerates time to value and empowers brands to plan more strategically for future campaigns using model outputs and business insights

Industry: Advertising/Publishing, Automotive, Financial Services, Media, Entertainment, and Communications, Retail, Telecommunications, Travel, Hospitality, and Dining

Technical Level: General Audience

Track: Customer Acquisition

Presentation Style: Tips and Tricks

Audience: Advertiser, Digital Marketer, Marketing Executive, Data Scientist, Marketing Analyst, Business Decision Maker

This content is copyrighted by Adobe Inc. Any recording and posting of this content is strictly prohibited.
