How T-Mobile Personalizes Customer Experiences with AI Decisioning

[Music] [Anya Edstrom] Thanks for letting us be the thing that stands between you and your drink. We appreciate the time. We appreciate you being here, and we're excited to talk to you. [Deepti Anthony] Awesome. So first, we will flash the roadmap disclosure statement, as we always do with product talks. This is not a guarantee of any future feature or functionality, but in the spirit of transparency, we want to be able to show you what we're working on and what's coming.

So welcome, everyone. We're so excited to talk to you about Decisioning: what it is, why it matters, how T-Mobile is making it real, and some stuff we're working on as well. I'm Deepti Anthony. I'm a Senior Product Manager focused on AJO, specifically focused on Decisioning. [Anya Edstrom] And I'm Anya Edstrom. I'm at T-Mobile. I lead the digital store. I do product strategy as well as business management, all things digital. [Deepti Anthony] Awesome. So, looking at our agenda, we first want to talk about T-Mobile's Digital Vision, then go into Decisioning, an overview of what it is and why it matters, and then talk about some key focus areas: AI ranking, Experimentation, and Decisioning Expansion, as well as what's next.

And then I'll hand it over to Anya to talk about T-Mobile and how we got here. [Anya Edstrom] Yeah. So you will hear the mantra, from every employee to the CEO, that T-Mobile is a data-informed, AI-enabled, and digital-first company. We aren't just a telco. We are connecting people with their world, and we're doing so in a hyper-personalized way with smarter recommendations. At least we're aspiring to do our very best, and we're still on that journey. We're customizing content in near real time with hyper-personalization. We're driving efficiency and innovation, using automation to redirect that energy to higher strategic projects, so that when we reduce time to market or become more operationally efficient, we can spend that capacity elsewhere. That reduces our need for more headcount, but we can still do a lot more. We're also optimizing the customer journey. Every touchpoint matters. So we want to make sure that no matter where you are in the journey with us, we're identifying those friction points and addressing them along the way. And lastly, we want to reduce the cognitive load for customers. We want smarter recommendations, so they're not having to hunt for what they want from us; we're actually offering something to them before they even know they need it.

And what I love about this presentation is that it's really the intersection of where T-Mobile and Adobe are making it real. And T-Life, which is our latest app, we released it last year around February, so it just had its anniversary. Very exciting, because what we did is take 20 different apps and consolidate them into one experience. The challenge therein is that that's a lot of content to try to fit in one place, especially a place like the homepage, which I'm going to get into in a bit. But it's really designed to keep that fluid experience with the customer, no matter whether they're starting their journey online or going into a store. And in fact, one of the things that I'm most excited about: we just launched a new sales motion in stores as of February, where if you're a customer looking to upgrade, you walk into that store and the retail rep actually has you pull out your phone, open the T-Life app (or download it if you haven't already), log in, and walks you through the customer experience of actually doing an upgrade online. This is a brand-new experience for our customers. It's a brand-new sales motion for our in-store reps to navigate, but we're really excited, because it truly embodies our digital-first vision, and it's our first step toward taking at least that one customer journey and making it a very digital-enabled experience. It's the link between our sales and our marketing channels.

And now I'm going to pass it to Deepti to give us an overview of Decisioning. [Deepti Anthony] Awesome. Thanks, Anya. So what is Decisioning? It's choosing from an inventory of content to present to an end user to optimize a goal. For example: what's the best phone offer for my end user? It can provide so many benefits, namely unlocking personalization. You can tailor your offer content to each individual user. You can drive efficiency at scale: you can automate the process of selection, saving time and resources for businesses. And you can drive market growth: by leveraging Decisioning effectively, you can increase brand loyalty, which in turn can increase customer LTV.

So I want to take a step back and talk about the actual process of Decisioning. There are two key components: filtering and ranking. But first, let's just set the context. So let's say that there is a user named Mark Aaron, and we want to personalize his experience by choosing three offers to put onto his home page. How do we do that? First, let's start with our offer inventory. That's our source of truth, our full library of content, where all of our offers are stored. Then from that group, we might not necessarily want just any three offers from this library. Maybe we have a pre-specified group within that inventory, which we call a collection of offers, and we say, "I want to pick from this specific group of offers to go into Decisioning." So that's where we think about our Pre-Calculated Constraints. Then we might have some Eligibility Constraints. Maybe there's an offer that's specific to gold status members or subscription members, and let's say that Mark is neither of those things. So those offers would be taken out of the running for Mark as well. We might also have Placement Constraints or Location Constraints, where offers are configured for specific locations that maybe are not the home page. Those would also be taken out. There might also be Suppression or Capping Constraints. This is where fatigue management comes in, where we might say, "I don't want an end user to see offer A more than three times a week." And let's say Mark's already seen offer A three times this week. That offer would be taken out of the running. Other Capping Constraints would be applied similarly. So after all of those layers of filtering, you go down from, in this example, 500 offers to 30 offers. So what happens now? Now we go into the ranking component, where we apply some arbitration logic for selecting the top three offers, in this example, for that web placement.
In this example, we're using AI ranking, but we have other methods of ranking, which I'll talk about later. Here, AI ranking picks the top-propensity offers, the top three that will then get shown on that web page. We're looking at a web page in this example, but I just want to call out that delivery of those offers can happen across any channel.
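The filter-then-rank flow just described can be sketched in a few lines of code. This is a conceptual illustration, not Adobe's implementation; the offer fields, constraint names, and propensity scores are all invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Offer:
    name: str
    placements: set                                   # locations this offer is configured for
    required_tags: set = field(default_factory=set)   # eligibility, e.g. {"gold"}
    weekly_cap: int = 3                               # capping: max impressions per user per week

def decide(offers, user_tags, placement, impressions, propensity, k=3):
    """Filter the inventory through each constraint layer, then rank the survivors."""
    eligible = [
        o for o in offers
        if o.required_tags <= user_tags                    # eligibility constraints
        and placement in o.placements                      # placement / location constraints
        and impressions.get(o.name, 0) < o.weekly_cap      # suppression / capping constraints
    ]
    # Ranking: arbitrate by propensity score and return the top k.
    return sorted(eligible, key=lambda o: propensity[o.name], reverse=True)[:k]

# Mark has seen offer A three times this week and is not a gold member, so
# A (capped), the gold-only offer B, and the email-only offer C all drop out.
inventory = [
    Offer("A", {"home"}),
    Offer("B", {"home"}, required_tags={"gold"}),
    Offer("C", {"email"}),
    Offer("D", {"home"}),
    Offer("E", {"home"}),
    Offer("F", {"home"}),
]
scores = {"A": 0.9, "B": 0.8, "C": 0.7, "D": 0.6, "E": 0.5, "F": 0.4}
picked = decide(inventory, user_tags=set(), placement="home",
                impressions={"A": 3}, propensity=scores)
# picked holds offers D, E, F, in propensity order
```

The same `decide` call works for any placement, which is what lets the selected offers be delivered across any channel.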

I want to just pause and see, from a quick show of hands, does this make sense to everyone? Are there any areas that maybe I can go through again, if I don't see enough hands up? So how are we feeling about this? Okay. That was more than I expected. Okay. So we'll move forward.

Got a room full of experts. Yeah, exactly. So leveraging offer Decisioning correctly entails a few key building blocks that boil down into a few key questions. Who is the end user or audience? What might we know about that end user or audience that we can leverage to tailor our content to them? For example, maybe we know this person's a subscription member. Maybe we know certain interests that they have. What offer or content is best for this interaction? How are we thinking about our logic for selection, our ranking, and any eligibility that we might want to add here? When or where is the best way to have this interaction? What's the best channel for it? What's the best time? And why? What's the business rationale for this engagement overall? Decisioning addresses a lot of these business challenges through a few key components. We're able to leverage our real-time customer profiles: we have a holistic understanding of each profile that we can use to tailor our content and make sure we're personalizing and keeping our content relevant. We have a centralized offer library, a source of truth for all of our content housed in one place. We leverage a decisioning engine, with those key components of filtering and ranking, that allows arbitration to be sourced centrally. And finally, we have cross-channel integration: after we have selected the offer or offers that we want to send to the end user, we can apply that across any channel.

So now that we have an understanding of the high-level definition and use cases of Decisioning, I want to focus a little more on AI Model Ranking. But before I do that, I want to first talk about all of the methods of Ranking that exist with Decisioning. First and most straightforward, we have Rule Ranking. This is, for those familiar, our priority scores, where you apply a manual score to the offer at offer authoring. You can apply any value here when you're creating the offer. And basically, how this works is that after you go through all of that filtration I talked about, ranking literally looks at whatever offers are left and selects the top-ranking offer or offers, depending on your use case.

So this one's a little more straightforward. The second layer is Formula Ranking. This is where you can think about dynamically adjusting the weights of those offers, those priorities you defined at authoring time, under specific conditions. So you can think about things like: if my end user lives in Paris and I have a few offers that have the Eiffel Tower in the background, maybe I want to boost the ranking of those offers specifically. I can leverage profile attributes in that example, offer attributes also in that example, or context data, things that are coming in at the time of the request. For example, if I have some offers focused on warm-weather products and the temperature is above 70 degrees where my user lives, maybe I want to boost those offers as well. So we can react to things that are coming in in real time, and that's where Formula comes in. Again, this is adjusting those priorities under those specific conditions. And then lastly, we have Intelligent Ranking, and that's where our AI models come in.
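A formula of the kind just described might look like the sketch below: start from the authored priority, then boost it under conditions drawn from profile, offer, and context attributes. The attribute names and boost factors are assumptions for illustration, not product syntax.

```python
def formula_rank(offer, profile, context):
    """Adjust an offer's authored priority under specific conditions."""
    score = offer["priority"]
    # Profile + offer attributes: boost Eiffel Tower creative for users in Paris.
    if profile.get("city") == "Paris" and offer.get("theme") == "eiffel_tower":
        score *= 1.5
    # Context data at request time: boost warm-weather offers when it's hot.
    if context.get("temperature_f", 0) > 70 and offer.get("category") == "warm_weather":
        score *= 1.25
    return score

paris_offer = {"priority": 10, "theme": "eiffel_tower"}
formula_rank(paris_offer, {"city": "Paris"}, {})  # → 15.0: the Paris boost applies
```

The key point is that the authored priority stays the baseline; the formula only reweights it when the conditions hold.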

So both of the AI models that we have in house with Decisioning rank offers based on their propensity to achieve the success metric that you define. They both employ goal-based optimization, so they learn over time to help drive your business outcomes. We have auto-optimization, which is focused on maximizing the overall return for your entire offer group. That's when you have an audience and you just want to find the best overall winner, the single best offer for that entire group. And we also have personalized optimization, which maximizes the return per profile. That's more of that one-to-one type of matching.
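Goal-based optimization of this kind is commonly built on multi-armed bandits. The epsilon-greedy sketch below illustrates the idea behind auto-optimization, learning the single best offer for a whole group by splitting traffic into explore and exploit; it is a simplified stand-in, not the actual model Adobe ships.

```python
import random

class AutoOptimizer:
    """Epsilon-greedy bandit: explore on a small slice of traffic,
    exploit the best-converting offer so far on the rest."""
    def __init__(self, offer_ids, epsilon=0.1):
        self.epsilon = epsilon
        self.shows = {o: 0 for o in offer_ids}
        self.successes = {o: 0 for o in offer_ids}

    def choose(self):
        if random.random() < self.epsilon:            # explore traffic
            return random.choice(list(self.shows))
        # Exploit traffic: pick the highest observed success rate so far.
        return max(self.shows, key=lambda o: self.successes[o] / max(self.shows[o], 1))

    def record(self, offer_id, success):
        """Feed the outcome back so the model learns over time."""
        self.shows[offer_id] += 1
        self.successes[offer_id] += int(success)
```

Personalized optimization extends the same loop by conditioning the choice on each profile's attributes, so the "best arm" can differ per customer.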

We have a few recent developments in the space. First, Custom Metrics gives you the ability to define the custom optimization metric that you want the AI models to perform against. Second, we have Lift Measurement, a way to visually see the difference between the explore and the exploit traffic of a model against your optimization metric, to understand the performance of those models. Both of those are available today for our personalized optimization model for CJA customers.
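Lift here is the relative improvement of the model-driven (exploit) traffic over the randomized (explore) traffic on the optimization metric. One common way to compute it, shown below as an assumption since the talk doesn't spell out the product's exact formula:

```python
def lift(explore_successes, explore_n, exploit_successes, exploit_n):
    """Relative lift of exploit traffic over explore traffic on a success metric."""
    explore_rate = explore_successes / explore_n
    exploit_rate = exploit_successes / exploit_n
    return (exploit_rate - explore_rate) / explore_rate

lift(25, 100, 50, 100)  # → 1.0: the model doubled the explore conversion rate
```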

Something else we're working on is our AI Formula Builder. It's very similar to the ranking formulas I talked about earlier, where you can use profile attributes, offer attributes, and context, and manipulate all of those in the same way to adjust the weighting of an offer. But instead of just using priorities, you can now also use your AI score output. What that means is you can assign the ranking score of any or all offers of interest using the AI model output, the priority score, a propensity score that could be derived externally and live on the profile, or any static value that you could also place in here. So that's all available in one place, and that will be coming out in April, so very soon.
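In code terms, the idea is that a single ranking expression can blend all of those signals. The sketch below is purely illustrative; the field names, weights, and the linear combination are assumptions, not the formula builder's syntax.

```python
def combined_score(offer, profile, weights):
    """Blend AI output, authored priority, an externally derived propensity
    stored on the profile, and a static value into one ranking score."""
    return (
        weights["ai"] * offer.get("ai_model_score", 0.0)            # AI model output
        + weights["priority"] * offer.get("priority", 0.0)          # authored priority score
        + weights["external"] * profile.get("ext_propensity", 0.0)  # propensity on the profile
        + weights.get("static", 0.0)                                # static value
    )

combined_score({"ai_model_score": 0.8, "priority": 10},
               {"ext_propensity": 0.5},
               {"ai": 10, "priority": 0.5, "external": 4, "static": 1})  # → 16.0
```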

And with that, I will turn it over to Anya to talk about how T-Mobile is making this real. [Anya Edstrom] Yeah. I'm excited about all the things that are coming, and to be completely transparent, we're not using it all. We're using some of the things, some of them better than others, and we're still on a very long learning journey with Decisioning. We also use it in different ways in app and on web, so there are nuances we're always contending with. But today, I'm here to talk about T-Life Homepage Optimization. We implemented a combo of the ranking methods, including formula and intelligent ranking, and we think about content. When I was talking earlier about all the different teams we have to support: you've got your prospect or acquisition team wanting to see that best deal, the best iPhone deal. You've got your base customer team wanting to see that churn offer. You've got your big brand moment that wants to see its shiny thing at the top of the page. And we've got to think about how our information architecture on the home page has to interact with all of that. We use experimentation to pressure-test it and make sure it's working as hard as we want it to. We're also trying to figure out whether to put a bunch of content at the top of the page and optimize the individual modules, versus starting to think about what a one-to-n experience could look like, where content actually fluidly moves up and down based on personalization and customer interaction. I think that's a really unique way to start to decide whether you want to keep content categories or you want the flexibility for more one-to-one. So we have a lot of work to do here, but in this particular case, we really wanted to drive higher engagement through those personalized experiences, and we found that it worked. We saw a lift in engagement. We saw double-digit growth in order volume. But we learned a lot along the way.
And so that's what I wanted to spend a little time talking about today. The delicate balance between business prioritization, or rule ranking, and letting the model do the work for you is a hard one to strike, because at the end of the day, you have a lot of leaders who want control, who want to see their thing at the top of the page. I've got my team here giving me smiles, because we know how that can be. And yet we really want to let the model do what it's supposed to and actually use those behavioral signals to give content the right place and priority as the model allows, instead of putting it there by decision or by rule logic. The other lesson we really learned is that it's only as good as your measurement system. We were very used to looking at orders and conversion to decide what worked and what didn't, and a couple of things came out of that. Now we've got 20 apps in one, so there's a lot of content that we didn't really anticipate and needed to absorb into our measurement framework. T-Mobile Tuesdays is measured very differently than our iPhone offer, which is measured differently than a benefit, or even a service-related message or a network outage, for that matter. So how do we take that into account, and then also feed that in so the optimization works the way we need it to? Some of that goes back to the IA, the information architecture, where maybe you need a specific place for alerts, and you don't necessarily want to put those up against the iPhone offers. So we've really created some mechanisms to help us understand what that customer behavior is. Some of it goes back to actually letting the rules and the logic do the work, but some of it is also just doing a lot of UX testing and making sure we watch customers really interact with the app and with the experience, especially on the home page, which has to do a lot of work.
The next thing was a mindset shift for our leaders who wanted control. We had been doing very manual prioritization, especially on the homepage of our website: I want to see this offer in this spot, and I want to see that offer in that spot, and I don't want anything different. And especially if we're gapped to quarter numbers, we want to adjust accordingly. We get that all the time at T-Mobile, and I'm sure you're all familiar with that type of interaction as well. So how do you get your leaders comfortable with this whole concept of letting the machine do the work? It's hard. It's been a lot of education, a lot of thinking through. I learned a new term: change activation. Thank you, Katie and Accenture. Typically, I call this change management, but I like the idea of change activation because it's a little more inclusive of bringing everyone together, rather than a top-down approach. But it's so important to create a lot of visibility into how you're measuring and how it's performing. Having the ability to A/B test against a non-Decisioning experience would obviously be great. We didn't have that, so we had to build that trust over time. And hopefully, we will get to a point where we have a little more flexibility in our A/B testing, as my testers here would attest. But alas, we do not yet. Number four was around scaling content. The machine is only going to be as good as how much content you put in, and it can't decision against what doesn't exist. So for us, it was really about thinking through how much content, what content, and even the differentiation of content. Because if I've got an upgrade offer and an add-a-line offer, you could literally just say "iPhone On Us," and it applies to both. But that doesn't really distinguish any level of personalization or speak to that customer's need state.
And so for us, it's how do we expand and accelerate content, really thinking through different variations, whether that's an image or a message or all sorts of things, that actually help us navigate what good looks like, and how many. I was even just talking to my VP this morning about putting it into the machine learning on our top three cards on the homepage: is it 50? Is it 200? Is it 1,000? We don't know yet. We're still trying to figure that out, and trying to understand how much variance the content has to have for the machine to even pick up different signals. So those are some of the things. I blurred the lines a little between key learnings and what's next, but we definitely want to accelerate content and see what the right threshold is. And then we also want to enhance those insights and think about macro learnings. Because right now, I feel like we're very focused on what a given piece or type of content did. We really want to get to a point where we're looking longer term, over quarters or even years, at what content does in its seasonal moments and how that changes and shifts. So pulling ourselves up to the 10,000-foot view is really important.

Now I'm going to pass it back to Deepti to talk about Experimentation. Awesome.

So there are a few different areas where Experimentation and Decisioning can play together. The first is Content Experimentation, where you can look at various aspects of the message or the offer itself and test them against each other. For example, you could test the copy, layout, images, or colors of a given offer, or test separate offers against each other. In this example, we're adjusting pieces of the offer, the colors and some items here, to test directly at the content level. The next layer is that rather than experimenting on content, you can experiment on the ranking methods themselves. So you could ask a question like: "Does my formula perform better than my AI model? Does either of those perform better than my priority ranking method?" You can use Decisioning and Experimentation together to test which selection strategy or ranking method does better. And you can take it a step beyond that: rather than looking at an individual experience, you can look at a set of experiences and directly experiment on how those sets of experiences, or journey paths, perform against each other.
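Testing ranking methods against each other starts with stable traffic splitting. A common pattern, sketched here as an assumption rather than how AJO allocates traffic internally, is to hash a user ID into an experiment arm so each user consistently sees the same ranking method:

```python
import hashlib

RANKING_ARMS = ["priority", "formula", "ai_model"]   # hypothetical arm names

def assign_arm(user_id, arms=RANKING_ARMS, experiment="ranking-method-test"):
    """Deterministically bucket a user into one arm via a stable hash,
    so repeat visits keep the same ranking method."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]
```

Each arm then serves its own selection strategy, and the success metric is compared per arm to decide which ranking method wins.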

So those are the separate levels of Experimentation. [Anya Edstrom] And now I get to talk about how to make it real. We've got two use cases that I'll go through today in this particular area of Decisioning plus Experimentation. The first is at the basic level. It's just content variants, right? I just wanted to see how many pieces of content I could put into the machine and see how it worked. And in this case, we did over 50 variants of a travel offer.

And the goal was to understand who it resonated with and why, and to increase customer engagement, obviously, but in this case, we really focused on travel.

And so what did we learn? We were surprised. Travel offers are consistently in our top five highest-performing offers. And I'm not just talking about in the placement where we have it, which is actually pretty low on the T-Life homepage. It's performing really well across all of our different content types, even against iPhone offers and benefit offers, like Starlink, which we just launched a few months ago. So lots of really great learnings there, and now we're asking, "Okay, do we move that up the page? Do we start putting travel offers into the top three slots and see if it gets even more engagement?" Which is why testing is so fun. We really think it's an opportunity for us to test not just things like travel, but across our whole content portfolio. So what did we learn? We learned that cross-channel, we could validate results in one channel and carry those over to another. We also adjusted the experiment, actually before it even started, for channel-specific capabilities, which are different for us between web and app. So how do we structure the test so that we can modify it and get the right learnings, rather than saying, "Oh, we can't do it the same in both spots, so we're not going to do it at all"? Making those adjustments ahead of time made sure we got the right information on the output. And then last, we overcame channel limitations with strategic adjustments. If we weren't able to do something in a certain way, how could the experiment pivot and not stick rigidly to a test plan? Instead of removing the channel altogether, how could we find creative ways to make it work? So just being flexible was a really important part of the process. And then, where do we go from here? Expand creative testing, obviously. We did it with travel offers, and we have a lot of other content categories we can move into. We also want to explore other journeys and other channels that we can expand into.
So we have other owned channels like Email, SMS, and Push. I get to talk about that next, but those are other channels where we could think about more of an end-to-end journey for the customer and meet them where they are. And then lastly, propensity models: making more predictive analytics a part of our actual experimentation is a next step forward too.

So I'll go right into the next one. This is one that I'm super excited about, because this was really our first foray into looking across multiple channels and deepening a program that already existed. We ended up taking a tiger team and having them really think about how we could do this fast and effectively. We had an abandonment email program, as many of you probably do as well, as well as web content that would show up. But our web would optimize over here and our email would optimize over there, and they didn't really talk to each other. So the exciting distinction here is not only bringing the analytics into a cross-channel journey, where you've now got in-app cards and messaging, web, email, and push notifications all talking to each other. We also expanded the type of content that we included. So instead of cart abandonment alone, we did browse abandonment, shop abandonment, and then ultimately, if you didn't check out. That really gave us opportunities to talk to customers in different ways and in different parts of their journey, so that we could customize the messaging accordingly.

And obviously, most of it was to increase engagement, but ultimately, we wanted to also see upgrades and add-a-lines increase, which was the primary use case that we put to this task.

And the results were great. We saw lifts up to 63% and a double-digit increase in our weekly order volume. So everyone is very happy in T-Mobile land and wants to see it grow to other journeys and other content types. But what did we learn? Similar to my previous learnings, we leveraged each channel's unique value. In places like web and email, we used rich content and rich imagery to really sell the story, whereas a push notification has a much shorter footprint for what we can tell customers, so we really have to drive urgency. And then we followed the customer where they went. When you think about the targeting logic, we made sure it was fluid across all of those channels and moved with the customer profile. And then, letting customer signals drive adaptive messaging: if we learned something from the customer, we actually changed the messaging in the next touch point they got, so it could adapt to how they interacted with all of those channels. And if we started to see more customers interact with, say, push notifications, we sent more traffic there. That's really where Decisioning allowed for flexibility and scaling in the experiences that were doing better than others. And then what's next? Refining and testing content is always a part of it. We are a continuously optimizing engine. We never-- well, not never, but as much as we can in digital, we try not to set and forget, so that we can eke a little more out of every experience and make it a little better. And then we also want to incorporate more channels. One of the things we considered at the beginning was adding some of our paid media, and we found it to be a little too complicated and complex. It added an additional partner team to navigate. So for us, the call was to prioritize speed to market and quicker learnings.
We decided to pull in only the four channels that we did, with the idea that we would expand to more channels at a later date. And then, lastly, expanding to other journeys. We sell insurance. We obviously have travel offers. We have ways that we want to deepen customer relationships with our benefits. So there's a lot of opportunity to extend all of the great learnings and experiences we had with this Abandonment Journey Optimization. And with that, I'm going to pass it on to talk about AJOD, which is the new cool thing. [Deepti Anthony] Thanks, Anya. So yes, that third focus area we want to talk about is AJO Decisioning, AJOD. I'm sure there have been a few different acronyms for this in some conversations, but this is our next generation of Decisioning within AJO. It was conceived for two key purposes. One, to lay the groundwork for new content objects beyond just offers, moving into things like products, content, and calls to action. Two, to unify the workflow between Decisioning and AJO at large: specifically, separating selection from rendering, and unifying rendering with campaign authoring and journey authoring.

So what can you expect to see today with AJOD? One, you'll find schema-based item catalog management. What does that mean? For those familiar with offer decisioning, you might have to manually create new attributes with every new offer, and with every new offer, recreate those attributes again. Rather than that, we have a centralized way of managing the schema of an offer, and that will be reflected in all new offers that you create. We have robust collection rules. So rather than having to manually tag or assign a group of offers to be brought into a collection for Decisioning, you can use all of the custom attributes and standard attributes that already exist on the offer and apply any flexible logic to tie those things together. You can use the information that's already there rather than having that manual step. We have new constructs called a decision policy and a selection strategy within AJOD. The takeaway here is that a selection strategy, which is a collection, a ranking method, and any eligibility criteria you might want to add, is location-agnostic. Meaning you can create this reusable component, save it in an inventory, and then pull it in to use against any location that you create, rather than, for those who are familiar with offer decisioning, having it tied to a specific location and rebuilding it every time. So the selection strategy is new. It's the ability to keep things reusable and to give you more flexibility with where you place this Decisioning logic. We have our code-based content authoring workflow. That's the first channel we're debuting AJOD in, and we'll have some more thoughts on that later in the session. And we have experimentation within Decisioning.
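The selection strategy concept, a reusable bundle of collection rule, eligibility, and ranking that is independent of any one location, can be pictured like this. The class and field names are illustrative stand-ins, not the AJOD API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SelectionStrategy:
    """Location-agnostic bundle: build once, reuse at any placement."""
    collection_rule: Callable   # which items belong, via attribute-based logic
    eligibility: Callable       # per-profile constraints
    rank: Callable              # ranking method (priority, formula, AI score)

    def select(self, catalog, profile, k=1):
        pool = [o for o in catalog
                if self.collection_rule(o) and self.eligibility(o, profile)]
        return sorted(pool, key=lambda o: self.rank(o, profile), reverse=True)[:k]

# Collection by attributes that already exist on the item -- no manual tagging.
travel_strategy = SelectionStrategy(
    collection_rule=lambda o: o["category"] == "travel",
    eligibility=lambda o, p: not o.get("paid_only") or p.get("is_subscriber"),
    rank=lambda o, p: o["priority"],
)
catalog = [
    {"name": "city-pass", "category": "travel", "priority": 5},
    {"name": "lounge", "category": "travel", "priority": 9, "paid_only": True},
    {"name": "iphone", "category": "device", "priority": 10},
]
picked = travel_strategy.select(catalog, {"is_subscriber": False})
# For a non-subscriber, only "city-pass" survives; the same strategy object
# could be reused against any other location's catalog request.
```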
So I know we talked about that a little bit earlier, but that ability to layer in Decisioning, whether it's the offer level or the ranking method level, and be able to test those against one another.

And with that, I'll hand it over to Anya to talk about how AJOD is playing a role at T-Mobile.

So Subscription Marketplace Automation. This is a pain point for a lot of my operators here in the room.

We have a lot of ways that we bring content to customers, and in this particular case, the subscription marketplace is slightly different for every one of our customers. Whether you're on a certain rate plan or you have certain attributes on your account, you're eligible for different things. So this was a way we could take dynamic merchandising and make it real with AJOD, using the AEM-based content, but then also layering on customer profiles and making unique experiences. In this case, we powered five different campaigns that served those different customer profiles; we grouped them five ways. Normally, we would have had to manually cater and merchandise to each of those five different experiences, and this allowed us to dynamically merchandise the content each customer was eligible for, giving us the opportunity to drive engagement with things they might not otherwise have known about. So it's more of an education and awareness play for what they're eligible for, and it also gave us an opportunity to upsell when they were maybe on a non-paid version of a subscription. So really a lot of great opportunities to broaden the portfolio of content that we put in front of customers, beyond just the device that most people recognize and think of carriers for.

And we saw engagement increase. We saw great customer retention and stickiness from the activity: customers stayed longer and found more value in the benefits and subscriptions they had access to.

What did we learn along the way? This centralized approach broke down silos. We talk about benefits in multiple locations, and to be fair, we probably still have some work to do (I keep looking at Gwen) to consolidate those further. But this allowed two of those locations, web and app, to have dynamic experiences. And we created a forum for all of the partners who have a hand in creating our portfolio of benefits and offers, with rules and governance for how they can display content, and more education on Decisioning at large. I already mentioned that we consolidated all those experiences into one, which gave us more time to focus on other things. We love that: the less we spend on mundane tasks, the more we can spend on strategic opportunities. That's a big win for the team.

We also really had to prioritize data hygiene, because all of these experiences were powered by our CDP. Making sure it was exactly right and pulling in the correct information, so the experiences genuinely matched each customer's eligibility, was important. If anything: data is king and a very powerful tool, so pay a lot of attention to it.

And what's next? We want to test the level of personalization. We used five profiles, but is the right number five? Ten? Twenty? One to one? There's a lot of opportunity to think about expansion here: what is the right ROI for the level of personalization we want to get to?
We also want to scale this type of dynamic content to more experiences so we can deepen the customer relationship by getting the right information to customers based on who they are. Sometimes we get into situations where we treat customers as a group: everyone in this segment is eligible for an upgrade. But we might be featuring content at what we call a band level, the account level, not something unique to each individual line. There are ways we can deepen that sophistication across experiences, to customize and tailor the message so it resonates and makes customers feel like we're talking to them, not to just any customer. And lastly, additional channels. Web and app are where this lives now, but we could see it expanding into the full end-to-end journey, meeting customers where they are: talking to them through email, or even taking the signals of how they interact with an email, an SMS, or a push and feeding those back into what we show them on these pages. Lots of opportunity to continue to test and optimize.

And now what's coming next? Tell us, Deepti. So what is coming next in Decisioning? So I know we talked about code-based experience being our first channel that AJOD is supported in. We're working on building in additional channel support to get to that parity with offer decisioning, starting with email channel coming soon, followed by push, web, etcetera. We are investing heavily in our AI models. So I talked about a few of the enhancements that we had recently. We're also working on general performance improvements, etcetera. We're working on Decisioning on journeys and journey paths, and I'll talk about a little bit of what that looks like. And lastly, we want to expand our content catalogs, moving beyond offers into products, calls to action, etcetera.

So what are we ultimately working towards? When we think about an interaction, there are several questions to consider, which you may remember from the beginning. Who is the right person to engage with? What's the right content: an offer, a product, something else? Where: what's the right delivery method or channel for this interaction? When: what's the right time for this interaction? And why: what's the business rationale for this type of engagement? We want Decisioning to optimize the who, what, where, and when, as long as you know your why.

So what does that mean exactly? We want to bring Decisioning beyond just offers, all the way to journeys, so we can optimize for the next best (insert experience here) at scale, for any interaction.

So what does that mean specifically? I talked about journey paths a little earlier, in the context of experimentation: we might think about these different paths and how we can test them against one another to find the best path to move a user down. But we can also layer in Decisioning here, automating the selection of any one of these paths and customizing it for each individual user. When we think about the nodes, we can use actions and wait times, and we can put offers in any of them. We have a lot of flexibility in how we layer in these experiences.

We also want to go a step higher than that. Rather than looking at the paths within a journey canvas, we want to be able to arbitrate between journeys themselves. Say we have a constraint, for example, that a user may enter only one journey per week, and that user is eligible for several journeys. How do we decide which journey the user should go down? Right now, priorities are the way we arbitrate between journeys, but we want to bring in the additional Decisioning components, such as formulas and AI ranking, as a way of intelligently arbitrating between them.
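The shift described here, from static priorities to score-based arbitration, can be sketched in a few lines. This is a conceptual illustration under assumed names, not the AJO API: today's behavior picks the highest-priority eligible journey, while the envisioned behavior ranks eligible journeys by a formula or AI score for the specific profile.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Journey:
    name: str
    priority: int                      # static priority (current arbitration)
    score: Callable[[dict], float]     # formula / AI ranking (envisioned arbitration)

def arbitrate(eligible: list["Journey"], profile: dict, use_ranking: bool) -> "Journey":
    """Pick the single journey a user enters, e.g. under a one-per-week constraint."""
    if use_ranking:
        # Intelligent arbitration: highest predicted value for this profile.
        return max(eligible, key=lambda j: j.score(profile))
    # Static arbitration: lowest number wins, regardless of the profile.
    return min(eligible, key=lambda j: j.priority)

journeys = [
    Journey("Device upgrade", priority=1,
            score=lambda p: p.get("upgrade_propensity", 0.0)),
    Journey("Streaming upsell", priority=2,
            score=lambda p: p.get("streaming_propensity", 0.0)),
]

profile = {"upgrade_propensity": 0.2, "streaming_propensity": 0.8}
by_priority = arbitrate(journeys, profile, use_ranking=False)  # "Device upgrade"
by_ranking = arbitrate(journeys, profile, use_ranking=True)    # "Streaming upsell"
```

The same user gets a different journey under each mode, which is exactly the gap between priority-based and Decisioning-based arbitration that the talk calls out.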

So when we think about the entire spectrum of optimization across AJO, we want to infuse intelligence at every inflection point in the customer's journey. At the top, journeys, which we just talked about. A level down, journey paths: a selection or grouping of experiences. A step below that, channel optimization: automating the selection of a channel for a given user. Below that, messaging: what's the best message for my end user? And then, even more granular, the focus of our discussion, offers: what's the best offer in a given interaction? We want Decisioning to flow through from the very top to the very bottom.

So with that, I want to do a quick recap, and I'll let Anya talk about some key takeaways. So bringing it home a bit, these are some of the macro learnings because we talked about a lot of use cases. We use Decisioning in a whole lot of ways, and these are the top five themes that really came through.

First, automation drives better experiences and efficiency. Not rocket science, but it's true. When we took our hands off the wheel and let the machine do the learning, we increased engagement, increased orders, and flowed through to the outcomes we were looking for, while giving time back to our hands-on-keyboard folks and our strategists to experiment and expand on other things. That was a huge learning across many of these takeaways.

Second, finding the balance between business priority and letting the machine and customer behavior dictate what goes where. It's a continuous journey for us at T-Mobile. We have a lot of leaders who like to see their thing at the top of the homepage, and we still struggle with breaking through on how and when that should matter. Take the homepage as an example: if it's the Apple new product introduction in the fall, we darn well better put that iPhone at the top of the page, and we know that because it's an OEM commitment. But below that experience, we can talk about things that are more additive and let Decisioning actually power the experience.

Third, success requires a great measurement framework. We really need to think through what success looks like, and that can be dictated by many different content types. So we need a measurement framework flexible enough to embody things that aren't strictly related to orders. My team focuses on sales and activations in the digital experience, whether web or app. But we also have to think about the big brand moments, Magenta Status or T-Mobile Tuesdays, things that don't have as tangible a sales goal attached but are really about creating customer stickiness, about customers seeing us as more than just their carrier and more as connecting them to their whole world.

Fourth, Decisioning is only as good as the amount of content and the amount of testing you do. We're still early in our journey with Decisioning. We've been using Target on our web for years and years, but this is a new product, a new opportunity, new ways of thinking and new ways of presenting content. So we're excited to figure out how to do this better and with more variation, to understand how different the messaging needs to be even within a single offer type, so the behavior signals actually make the machine learn and present different things.

And last, my new favorite terminology: change activation. It's critical. You need ways to train, to educate, to drive adoption, and to prove the value of what you're doing. This is something T-Mobile doesn't always invest enough in, but it's so very important. My last word to you all: please invest in change activation. To get everyone on board and drive the level of engagement and adoption something like AJOD needs, you have to create that journey for your internal employees, stakeholders, and leaders as well. With that, Deepti is going to talk about what's next for you.

Awesome. I know we said a lot in the last hour, so what do you do with all of this information? For those of you familiar with Decisioning in AJO, please explore our model capabilities.
Try out an experiment, layer in Decisioning, and see how it goes. Try AJO Decisioning to understand the functionality of our next generation of Decisioning, have a voice in our future product roadmap, and give us live feedback.

For those who have not yet explored Decisioning, please peruse our Experience League documentation to understand how Decisioning can drive your workflows and help you unlock personalization, drive efficiency, and future-proof your marketing. And with that, I have to plug the survey: please submit it for a chance to win one of the session prizes or the daily grand prize. We'll take questions after the session, so everyone has a chance to get to the mixer. Thank you so much, everyone. - Thank you. - Thank you.

[Music]

In-Person On-Demand Session

How T-Mobile Personalizes Customer Experiences with AI Decisioning - S527


About the Session

T-Mobile built the first and largest nationwide 5G network by embracing its mantra as the “un-carrier.” Being the un-carrier has become synonymous with 100% customer commitment. That commitment extends into ensuring that interactions and experiences in any channel are personalized, relevant, and exceptional. Anya Edstrom, T-Mobile’s director of Digital Product Strategy & Marketing, and Deepti Anthony, product manager for Adobe Journey Optimizer, discuss T-Mobile’s digital shift to customer-centricity and the role of next-best decisioning.

Key takeaways: 

  • How Adobe is developing omnichannel decisioning in Journey Optimizer through AI and business ranking
  • How T-Mobile is leveraging Adobe Experience Cloud products, like Adobe Target and Journey Optimizer, to experiment and personalize at scale across channels

Technical Level: General Audience

Track: Customer Journey Management

Presentation Style: Case/Use Study

Audience: Campaign Manager, Developer, Digital Analyst, Digital Marketer, Marketing Executive, Product Manager, Marketing Practitioner, Marketing Analyst, Marketing Operations, Business Decision Maker, Data Practitioner, Marketing Technologist

This content is copyrighted by Adobe Inc. Any recording and posting of this content is strictly prohibited.
