[Music] [Sunish Verma] All right, good afternoon, everyone. Welcome to Adobe Summit. Welcome to our session, Deploy, Run, and Operate Adobe Experience Platform. I just want to start with thank you. Thank you for being at Summit. Thank you for doing what you do with our products, building amazing customer experiences for your customers and organizations. So with that, before I dive in and talk about what we are presenting today, I want to start with a question.
My question is, what does it take to run a marathon? I don't know if there are any runners or bikers in the room. Oh, there you go. We've got a few runners and people who bike in the room.
If you have to plan for a marathon, you start asking yourself some basic questions. Like, what are my milestones for the marathon? I'm not going to run all 26.2 miles on day one. I have to build up to that. What is my running technique? How am I going to run? How am I going to pace myself for consistency?
What would be my diet? What is it that I'm going to eat that will fuel me to keep running and give me the impetus I need to run this marathon? What is my daily or recurring practice? How do I keep practicing so that I can build the power and the stamina to run it? The other important part is the technology: the gadgets, the tools, the running shoes that you need. Those are the standard questions you start asking if you were to run a marathon. What am I trying to convey with that question? Three things, right? You build your goals of what you want to accomplish, you think of an execution path, and then you have to track, measure, and see how you're progressing toward your goals.

Well, for this afternoon, our conversation is very similar: how do you deploy these platforms for an enterprise organization, or for a small or medium business, to drive business value, and what does it take to do that transformation? To talk about that, the way we have broken out today's conversation is: I'm going to talk about the business value. What's the business purpose of doing this? What is the right architecture that supports it? And how do the Adobe products fit into that narrative of driving business value? Then we'll talk about implementation. My peer Kevin Foster is here, and since the devil is always in the details, he's going to talk about the implementation best practices and the right guardrails you should keep in mind to get the architecture implemented correctly. Then we'll talk about our own case study of Adobe: how do we do this for our own business? Joel Huff, who's in the room, will walk through that case study with you. And then we'll wrap up with any open questions you have and leave you with some takeaways. And obviously, we're always happy to do a follow-up if we're not able to cover your questions right now.
To begin with, I'm Sunish. I've been in the industry for about 19 years. Every time I say 19 years, I feel it gives you a clue about my age, but that's okay. And I've done a lot of work in pre-sales, consulting, engineering, and implementation.
I've been working on Experience Cloud since 2016, and I was one of the founding architects on Adobe Experience Platform when we started deploying and executing it for our customers in 2019.
So what I want to cover in my conversation with you all today is: what's the business problem we are solving when we say we have to deploy these platforms or services at an enterprise level? Where does Adobe fit into solving that problem? And then I'll walk you through some of the case studies and use cases that we have successfully launched for quite a few enterprise customers.
To fast-track us, I want to first define the business problem. What does scale truly mean for enterprises, and what's the definition through the lens of a consumer and an organization? Scaling with large enterprise platforms is about building personalized experiences for customers across multiple channels, providing connected experiences that you can measure and that drive impact. And when you're able to do that while being better, faster, and smarter, meaning you have the right people, process, and technology, that is what brings it home.
How do the Adobe products fit into that? The way we look at Experience Cloud and how the products fit in, not to go into all the details, is that we look at personalization at scale as a LEGO block of three problems. How do you build content at scale, and how are you able to channel that content across the different engagement points for your consumers? How do you bring this large enterprise data, whether it's an enterprise use case or a marketing use case, and put it in a way which is consumable, actionable, and gives you the right insight at speed and scale? How do you engage with your consumers when it comes to connected journeys? How do you build cross-channel activation? How do you reach your customers at whatever stage of engagement they are at with the brand? That's how we look at Adobe from a product perspective. I won't go into the details of each of the products; I'll do it through the lens of a case study, and you'll hear a lot from Kevin and Joel as they talk about it as well.
Now the business problem makes sense, but what we at Adobe have always believed in is building the right consumer experiences. How are they connected? What you see here is a connected ecosystem: giving relevant information, a personalized experience, to your consumer at whatever stage they are at with the brand. For example, maybe I'm at the acquisition stage, at a registration or member-onboarding stage, or at a conversion stage. It depends on what stage they are in, and you should meet them there and build a connected ecosystem, so that regardless of the channel they're engaging with, it's a connected experience for them.
Well, that's the customer part, but what about the organization? From an organization perspective, there are three LEGO blocks which drive that. People: having the right set of skills and the people to drive it. Processes: what are the right processes organizationally, whether it's marketing, marketing ops, the product and engineering organizations, the run-and-operate teams, the architecture boards, the program boards, and how do they all intertwine together? And technology: what is the technology you have that will continue to modernize your capabilities to meet industry demands, and how do you have the best-in-class products that will build these experiences for you? Now, if the business problem makes sense and the customer experience and organization make sense, I want to walk you through a template, not limited to this, of how I have experienced this with a lot of our enterprise brands and how they solve this problem.
It all starts with a business objective. What is the business problem that you're trying to solve to be able to scale our tools and technologies to meet the business needs and demands? So to quote a few examples, and again, this can vary by each brand, it's about building and delivering a differentiated experience across channels.
How do you nurture and acquire new customers? That is another objective we have seen across organizations. How do you drive member engagement? Member engagement, maybe in terms of building affinity and loyalty with the brand. A lot of times we also hear that business objectives are not only business-goal driven, but there's a technology component to them: how do I consolidate the products I have in my organization? How do I bring them together and make them more efficient from a technology-landscape perspective? To be able to meet these business objectives, there has to be an execution strategy. Back to my example of running a marathon: what are the milestones? What's your path to execution? We look at it a few ways. One is to build a strong foundation, so that it has a lot of reusable components and gives you services which can be shared across different teams and different channels. So laying the right foundation means bringing the right data and the right content, which becomes your base layer, the one you can build on across different channels.
The next one is streamlining the content and the journeys. How are you going to publish the right content on the right channel? How are you going to build cross-channel activation? That is what we call streamlining.
Innovation, which we very strongly believe in, is to keep getting better in our products by building the right sets of capabilities and tools, and to keep modernizing the features and functions to meet the challenges of the current state of our industry. And then enablement: it has to be a self-service tool regardless of the persona who's going to use it.

And just to quickly map the products: Adobe Experience Platform is what lays the foundation with data. Streamlining the content and the journeys is where Adobe Journey Optimizer, Customer Journey Analytics, and Real-Time CDP play a strong role. When we talk about innovation, you've heard so much about AI throughout these last two and a half days. AI Assistant, one of our top initiatives, built to enable and help marketing or engineering personas get the right information, was a big initiative, which we continue to amplify. Building the use case playbooks to drive acceleration for the brand, so that you can deploy these products much faster. And then Content Accelerator, a really exciting feature where, as a content marketer in Adobe Journey Optimizer, you can use the power of Firefly services through a generative prompt to generate images and content variations. For enablement, our products are always API first. We strongly believe in making it better for the marketer. We never call it done; we keep iterating, and that's how we approach our product strategy.

Now, if all this makes sense, it is important to measure the success. And the best way to look at success is to define the KPIs, whether quantitative, which is where the revenue comes in, or qualitative, which is where organizational efficacy comes in. The quantitative side is more about engagement and conversion; you can have many layers of KPIs based on the personas. Organizational efficacy is more about how many campaigns you are running. Are they going out fast? Are they efficient? Are you taking less time to run those campaigns? Now what I want to talk about is a case study from time we spent with one of our healthcare provider customers that was undergoing a similar transformation.
What were their business goals? And I'll tease a little bit about use cases, because use cases are what bring to life how your consumers are going to engage with your brand. At a high level, the business objectives were to drive engagement, drive retention, decommission the legacy system, build a modern platform, measure success, and build a test-and-learn strategy. And some of the use cases were around member onboarding and conversion engagement. I'm sure they sound familiar to all of you in this room, maybe with a lot more additional detail.
The way this engagement was approached across the enterprise, with various SI partners, to drive success was the consolidation of multiple campaigns, which is truly laying the foundation. When you run these old legacy systems, you build up a lot of technical debt. And when you are modernizing the technology, you don't want to carry that over; you build a framework to modernize the technology, reduce the technical debt, and build a strong foundation.
We launched various strategies in terms of defining the right use case framework so that you can prioritize the right consumer journey. What do I mean by that? Let me give you a tactical example. In a legacy state, a lot of times when we work with brands, they have a way of doing batch-and-blast communication. Create an audience, send the email. Create an audience, send the email. Use the same templates or modify them. It's not a connected experience; it's merely coordinated, and it takes a lot of effort to drive. We changed that approach to build a consumer journey. To give you an example with member onboarding: when you onboard a member, there are organizations that keep creating audiences and sending those emails and notifications or personalizations. When someone becomes a member, there's a follow-up for something else, and so on, but they're all fractured. The way to build this is to think about a consumer journey and say, "I'm going to send communication number one. The engagement with communication number one is going to drive the next action I take with that consumer." That is what truly building a journey means, and that is where the consolidation comes in.
The next important part is channels and measurement. Having been in the industry a long time, I strongly believe that a lot of times we venture into initiatives without a way to measure success in mind. And the best way to fix that is to put it in your campaign or business strategy: how am I going to test and learn? Which means: how do I show the incremental lift of what I'm doing? How is it better and smarter than what we were doing previously? That is where the test-and-learn strategy comes into play, so that when you're launching these journeys and campaigns, you have a way to measure them.

And then there's run and operate. I think it's truly important to understand what happens when these systems go into production, when these are production-grade systems. Just think it through: you have marketing strategy, marketing ops, and product teams who unpack those marketing strategies to define the capabilities. The product teams work with the engineering teams to build those capabilities and ship them back to marketing ops to start executing. How are you going to channel those processes? What happens if, for one of those use cases, some of the systems go down? How are you going to respond to that? It's all software, it's all human beings; things will break. But what is your strategy to recover those capabilities? That's where building the right processes, having the right pods, and having the right guardrails is truly important. As we approached this, some of the wins were: we consolidated 1,000 campaigns from a legacy system down to fewer than 100 AJO journeys, which brought them 80% operational efficiency, and it improved customer engagement by 3x.
Now that's truly what it means when you say, "I have to scale, but do it in a way which is pragmatic, business and use case driven, mapped to the right architecture and the right people, process, and technology, with success KPIs defined and measured in an iterative manner." And when things don't look the way they should, you can make changes to the strategy.
Now I want to go a little deeper and walk you through where the Adobe products fit in. I'll take a simple retail customer experience and show how these products intertwine.
So here it is: meet Sarah. She's a loyal, registered member of one of the brands. She likes to browse products like running shoes, and she's big on channels like email and the app. That's the consumer persona, and what we are trying to do is give her the best experience, one that drives engagement and conversion.
Adobe Experience Platform is what powers the collection of this data, through web, through app, through offline stores, capturing what the individual is doing. It creates the profile, which gives you the attributes, the events, and the segment memberships they belong to, and does it at speed and scale.
The next part is Real-Time CDP: how do you take these signals and activate them to different channels? That's where CDP comes in, to create those audiences and activate them across different channels. If you have to build a cross-channel journey, one that is orchestration driven and business driven, that's where Adobe Journey Optimizer fits in, listening for signals like: a consumer added a product to the cart and did not purchase it, so let's send them a communication with the best offer. And what brings it home is having a tool like Customer Journey Analytics, where you can build the reporting and insights.
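To make that concrete, here is a minimal sketch of reading a unified profile like Sarah's through AEP's Profile Access API. The endpoint and headers follow the published Platform API conventions; the credentials, sandbox name, and email value are placeholders.

```typescript
// Sketch: look up a unified profile by its email identity.
// Credentials and the email value below are placeholders.
declare const ACCESS_TOKEN: string, API_KEY: string, IMS_ORG_ID: string;

const response = await fetch(
  "https://platform.adobe.io/data/core/ups/access/entities" +
    "?schema.name=_xdm.context.profile" + // ask for the profile entity
    "&entityId=sarah@example.com" +       // identity value
    "&entityIdNS=email",                  // identity namespace
  {
    headers: {
      Authorization: `Bearer ${ACCESS_TOKEN}`,
      "x-api-key": API_KEY,
      "x-gw-ims-org-id": IMS_ORG_ID,
      "x-sandbox-name": "prod",
    },
  }
);
// The entity returned carries attributes, the identity map, and
// segment memberships: the three things just described.
const profile = await response.json();
```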
So that's the technology part, that's how the Adobe products fit in.
People, process, and technology are the enablers of this use case: having the right set of people across engineering, marketing ops, and reporting analytics, having the right processes, and having the right technology. I won't go into the details of it, but building the right integrations is truly what you need here.
Now I'll quickly talk about the reference architecture. If those use cases make sense, the way we have stood up this product is in four logical categories: data onboarding, data democratization, profiles and audiences, and activation and insights. Data onboarding is about how you stream data and how you batch data; we have pre-built source connectors. When you bring the data into Platform, the way we democratize it, in simple big-data-architecture terms, is with a cold storage and a hot storage. Hot storage is where you create the audiences fast: faster speed, faster response time. Cold storage is where you have the query services and the analytical use cases. The third category is profiles and audiences: how do you create these audiences, how do you get those insights, how does the tool help you understand your consumers better? And the fourth is where we have the application services like Journey Optimizer, Real-Time CDP, and Customer Journey Analytics. When I gave you the example of building the foundation, what you see here in one and two is really laying the foundation with the right data to power these use cases, and omnichannel, cross-channel reporting and insights is block number four that we have across the products.
Now, to talk about the best implementation practices, how our customers bring this to life, and what you should be watching out for, I'm going to invite onto the stage my peer, Kevin Foster. He and I partner in the field all the time, and I can tell you, if there's one person you want to review your data model, he's the one. So with that, I'll pass it on to Kevin.
[Kevin Foster] All right. See if I turn this thing on. Okay, great.
So I'm going to talk about best practices for implementation, and three areas where, if you want to focus and get something right, these are the top three. In my experience, 80% of the problems that someone might encounter are going to be in these buckets.
You won't get it perfect. There are other aspects of the product you're going to use, but these are the top three.
Okay, I'll confess, a little bit more than 30 years? A little bit, wee bit.
The past six, going on seven, have been with AEP. I actually came to Adobe to work on AEP.
I don't touch code. I don't even know where they keep the product code. I live exclusively out in the field.
It's been fun for me coming here because I keep bumping into people who go, "Kevin, I've been in a meeting with you. We've been on the phone together. We've been in a Teams meeting." I have so many web meeting tools, and my email list is fairly long. So if any of you know me (I'm not always good with faces, but I get there), do come up and say hi after this is all over. I'd like to check in with you to see how things are going.
Okay, data modeling, number one, because everything starts with a data model.
Let's talk about schemas. If you use AEP a lot, bear with me; this might be new to some of you. The starting point of bringing data to AEP is a schema.
I've got a database background: you create a table, and you get the definition of the data and a place to put the data in one go. We don't do that. The definition of your data is the schema. Where we put the data, we call a dataset, call it a table if you like, and that's separate from the schema. It starts with a schema. Now you have some choices to make.
First and foremost is what we call a class. What is the type of data this is? Is it current knowledge of a person? My first name, Kevin.
That's my profile. My email address that I've given, that's part of my profile. It's a class called profile: XDM Individual Profile. What about my behaviors? What if I go to the website and I'm clicking? What if I'm browsing? What if I put something in my shopping cart? That's a very different kind of data. It's time-series data. It's an event. It occurred. It happened, and it's immutable. You can't go back and change your mind. You can't change the past. Profile data, though, you can change: somebody's birthday (oh, we had it wrong), someone's last name. You can update that. You pick the right class for the right type of data.
Third type is reference data, and this is special because this, by definition, is not people specific.
It's a table of ZIP codes and the demographics for them. It's your product catalog and things like that. So you have three building blocks.
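As a shorthand for those three building blocks, here is a minimal sketch of the XDM class each one composes from. The class URIs shown are the standard XDM identifiers as I understand them; confirm the exact strings in your Schema Registry before relying on them.

```typescript
// Sketch: the three building blocks and the base class each schema composes from.
// Treat the URIs as illustrative and verify them in the Schema Registry.
const schemaClasses = {
  // Current knowledge of a person: editable attributes like name and email.
  profile: "https://ns.adobe.com/xdm/context/profile", // XDM Individual Profile
  // Immutable time-series behavior: clicks, browses, cart adds.
  event: "https://ns.adobe.com/xdm/context/experienceevent", // XDM ExperienceEvent
  // Reference data that is not person-specific: product catalogs, ZIP demographics.
  lookup: "https://ns.adobe.com/xdm/data/record", // record behavior for a custom lookup class
};
```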
When you decide you want to bring data into AEP, it's exactly three. Step outside of that and AEP doesn't understand your data. Stick to that pattern and you don't have to write any code, because we wrote all the code you're ever going to need, and it can read your data when it recognizes the format. So what about that first name field? You could define it custom; you could go into the schema and just start throwing definitions in. We don't want you to, because we actually sat down and figured out what a good first name, last name combo looks like (oh, my slides got away from me). What is a good definition of a person's address, and how do we model a home address versus a work address? What is a valid format for an email, and how do we make sure we're not loading garbage into the system, or a hashed email for the same thing? We have building blocks that you can learn and build your schemas from, called a field group, because it's literally a group of fields that have a relationship to each other. Some of them are industry specific. Browse them. Now, at that same healthcare customer Sunish mentioned, when I visited with them, they said...
"Some of our people that we have profiles on are incarcerated. Do you have a field group for whether or not someone's in prison?" Got me. No, we don't. Don't think we'll get that either, but we do let you create your own. So best practice, look at what we offer you as building blocks, select the right thing, and then carefully, after a thorough review, create the net new. Everybody has a little custom. When I first started doing this, I didn't use any of these field groups. I'm like, "Nah, let's go create everything from scratch." Not a best practice. I had to do significant refactoring as I learned the value of someone's invested the time and effort to make these things for me that I can just leverage.
Mistakes in your schema design are the most expensive mistakes you can make, because once you've used a schema and it locks down, you can't edit it. You can add to it; schemas are extensible; you can grow them, and we do it all the time. But if you change your mind and say, "Oh, I don't want that, I want it this way instead," it gets rough, and at the extreme, you will have to create a brand-new schema. You will have to drop all the datasets where you ingested data. You will have to recreate all of your data ingestion. These are expensive mistakes. Don't live in fear; it's going to happen to you once.
Don't make it a habit. Best practice means the least amount of rework and the least amount of refactoring. Schemas are extensible. Don't design for 100 years from now. Don't even design for next year. Design for what you know. If you don't have data to load, don't create a schema for it. Sorry, something's possessed.
Work as you go, right? Because a schema isn't validated until you've put data in it and reviewed the format, you don't know yet. If you create a schema that you might use six months from now, by the time you go to use it, you won't even know what you built. It may not even be the same person using it.
Grow as you go. A three-to-four-month window; don't design beyond that. Don't design for something where you don't have data. You will be happier.
One of the things I do besides explaining best practice is people call me when things are broken.
I get a lot of that.
I was trying to figure out how many times I've been asked to do that. Probably somewhere between 150 and 200 customers I've met with to talk about how to make changes, how to walk back some of these things, how to move forward, and how to repair.
Once you're in production, it gets harder. You all know this. Refactoring with a living, breathing system is like changing the spark plugs on a car without turning it off. It can be done. Not your favorite way to spend your time. So careful is good, don't be afraid. Mistakes, minimize them. You'll still make them, but that's okay. It's not the end of the world, you just got to roll up your sleeves and fix them.
Identities. Somehow I just ended up spending 75% of my time working on identities, because it's something that people don't always understand, and it's the thing I test the most. Sometimes I'll go and sign up on a customer website, give my email, and then I'll go and create a second account and give the same email address, and it lets me. Then I ask the customer, "What are you using for identity?" They'll say email.
But you let two different people, me both times, give the same email address.
That doesn't uniquely identify a single person. That is not an identity for you because you don't enforce uniqueness.
Now a lot of those websites, I go test it and I get the error, my favorite error. No. That's already in use.
That's a good identity. Phone number, too: my grocery store.
My phone number there is still my old landline number, the one I got rid of so long ago, but to them it's still my phone number. And every time I go to the grocery store, I still punch in that phone number that I'll never forget, because I want that Safeway discount. Okay. Not all identities are the same. We have identities that are people identities.
An identity is the concept plus the value. Email is the concept; we call it a namespace. For the value, in this example we have a couple, Jane and John. They're both in the email namespace, but the identity is always the combination of the two: the namespace and the value.
Out of the box, remember the thing about field groups? We've already built a bunch of identity namespaces too. Please do not reinvent the identities we've given you.
I've seen it, people go off and create their own emails, no validation.
It's like, oh, man, come on, garbage in garbage out, still true.
Use the ones that are out of the box, make sure the identity is unique for you, and don't get lured in by email. We love to use it in examples because everybody understands it; you'll see it all over the documentation. Make sure it's unique for you, that it's never shared, and that it never can be shared by two different humans. That's really what this is all about: how do you uniquely identify a person so you know it is going to be Kevin and it will never be Sunish or Joel? That's all. It's that simple. An identity is not an identity unless it points to a single person.
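In an ingestion payload, that namespace-plus-value pairing shows up as an identity map. A minimal sketch, with placeholder values:

```typescript
// Sketch: identities are always namespace + value, never the value alone.
const identityMap = {
  Email: [{ id: "jane@example.com", primary: true }], // namespace "Email" + value
  Phone: [{ id: "+15555550100" }],                    // same person, different namespace
};
// If two different humans can both present "jane@example.com", their profiles
// will collapse into one. Email only works as a people identity when your
// systems enforce that each value belongs to exactly one person.
```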
SKUs, ZIP codes, hotel IDs, conference IDs, whatever: those are also identities, but they are not people identities. They do not go into a person's identity list, the identity graph. They are used for lookups, and that's all they are. If you get the class right from the previous page and you choose the right class for non-people data, you can use a non-person identity. Notice that if you and I buy the same thing, we bought the same SKU, but it will not merge our profiles, right? But if we have the same email address, we're saying, "Oh, it's the same person," and now we've just taken two people and made them one person. So be careful with your identities. They're hard to back out. It's gotten easier; we actually have a feature now, data lifecycle record delete, where you can finally go in and say, "I did not load the right identities. This has got a bad format." Please put field validation in your schemas: minimum and maximum lengths. I get a lot of those meetings, people that needed to put in validation. We have regex; you can put in patterns, all lowercase, all uppercase. I get in a lot of meetings where people are backing out data because they failed to put in validation. You can go right into the schema and put in min and max. Best practice: don't forget it. Or you will be having a project to go scrub AEP of bad data that snuck in, and it always does. Somehow, I don't know how, it will find a way in, like cockroaches.
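Here is what that looks like in practice: a hedged sketch of a tenant field with the standard JSON Schema constraints XDM supports (pattern, minLength, maxLength). The tenant namespace and field name are placeholders.

```typescript
// Sketch: validate at the schema so garbage never gets in. "_tenant" and
// "memberId" are placeholder names for illustration.
const memberIdField = {
  _tenant: {
    type: "object",
    properties: {
      memberId: {
        type: "string",
        pattern: "^[A-Z]{2}[0-9]{8}$", // regex: two letters then eight digits
        minLength: 10,
        maxLength: 10,
      },
    },
  },
};
// A record rejected at ingestion is far cheaper than a later project to
// scrub bad data out of production.
```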
Now, the best advice of all.
Remember I said 30 years, a little bit more? ER diagrams were invented before I started working in computing and they still work...
And they're actually more important for AEP than they are, I think, for relational, and I worked in relational a very long time. It's a forest-for-the-trees problem. When you're sitting on one page working on a schema, it's really hard to understand how it might relate to the rest of your data. It's critical that you understand how it relates. We call this job the data architect. If you are the data architect, where is your ER diagram? If you want a meeting with me so I can help you out of a problem, I'm going to ask to see your ER diagram, and it had better not be a one-time thing, like, "Oh, we created it, but then we never updated it again." This is your Bible. This is your social contract with the data engineer who's going to bring data into your platform, as well as the marketing team that is going to use this data to create their audience definitions. It sits at the center of those three jobs.
Now it's a little bit of work. You got to find a tool.
I've got a coworker who's got a very cool tool, a POC right now, where he'll draw a diagram from whatever sandbox you point it at.
I am geeking out so hard on this thing. I've just been going, hey, let's do this one, let's do this one, and these diagrams pop up. What's interesting is, with permission, we've done this for a few clients, and from the diagram alone we saw mistakes in their data modeling. (Oh, and they liked it.)

We didn't even have to look at the schemas, and I had already looked at the schemas manually. I had done a manual inspection of their entire sandbox; I'd spent six hours reviewing things like data models. Sebastian pulls up his tool, pulls up the chart, looks, and goes, "That looks wrong." Like, "Dude, where were you a month ago?" I can't wait to see this in the product in some form. Very excited about this idea. Remember the three classes. Individual traits: first name, last name, birth date, loyalty points balance. For no reason other than that somebody picked these colors once upon a time, we've been using them on almost every project I've been in. Green: these are person traits, things you can edit and update. Yellow we use for events: once you load them, that's it, immutable, you can't edit them. And then lookup data. It's useful. It's context. It helps you with questions like...
Did someone buy a particular brand of cola, versus all the different SKUs you'd have to list in the audience definition? You'd rather just say cola brand A. So I can't emphasize this enough. This thing is gold, and it's really important, especially over time. Think of the people who've worked on your project over time. They've come, they've joined, they've been on the team, they've moved on.
This is your continuity, this is your context. This is what's going to let someone else, six months, a year, a year and a half down the road, think: what's the next thing we want to put in AEP without breaking what we've already got? Gold.
Data ingestion, and we're going to speed up. Two choices. There are lots of other choices, but two main choices: are you going to batch it, or are you going to stream it? And if I'm going to stream it, am I going to do it straight into the hub, or am I going to do it via the Edge? Rule of thumb: people traits don't change that much, so go get your nightly extract ETL from your CRM system and load them in as batch. Events: don't buffer those, don't try to gather them up and then batch them into the system, just let them rip. You've got a website, man: Web SDK, Mobile SDK, that stuff streams right in. Best practice. The only time we do a hybrid mix is if we want to do point in time and we have some history we want to load in as batch. Be careful with the size of your batches. Don't sit there and load nine months of data and think that's going to go well if you do it in one go. Chunk it up if you have to. But again, it's a rule of thumb. Are there exceptions? Sure. But be prepared to discuss them, be prepared to defend them.
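For the streaming side, a minimal sketch using the Web SDK's alloy commands. The datastream ID and org ID are placeholders, and the configure option name has shifted between edgeConfigId and datastreamId across releases, so check your SDK version.

```typescript
// Sketch: stream an event the moment it happens via the Web SDK.
// Assumes the alloy base snippet is already on the page; IDs are placeholders.
declare function alloy(command: string, options?: object): Promise<unknown>;

await alloy("configure", {
  datastreamId: "00000000-0000-0000-0000-000000000000", // placeholder
  orgId: "PLACEHOLDER@AdobeOrg",                        // placeholder IMS org
});

// Events are immutable time-series data: no buffering, no batching.
await alloy("sendEvent", {
  xdm: {
    eventType: "commerce.productListAdds", // a cart add
    commerce: { productListAdds: { value: 1 } },
  },
});
```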
Profile is expecting updates for changed profiles. It is not expecting to load 235 million profiles every day when 95% of them did not change from the day before. You can. That takes about two or three hours to ingest into the profile service. Do you know what else it's doing while that happens? Nothing. Absolutely nothing. It is focused: it's going to load this data, and it will load nothing else until this is done. It's really best practice to give us deltas. If you're struggling with extracting only deltas, talk to me. We've got some solutions we can offer for delta-only if you can't make that work in your ETL.
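If your ETL can't produce deltas natively, the idea is simple enough to sketch: fingerprint yesterday's extract and ship only records whose content changed. This is a hypothetical illustration, not an Adobe utility; the types and the hashing choice are assumptions.

```typescript
// Sketch: deltas only. Compare today's CRM extract against yesterday's
// fingerprints and keep just the new or changed profiles.
import { createHash } from "node:crypto";

type CrmRecord = { memberId: string } & Record<string, unknown>;

const fingerprint = (r: CrmRecord): string =>
  createHash("sha256").update(JSON.stringify(r)).digest("hex");

function deltasOnly(
  today: CrmRecord[],
  yesterday: Map<string, string> // memberId -> fingerprint from the prior run
): CrmRecord[] {
  return today.filter((r) => yesterday.get(r.memberId) !== fingerprint(r));
}
// Batch-load the result instead of all 235 million rows; profile service
// then spends its time only on records that actually changed.
```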
Audiences.
We're going to keep giving you choices around audiences. At this conference, you heard about more choices around audiences. We all start with the one on the right: I'm going to just go open the segment builder, drop some traits in, drop some events in, and define my audience that way. You can compose audiences too, right? This is part of AEP. This is part of how you can define an audience, and it's based on the data that you've loaded into the profile service. But what if you don't have all that data in AEP, and shouldn't have it in AEP? What if most of that data is in your warehouse and it's just too big to move? All right: go make some audience decisions in your Snowflake or other data warehouse and just give us the audience, and we'll run with it. We'll activate your audience. You don't have to bring all the data if that's not practical; just schedule some imports of audiences and we'll take it from there.
What if you can't? What if you just don't have the data anywhere to make those kinds of decisions? There are publishers that can help you out, and that was discussed at this conference as well. You can partner with suppliers of audience data and leverage it in combination with your own data. You can improve the accuracy of your audiences using someone else's data.
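To ground the first of those options, here is a hedged sketch of an audience expressed as a segment-definition payload against profile data, the API counterpart of the segment builder. The PQL expression is illustrative; validate any real expression in a development sandbox first.

```typescript
// Sketch: a traits-plus-events audience as a segment definition payload.
// The PQL text is illustrative, not production-tested.
const segmentDefinition = {
  name: "Cart abandoners, last 7 days",
  schema: { name: "_xdm.context.profile" },
  expression: {
    type: "PQL",
    format: "pql/text",
    value:
      'select e from xEvent e where e.eventType = "commerce.productListAdds" ' +
      "and e.timestamp occurs <= 7 days before now",
  },
};
// POST to /data/core/ups/segment/definitions to create it; activation then
// works the same whether the audience was built here or imported from a warehouse.
```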
Go learn your audiences. That's your homework.
Look at that list. It's all documented. You won't learn it overnight. You probably won't even learn it over six months, but knock them out one by one. You've got a development sandbox, go experiment. Learn what they do. They have features. They have benefits. I've got tools at home. I've got chisels and screwdrivers and wrenches and power tools. Love power tools. I try not to use my chisel as a screwdriver or worse, use my screwdriver as a chisel. You have different tools. They all come with different constraints.
Data ingestion has constraints on volume.
These audiences have constraints on the quantity and the number of records. So your responsibility in learning these tools includes learning how to use them responsibly, because when you don't, things break, and then you end up in meetings with me, and I'll try to get you out of the ditch. It's probably less painful just not to go into the ditch in the first place.
All right.
So covered a lot of ground here.
We are all about best practice. We are investing in more on-demand training. We are investing in ER diagramming tools. We are investing in inspection and more monitoring. Watch this space.
How it gets into the product, we're still figuring out, but the key is that we see some opportunity. We like the idea of a safety net as you work, so it'll be a combination of teaching, training, advising, and AI Assistant.
You can go to any page in the UI and ask any question about anything I've talked about. It's really good at reading documentation. Better than me, and I've been doing this for six years. I'm teaching myself to ask it instead of going searching for the answer. It's quick, it's right, and it gives me the link I would have probably found, or maybe not. So it's your friend, and if you can't use AI Assistant yet because you haven't clicked through whatever the license thing is, get on it. It's incredibly useful. All right, so you've heard best practices around goals and approaching an implementation, and you've heard best practices for implementation. Joel is going to come up and talk about how Adobe has actually implemented all of this for its own use of AEP.
[Joel Huff] Thanks, Kevin. All right.
So this is me. I've had the pleasure of working with Adobe as a customer of AEP since the very beginning; Adobe was actually customer zero before AEP was live to external customers. That's now grown to where I'm working with more than 70 different teams around the organization, all trying to collaborate and coordinate on a single unified profile. So we've learned a lot along the way, and I'm excited to share some of that with you.
To level set on who I'm talking about here: our internal customer, the big one, is Creative Cloud, right? These are household names like Photoshop, Acrobat, Illustrator, products maybe some of you have used or are familiar with. And it's a big business; I think I heard David Wadhwani say it's a $17 billion business on the stage this week. So it's a very large enterprise: big business, thousands of users, and a lot of complexity, especially when they're trying to bring all of that together into a common unified profile.
In terms of how they engage customers, it's a lot of what you hear at this conference and I think what a lot of you probably do in your own organizations.
With marketing-led growth, what we're talking about is how to get an ad into paid media or send email out to drive traffic to the website, and then ecommerce on the website, adobe.com, driving the subscription business. And from there, once users have the products, the products themselves become a surface for driving engagement. Photoshop itself, I think, has more than 15 different surfaces in the product that can be personalized to drive engagement. When I first started working with this group as a customer, each of these different parts of the organization had its own AEP instance, right? It makes things a little easier: you can do your own thing and don't have to worry about what the other teams are doing, but you're not unifying a profile. Fast forward to about three years ago and through today: now Adobe is really focused on what they call experience-led growth. And this is combining marketing, sales, and product-led growth all in a common implementation, in a single sandbox, on AEP.
The benefit of that is driving a single view of the customer across the full customer lifecycle, so you can create a better experience for the user and also drive better outcomes for the business. Sounds great, but then there's some complexity, right? We deal with 50 different product teams, more than 70 teams in total, all trying to collaborate and agree on how the core principles are going to be designed and to live up to all the standards Kevin was laying out. There's a lot to that.
So along the way to get to that implementation, we needed to come together on a run and operate strategy of how we were going to drive this successfully.
First and foremost, we wanted to figure out how to drive and measure adoption, because we had all these different teams. We needed to know which use cases were most important to support, and then to have a repeatable pattern so that each app team could look at what the team before it had done and use that pattern to accelerate its adoption.
From there, we needed to make sure we had a reliable and healthy environment so that they could rely on it for their business. When this is integrated into the app experience, it's got to work all the time. We needed to be able to measure and monitor that with our ops team.
Finally, we had the need to measure ROI, right? Sunish started this session talking about how important that is. At Adobe, we wanted to make sure we were not only measuring the value we were driving but also the balance of entitlements and cost being used. When you're dealing with large daily volumes, you want to make sure you're staying within the guardrails Kevin was describing. So we needed to make sure the volume driven by Photoshop was the appropriate percentage of the total capacity, so that Acrobat would have enough volume for what it needed to do.
So beyond just the technology, it's the people and process that were needed to bring all that together. And so we had an operating model to run this at scale.
As Sunish was saying, everything should start with a use case. And so as we looked at onboarding all these different teams into this common environment, we wanted to make sure we had a menu of use cases that we knew were well supported and could work to drive the business. A specific example we have is very much aligned with the example Sunish was sharing; Adobe isn't that different from what a lot of our external customers are doing. Acrobat Mobile is a really big-volume business in terms of number of users. Hundreds of millions of people have Acrobat Reader on their phones. A subset of them have chosen to pay for the subscription that gives them special features.
So what the Acrobat team really wanted to be able to do was, when someone had made that choice, that subscription choice, and signed up, very quickly direct them to features where they know from data science modeling that when users use the feature in the first month, they are much more likely to renew their subscription.
So by having those insights in the profile, we're able to create an audience, as Kevin was laying out, using all those options, combining the behavioral data and the subscription data. So within that first week after someone signs up, we're making sure they've used that feature and had the aha moment that's going to drive retention and renewal.
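The targeting logic behind that is easy to sketch in the same segment-definition shape shown earlier. The field names here (_tenant.subscription, the featureUsed event type) are hypothetical stand-ins, not Adobe's actual schema:

```typescript
// Sketch: new subscribers in their first week who haven't hit the aha feature.
// All tenant field names below are hypothetical placeholders.
const onboardingNudgeAudience = {
  name: "New subscribers missing the aha feature, first 7 days",
  schema: { name: "_xdm.context.profile" },
  expression: {
    type: "PQL",
    format: "pql/text",
    value:
      "_tenant.subscription.startDate occurs <= 7 days before now " +
      'and not(select e from xEvent e where e.eventType = "_tenant.featureUsed")',
  },
};
// A journey in AJO can then listen on this audience and send the
// feature-discovery message.
```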
And a big part of what Adobe does using the tools in AJO is also A/B test the experience. So not only are they setting up a journey they think will work, but they're trying different treatments of what the user will see to drive that engagement, so users click through on that message and then find the feature that drives retention.
They measure all that of course using CJA, our analysis platform on AEP.
And then it's not enough just to have this use case and have the implementation done, but we need an operating model that supports it at scale. Because this is not just the single app that's running in this environment. We have dozens of them that need to run in harmony with one another so that the system stays healthy for each of the tenants in the system.
So as we got ready to run this at scale, we needed a way to take all the best practices Kevin was laying out and make them a repeatable pattern, so that as each team came live in the system, we could ensure that team wasn't going to destabilize the environment for anyone else. We all know everyone's under pressure to move, move, move and get new implementations out into the wild. But we needed a speed bump to slow the train down a little, so that we were sure that when Acrobat went out, it wasn't going to destabilize the environment for Photoshop.
So what we did in creating this release readiness checklist was take a lot of the principles Sunish was laying out for the use case and make sure it was documented in a way that not only the implementation team could understand but also the ops team. We made sure we understood that the testing had been done, that the dependencies had been documented, and that we knew who the important stakeholders were that needed to be aware of the health of the implementation over time.
The next section we had was a set of checklists for the volume of data flowing into the system. We deal with very high volumes of Adobe's data; we're constantly getting close to guardrails, and so we really need to make sure, when a new use case is coming live, that it isn't going to compromise the integrity of the system for another. Do we need to economize another team's implementation to make room for that next new use case?
So we carefully review all the guardrails to make sure each new implementation is going to be where it needs to be.
And then finally for our support and operations teams, we needed to know that they have that information at their fingertips. So if there ever is an issue, they're ready to take action. They have the insights, they can troubleshoot on their own, and they can know who to bring in to an escalation if something comes up. So our runbook templates have become a really big part of the process for us.
Now, in addition to that go-live review, a key thing for us is a dev, stage, prod review process as well.
We want to review each new implementation, especially if it's using profile and AJO, because, as Kevin and Sunish were both saying, once the profile schema is in there, you can't edit it without resetting the sandbox. So we really want to make sure each new implementation, as it goes into profile, is carefully reviewed against those best practices Kevin was laying out.
So we do this for the dev-to-stage promotion and again for stage-to-prod. That way we make sure stage is very stable for all the different teams to rely on, and production is always stable for the marketers.
That's a lot of process, and process tends to slow things down. So the pressure is on our team to figure out tooling strategies to automate more of the process, so we can move quickly through those review steps, the measurement against guardrails, and the health checks on the system. We've broken our tooling strategy down into these buckets. Right now these are tools we're really focused on internally for Adobe, but we think there may be value in these types of tools for external customers as well. In terms of data governance, we're making sure we're checking for data quality as well as schema integrity across environments. We don't want a team adding schemas into prod that haven't already been through dev and stage, for example. And we want to make sure people are reusing the schemas that have already been built rather than inventing another attribute that has basically the same meaning.
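As one concrete example of that governance check, here is a hedged sketch that lists tenant schemas in two sandboxes through the Schema Registry API and flags anything in prod that never appeared in stage. The endpoint and headers follow the published API; credentials and sandbox names are placeholders.

```typescript
// Sketch: flag schemas that reached prod without going through stage.
declare const ACCESS_TOKEN: string, API_KEY: string, IMS_ORG_ID: string;

async function listSchemaTitles(sandbox: string): Promise<Set<string>> {
  const res = await fetch(
    "https://platform.adobe.io/data/foundation/schemaregistry/tenant/schemas",
    {
      headers: {
        Authorization: `Bearer ${ACCESS_TOKEN}`,
        "x-api-key": API_KEY,
        "x-gw-ims-org-id": IMS_ORG_ID,
        "x-sandbox-name": sandbox,                   // sandbox selector
        Accept: "application/vnd.adobe.xed-id+json", // titles and IDs only
      },
    }
  );
  const body = (await res.json()) as { results: Array<{ title: string }> };
  return new Set(body.results.map((s) => s.title));
}

const stage = await listSchemaTitles("stage");
const prod = await listSchemaTitles("prod");
const skippedReview = [...prod].filter((title) => !stage.has(title));
// Anything in skippedReview landed in prod without a stage review.
```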
In the monitoring category, we needed to make sure that all those guardrails are checked in production, that we're constantly staying under the guardrails to keep the system healthy. And we're looking at ways to monitor a use case in the context of a schedule of dependencies across the different services on AEP.
For ROI measurement, we're looking at the adoption metrics for each of these different teams, as well as how efficient we are in driving engagement and the actual revenue associated with the campaigns.
And finally, we have access governance as well, with so many thousands of users across the company needing access to the system. We need to make sure that only the key people who need it have write access to the production environment, for example. Not everybody should be able to create an audience; most users are consumers of audiences rather than builders of audiences. So we audit that over time to make sure the right people have the right access.
Beyond that, we have a variety of success factors that we're driving for the program. We're looking at adoption and usage metrics, making sure we're finding those reusable patterns, because we have so many teams we're trying to onboard. We want a way to measure that they're using repeatable use cases to get scale for the business.
We're also measuring value in terms of the number of surfaces that we're tracking and enabling and the revenue engagement metrics against them.
And lastly, scale and reliability: we're making sure we can monitor what we need to, that we have the runbooks to support issues that come up, and that we're measuring any production impact from an outage and how quickly we're able to resolve issues.
All right. So that's a quick tour of what we're doing with Adobe. And to conclude our talk, we wanted to just review what you heard in this talk today and have a chance for some questions.
Sunish started with use cases and just how important they are to ground your plan, making sure you have clear business goals, KPIs you can measure, and an architecture designed to support them.
Kevin then took us through designing the implementation with those guardrails and best practices in mind, right? So you're not driving into the ditch; you're using the right tool for the right job.
And then I concluded with the talk about what we're doing at Adobe and setting up those run-and-operate teams for success. Nothing worse than the ops team finding out there was a go-live and they didn't even know it happened, right? Some takeaways we wanted to leave you with.
First and foremost, I'd say technology is an implementation of a business strategy, not the reverse. Have that use case in mind, and have executive sponsorship so that you have the investment level you need to do the right thing...
And that'll help you stay committed to a target architecture where those best practices can be achieved.
In terms of going from implementation to run and operate, you really do need clear roles and responsibilities in the organization to get scale.
And plan for an incremental roadmap, right? I think both Sunish and Kevin talked to this: don't design for something you don't have a use case for. Don't load data you don't know what you're going to do with yet. Have an incremental plan where each step along the way has a use case you can measure, so you can measure success as you grow.
So remember, do your surveys. Yes, do the survey. And thank you all for coming; this is one of the last sessions before the end of the whole thing. I really appreciate it.
[Music]