[Michael Broom] My name is Michael Broom. I'm a Technical Consultant at Adobe. I've been here for close to three years now, and I've focused a lot on AI and ML in RTCDP and how it helps customers with their use cases. [Keshav Vadrevu] Hello, everyone. I'm Keshav Vadrevu. I'm a Product Lead for Data and Predictive AI in AEP. [Michael Broom] All right. So we're going to break this up into three sections today. First, we'll focus on our approach at Adobe with AI, then we'll jump into the specific capabilities in RTCDP, and then we'll step a little outside to show some other AI/ML applications across AEP. Today I'd like to impart three key takeaways. As you're leaving, as you're catching your flight, here are the three things I really want you to take from this session. First, you can use Adobe's natively built AI to do the heavy lifting and create powerful models across Experience Platform to better inform your marketing activities. Second, you can use the rich behavioral data in AEP datasets with your own AI/ML tools outside of the platform as well, and we'll jump into that. And third, you can quantify the incremental impact of all marketing activities across business and campaign goals to effectively forecast, plan, and optimize your return on marketing investments. So those are the three notes to take today. All right. So how are we approaching AI? As you've seen throughout Summit so far, AI is a big deal, and it's one of our main focuses right now. For more than a decade, AI has been a major commitment at Adobe. We've been investing heavily in it, delivering hundreds of intelligent capabilities through Adobe Sensei to enable customers to work and collaborate in new ways. So first of all, we have the natively integrated AI features working within the Experience Cloud applications.
Next, we have AI as a service. So think about the shared applications and services across Adobe Experience Platform, such as the predictive modeling we'll go over today. Then we have Sensei GenAI. I'm sure many of you have seen the new GenAI Assistant within Experience Platform that's coming out; it's also available in other applications as well. And last, we have Firefly, the new ability to create generative images and more, which you can use across Adobe Experience Platform and elsewhere. So that's how Adobe has been approaching this. But why does it all matter to you? Let's chat about how it impacts you and your customers. AI gives businesses the insights and tools to personalize customer experiences, optimize touchpoints, and drive customer engagement and loyalty, so it's incredibly important across the customer journey. First, personalization: it enables you to create personalized experiences for your customers. You analyze the customer data, you identify patterns, and you create personalized journeys to get in front of the right customers. Next, optimization: you optimize the touchpoints, you test different approaches, and you determine the most effective strategies for your customer journey through these AI services. And lastly, it's about driving engagement: anticipating behaviors, addressing issues, and improving the customer experience with our insight tools.
All right. So next we're going to jump into the specific capabilities in RTCDP, and we're going to walk through a specific customer use case. So I'll pass it over to Keshav. [Keshav Vadrevu] Thank you, Mike. It is a fascinating technology, there is no doubt. But the real question is, how does it help us? So we're going to take the example of LUMA, a retailer with online ecommerce stores as well as physical stores, and we're going to follow along with LUMA to see how the various technologies we have within AEP can help them. Before we go into the details, let's first look at LUMA's personalization goals. They have three primary goals. One, they want to engage with their customers based on their online behavior. They're currently only leveraging the data from their stores and other offline data, so they want to extend that and leverage the online behavioral data as well. Two, they want to identify who their high-value customers are and personalize for them, so those customers get special treatment. We all love special treatment when we're high-value customers of certain brands, right? And on top of all of this, they also want to increase their total addressable audience size. They have some base audience today, and they're looking for ways to increase that audience size so they can do subsequent acquisition and more. Now, before I go into the details of LUMA, I'm going to take a moment to talk about Adobe's AI/ML initiatives, and then I'll come back to LUMA and map how these apply to LUMA's use cases. When you look at Adobe's AI/ML initiatives in Adobe Experience Platform, we have all the Experience Platform capabilities and the apps, which is what you see at the bottom layer.
On top of that, we're building the generative AI capabilities so they're available across the platform and across all the apps built on top of it. So we're looking at generative AI as a foundational component. Now, above that, our AI/ML initiatives can be categorized in two ways. One is for the set of customers who just want to take what's inside the product, use it, and get up and running quickly. They don't care about the intricacies of how the ML works; they want to get value out of it immediately. That's where we have done-for-you, prebuilt ML models, and this is where Customer AI, lookalike audiences, and others come into play. But there is also a need for customers to bring their own models, their own knowledge of the AI/ML space, and their own data science communities, and connect those to the AEP ecosystem. That's the second part we'll talk about today: AI/ML feature pipelines, which allow customers to keep using their existing models while plugging AEP's behavioral and other data into them. Now let's go back to LUMA for a second. For the personalization goals LUMA has, they've worked with Adobe, they understand the capabilities we have, and they came up with this roadmap for 2024. First, they want to start small, so they're going to build propensity scores. They'll leverage Customer AI to get those propensity scores, which gives them a jump start on adopting AI/ML technologies, because their end goal is to personalize for their customers. So that's where they're starting: propensity to buy.
After that, they want to expand their existing audience, so they'll leverage the lookalike audiences we have in Experience Platform. And their final goal for the year is to hyper-personalize. This is where they'll bring in their existing in-house models and connect them to the behavioral data. So they're taking their goals, mapping them to Adobe's capabilities, and running with it. Let me hand it over to Mike to talk about Customer AI and how it applies to propensities. [Michael Broom] Awesome. So the first stage of that personalization journey was building propensity models, and that's done through Customer AI within RTCDP. I'm not sure who's worked with it before, but it's an awesome tool. It's actually my favorite; people internally at Adobe are sick of hearing me talk about it, because it's been working so well with so many of my customers. Essentially, it's our propensity modeling solution available in RTCDP. You can find the right profiles based on their propensity to buy products, propensity to churn, things like that. And it's very simple to use, very turnkey. Among its benefits, it delivers high-accuracy propensity models; like I said, I've had great results with it. The other really cool thing is that it shows you influential factors, and I'll walk through this in a demo: it shows you a bit of the why behind the model, not just the scores. And probably the coolest part is that these scores can be added to profiles. You can enrich profiles with them, use them in audience building, and then send the audiences out to your destinations. So I think it's awesome. But let me jump to the next slide and show the flow of it. As I said, it's a very turnkey, very easy-to-use service.
It uses a survival model to estimate the probability of the event occurring, the conversion event you choose, and it's also powered by boosted trees. It's a five-step process. First, you create your model: you figure out what you want to use based on your data and bring those datasets in. Then you configure it, and you train it. For training, we say it can take up to 24 hours, but it usually takes an hour or two; it's really quick and very easy to use. Once it's created, you can look at the output and decide whether you want to use it. Then, as I said, those scores are added to customer profiles as enrichment, which means you can use them in your audience building and send them off to your destinations. And these datasets can be shared across Adobe Experience Platform; for example, you can share the Customer AI scoring datasets with CJA if you'd like to use them for further analytics. So there are a lot of uses for it. Here's a LUMA example. As we talked about, their first step was creating propensity models for their online ecommerce customers. For this specific model, my conversion event was essentially the propensity to purchase an online product in the next 30 days. And here's the output. It's broken up into three key tabs; I think you can see it okay from there. The first tab is all about the latest scores. All the profiles in the datasets you chose are given a score between 1 and 99. For a conversion model, 1 means a very low propensity to buy a product, for example, and 99 means they're probably going to buy. So you can see the layout of the scores. And what's really cool is it breaks them up into low, medium, and high buckets, which makes them easier to digest and understand.
So it's easier to just say low propensity, medium propensity, high propensity, all based on the scores: low propensity is scores of 24 and below, medium is 25 to 74, and high propensity is 75 and above. Below that, as I said before, you have your influential factors. These show the top 10 influential factors for each of these buckets, giving you a bit of the why behind them: why are these customers high propensity? It takes the events from the datasets you chose and finds the events that are most influential toward being a high-propensity purchaser, or a low-propensity one. If you hover over them, you can see the specifics, so you can get a better idea of, say, how often high-propensity people visit the website or how often they buy something. It just depends on what the model deems influential, so it's pretty neat. The next tab is historical performance. This is really cool because it shows you your conversion rates. Based on your conversion event, which here is making an online purchase, it gives you the actual conversion rates over time as the model trains. So if you have it training on a weekly basis, every week it shows you how many profiles from each of these buckets actually converted. And once it's been running for a few weeks, it starts to predict the expected conversion rate over the next three to four weeks. So this will change, and it gets more accurate and smarter over time as it trains. And lastly, you have your model evaluation tab. This gives you more information about how strong your model is, broken up into a lift chart, a gains chart, and area under the curve.
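To make the bucketing concrete, here is a minimal sketch of the score-to-bucket mapping just described (24 and below is low, 25 to 74 is medium, 75 and above is high). This is an illustration of the thresholds from the talk, not Customer AI's actual implementation; the function name is our own.

```python
# Illustrative sketch (not Customer AI's actual code) of the bucket
# thresholds described above: <=24 low, 25-74 medium, >=75 high.

def propensity_bucket(score: int) -> str:
    """Map a 1-99 Customer AI propensity score onto its bucket."""
    if not 1 <= score <= 99:
        raise ValueError("Customer AI scores range from 1 to 99")
    if score <= 24:
        return "low"
    if score <= 74:
        return "medium"
    return "high"

# Example: three profiles with their latest scores.
buckets = {pid: propensity_bucket(s)
           for pid, s in {"p1": 12, "p2": 50, "p3": 91}.items()}
```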
Your lift chart measures the improvement from using this predictive model over random targeting, and it's broken up into deciles. If you hover over it, you can see the expected conversion rate for, say, the top 10% decile, so scores of 90 and above, for example. Your gains chart shows the cumulative gains from using these propensity models. Again, it's broken up into deciles, so hovering over it you can see things like: targeting customers with a score of 90 and above will capture 91% of conversions. And you can move along the chart and explore. This is all just demo data, so if it looks a little funky, that's why. Area under the curve measures the overall strength of the model; this one is 0.97, and the closer you are to 1, the stronger your model is. This one shows warning details just because it's a demo sandbox, so don't worry about that. But I'd aim for a model over 0.75, for example, which usually indicates a stronger model.
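The decile-based lift and cumulative-gains numbers in the evaluation tab can be reconstructed from scored profiles in a few lines. This is a pure-Python illustration on synthetic data; Customer AI computes these charts for you, and this is not its implementation.

```python
# Illustrative reconstruction of decile lift and cumulative gains.
# Scores and conversion flags are synthetic.

def decile_lift_and_gains(scored):
    """scored: (score, converted) pairs. Returns per-decile lift over
    random targeting and cumulative gains (share of conversions captured)."""
    ranked = sorted(scored, key=lambda x: x[0], reverse=True)
    per_decile = len(ranked) // 10
    total_conv = sum(c for _, c in ranked)
    overall_rate = total_conv / len(ranked)
    lifts, gains, seen = [], [], 0
    for d in range(10):
        chunk = ranked[d * per_decile:(d + 1) * per_decile]
        conv = sum(c for _, c in chunk)
        seen += conv
        lifts.append((conv / len(chunk)) / overall_rate)  # vs. random targeting
        gains.append(seen / total_conv)                   # cumulative capture
    return lifts, gains

# 100 synthetic profiles: only scores above 70 convert.
lifts, gains = decile_lift_and_gains([(s, 1 if s > 70 else 0)
                                      for s in range(1, 101)])
```

With this toy data the top decile converts at 3.3x the base rate, and the top three deciles capture all conversions, which is the kind of reading the hover tooltips give you.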
So as mentioned earlier, what's really cool about Customer AI is that you can then use it within your audience builder. When you enable it for profile, these scores are added as attributes to your profiles. So you can simply go to the audience builder and find them, or, within the output you saw back here, you can click on create segment and it will take you right to the audience builder with the scores for that specific bucket pre-filled. So this would be low propensity, for example.
Yeah, so less than 25; it automatically adds that in there for you. And then, if you'd like, you can start layering on other attributes or events as you please. This essentially strengthens your audiences, is how I like to see it. So you can add in loyalty tier, for example: maybe I only want to target gold members who have a low propensity. So, yeah, that's Customer AI; as you can tell, I'm pretty excited about it.
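The audience rule just described, low-propensity gold members, boils down to a simple attribute filter once the scores are on the profile. A hedged sketch, where the attribute names are illustrative stand-ins rather than real XDM field paths:

```python
# Hypothetical profile records with a Customer AI score already attached.
profiles = [
    {"id": "p1", "propensityScore": 12, "loyaltyTier": "gold"},
    {"id": "p2", "propensityScore": 18, "loyaltyTier": "silver"},
    {"id": "p3", "propensityScore": 80, "loyaltyTier": "gold"},
]

# Low propensity (score < 25) AND gold loyalty tier, e.g. for a
# win-back campaign aimed at valuable but at-risk customers.
audience = [p["id"] for p in profiles
            if p["propensityScore"] < 25 and p["loyaltyTier"] == "gold"]
# audience -> ["p1"]
```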
All right, I'll kick it back to you. [Keshav Vadrevu] Thank you. Before I jump into lookalike audiences: I'm sure a lot of you have questions on Customer AI, but please hold off on those. We'll come back to questions at the end, because we have a lot of content to cover. So let me switch gears and talk a little bit about lookalike audiences. We looked at LUMA's problem at the beginning, and what we're trying to do here is allow customers to take a base audience they have, go across their entire profile store, and see what other profiles are available that are very similar to the selected base audience. That's basically what lookalike audiences do. The end goal is audience expansion, but the way it gets there is by leveraging AI/ML, so that the entire process can be, one, automated, and two, able to learn over time and get better and better at identifying similar profiles. It also honors all the trust and governance settings you have. If you have any profile attributes marked as not to be used for data science, lookalike audiences honor that. Similarly, any audiences you don't want leveraged for data science will not be eligible for lookalike audiences. So it honors all the trust settings. Now let's take a quick look at the terminology, so you can follow along when I do the demo. First, we have what we call a base audience. This is an audience you've created in the CDP, within the Adobe Experience Platform canvas. It can be any audience, with no limitation as such, except that it cannot be an externally ingested audience; it has to be something you created in the CDP. Besides this, we also have what we call a lookalike model.
This is the model that comes out of the box and is available for you. You don't generally interact with the model directly; it's just there. But it's not looking at your data and creating audiences by default. The way it creates audiences brings us to the next term, lookalike audiences. What happens is, in your environment, the lookalike model looks at your profiles, creates what we call influential factors, and tries to figure out the similarity across various profiles. But it does not create a lookalike audience until you explicitly ask it to. When you specify how you want the lookalike audience to be created, it manifests into an audience of its own, and that becomes the lookalike audience. Finally, we have the total addressable audience size, which is the total audience within your CDP system, derived from your base audiences and lookalike audiences over the last 30 days. All right, let me quickly switch to a demo and walk you through lookalike audiences. I'm going to go back into the CDP. You'll notice I have audiences, and among them a base audience called loyal diamond users. These are LUMA's loyal customers, all of the diamond tier. When I open this audience, you can see the audience size is about 11k, and you can see sample profiles. This is all just existing profile and audience information. But within this audience screen, you will now start seeing these lookalike insights. This is something we rolled out just a couple of weeks ago, so you may not have seen it previously, but you will start seeing it in your environments now.
If you go into the lookalike insights, what it does is look at your existing profile data and build you a similarity-versus-reach graph. Now, this is all synthetic data, which is why you see sharp corners in this graph; in the real world you would see a smoother curve. What this graph shows you is: if you want to build a lookalike audience that is x percent similar to your base audience, what will its reach be? As you move along the similarity axis and ask for a lookalike audience that is more similar to your base audience, the reach usually decreases; that's generally how it works, and that's what you see in this graph. You can pick the tradeoff and the kind of reach you'll get. For example, if I go with a similarity of 15%, you can see in the tooltip that the profile count is approximately 44,000, which means a lookalike audience with 15% similarity would reach about 44,000 profiles. Right below this, you'll also see the top influential factors. You've seen this in Customer AI, and lookalike audiences use a similar concept: it looks at all the profile attributes and figures out which influential factors affect this similarity and reach. You can see certain profile attributes are given high importance when the lookalike audience is built, some have medium importance, and some are low-influence attributes. It takes all of these attributes into account, each with a different weight, as it creates the lookalike audience. Now, the creation process itself is extremely simple.
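The similarity-versus-reach tradeoff in that graph can be sketched simply: reach at a threshold is the count of profiles whose similarity to the base audience meets it. This is an illustration on synthetic similarity values, not how the lookalike model actually scores profiles.

```python
# Illustrative similarity-vs-reach tradeoff: reach at threshold t is
# the number of profiles at least t similar to the base audience.

def reach_at(similarities, threshold):
    """Count profiles at or above a similarity threshold (0-1 scale)."""
    return sum(1 for s in similarities if s >= threshold)

# Synthetic similarity scores, uniform 0.01 .. 1.00.
sims = [i / 100 for i in range(1, 101)]

# Raising the similarity bar shrinks the reach, as in the demo graph.
reach_loose = reach_at(sims, 0.15)   # 15% similar -> larger audience
reach_strict = reach_at(sims, 0.60)  # 60% similar -> smaller audience
```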
All you have to do is go to the lookalike insights tab and click on create lookalike audience, and you'll see the same graph so you can take one more look at the similarity percentage you want. You can adjust the percentage here; say I bring it down to about 20% or 19%, and you can see the profile count is approximately 45k. You can also give this lookalike audience a name so you can distinguish it from the base audiences you're creating. Then you just click create, and that's it; after some time it shows up here under lookalike audiences. Do note that depending on the amount of data you have, the similarity, and so on, it may take up to 24 hours for the lookalike audience to actually gather all its profiles, because it hasn't manifested yet; you've only just created it. In the interest of time, I'm going to switch over to a lookalike audience I created previously, which is what you see here, with 19% similarity. When I go in, it's very simple: you can see the name I gave it, the 19% similarity, and the sample profiles associated with this audience. From here on, you can use this audience like any other audience you've created; there is nothing really different about it. And you can create further audiences too: one at 19% similarity, another at, say, 30% similarity, and so on.
All right. So lookalike audiences are primarily oriented toward marketers, who can quickly take the technology we have available and quickly build audiences. Now I'm going to switch gears a little and talk about what we call AI/ML feature pipelines. This is a capability provided so customers can bring their own models, hosted in other ecosystems and platforms, and connect them to the AEP and CDP ecosystem. What you'll see here is that we integrate with most of the vendors out there: your model can reside in any of these environments, whether that's Databricks, DataRobot, AWS SageMaker, and so on, and that model can interact with AEP behavioral data. This is a huge deal, because what we hear from customers is that, more often than not, their offline models rely on offline and transactional data; they don't have access to behavioral data. This opens up access to the behavioral data for the first time, so those models can be augmented by the data in AEP, and I'll get into what I mean by the data, and what the limitations on it are, in a subsequent slide. But basically, you're getting access to the rich behavioral data within the CDP ecosystem for models that reside outside the CDP and AEP. Not only that, we also provide bidirectional access, which means as these models run in production, as they start scoring and inferring in real time, you can bring those scores and any other attributes coming out of the models back into the CDP, whether they're propensity scores or anything else, and load them as profile attributes so you can do activation on top of them. Once those propensity scores come into the CDP, we treat them the same as propensity scores generated by Customer AI or any other mechanism.
So we do not differentiate on where propensity scores come from. It provides a seamless way to take the behavioral data you have in the AEP and CDP ecosystem to wherever your models are, as well as to bring the AI/ML scores back into the CDP so you can perform activation with them. Now let me spend a moment on the use cases we're seeing customers leverage this for. What we've seen is that customers prefer a combination of AI/ML feature pipelines and done-for-you models like Customer AI and lookalike audiences. They're not choosing done-for-you versus bring-your-own; they're using a combination of both. To pick one use case from travel and hospitality: we spoke to a customer who is leveraging their in-house model, based purely on their offline and transactional data, to power personalized search results when someone comes to their website and searches for something, while using a combination of those in-house models with behavioral data from AEP and CDP, together with their other transactional data, to drive personalization for their newsletters. So it's always driven by the use case, and depending on the use case, you can use any combination of where your model lives and what type of data it has access to.
All right. Now let me take a moment to explain how this affects your usual ML journey and the various touchpoints AI/ML feature pipelines have with your ML models. Whenever you're building a model, there is usually a training phase, and during the training phase the most important aspect is model training, which requires data exploration. This is where AI/ML feature pipelines give you the ability to explore the data in AEP and CDP without having to extract it first: you can look at the data, query it, and analyze it. Your data scientists can browse through the data, figure out which data is relevant for the ML model, and then export only that relevant data out of AEP into the ecosystem of your ML models for training purposes. Then, in the scoring phase, you've trained your model, it's deployed and running in production, and you want seamless bidirectional data transfer with it. This is where you can export data using our destinations framework to your ML model, on whatever frequency you choose, and have the ML scores come back to the CDP so you can activate based on the scores being generated. And you can bring those scores back either in batch mode or in a real-time fashion if you're using Real-Time CDP. Now, I've been talking about all this access to data in the AEP and CDP ecosystem, but what type of data can these models actually access? It depends on the type of integration you have with your CDP and the type of data you have within it, but what you can get access to is web behavioral data and mobile behavioral data, which is very rich data that most models want to leverage.
In addition to that, you also get access to the merged profile union view, AJO inbound and outbound events, and any commerce data you have. We are not placing restrictions on the data that can be leveraged by the models: pretty much any data you have within the AEP-CDP ecosystem can be used. This opens up a lot of data for ML models that was not available previously.
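The bidirectional flow described above, features exported out through a destination, scored by an external model, and scores ingested back as profile attributes, can be simulated end to end in a few lines. Everything here is a hypothetical stand-in: the scoring logic, the field names, and the dictionaries standing in for the export and ingestion steps.

```python
# Self-contained simulation of the feature-pipeline round trip:
# (1) export behavioral features from AEP, (2) score them with an
# external model, (3) ingest the scores back as profile attributes.

def external_model_score(features: dict) -> int:
    """Hypothetical stand-in for a model hosted in Databricks, SageMaker, etc."""
    # e.g. more page views plus a recent cart event -> higher propensity
    score = features["pageViews"] * 10 + (40 if features["cartEvent"] else 0)
    return min(99, score)

# (1) Behavioral features exported from AEP via a destination (simulated).
exported = {
    "p1": {"pageViews": 2, "cartEvent": True},
    "p2": {"pageViews": 1, "cartEvent": False},
}

# (2) + (3) Score externally, then load back as profile attributes,
# where they are treated the same as Customer AI scores.
profiles = {pid: {"propensityScore": external_model_score(f)}
            for pid, f in exported.items()}
```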
Thank you. Now I'm going to spend a minute on the details, or the architecture, if you will, of how the system works, so you can follow along when I do the demo. I'm not going to write any code in the demo; I'll just use some notebooks and transport you into the world of data scientists, so you can see the power of how the data can be leveraged. What happens in the AI/ML ecosystem is the following. Within the AEP-CDP ecosystem, we have behavioral data, which is often available only within this ecosystem, and customers are often bringing in offline data, transactional data, CRM data, and other types of data so they can use it for activation and other purposes. So there is a mixture of all this data available within the ecosystem. You can take all of that data and do feature transformations as well as data discovery and exploration, and the new capability we're bringing with AI/ML feature pipelines is that you can use notebooks to query, browse, and explore this data. Data scientists love notebooks, right? And these notebooks do not have to be within the AEP ecosystem; they can be in whichever ecosystem your organization already has, whether that's Databricks, AWS SageMaker, or elsewhere. You can directly use notebooks in that ecosystem, point them toward AEP, and start browsing and exploring the data, which is a very significant capability for understanding all the rich data you have. Once you do that, the model itself resides outside of AEP, but we leverage the destinations framework to push the data out to your model using the cloud storage connectors, and then the ML scores come back into the CDP through a regular ingestion process.
So you can use any of the existing 100-plus batch or streaming connectors to bring the data back, and as the ML scores come in, we treat them as profile attributes, so you can use them across the AEP apps, the CDP, and elsewhere. Let me do a quick demo of how this works. I won't show you the end-to-end workflow, that would be way too complex, so I'm going to focus on data exploration, just to give you some visibility into how the data can be leveraged in a notebook while the data itself resides in AEP.
All right. I'm going to start with the following. There is a publicly available GitHub repository, which is also linked in our documentation, so you can go take a look. In it, we have published a set of sample notebooks you can use to explore on your own and try this out in your own environments. There are five notebooks, and I'm going to focus primarily on the first one, and a little on the second. In the first notebook, we show a sample of how you can connect to AEP: you can see on the left that we're connecting to AEP and creating a whole structure of schemas, datasets, and so on, which I'm not going to go through. Instead, I'm going to jump directly into data exploration for propensity models. Let's assume, in this LUMA example, we already have data in AEP, ingested through various batch or streaming source connectors and leveraged for CDP purposes. I'm going to show you how a data scientist can use a notebook to connect to it. This is where you'll notice we have published a framework called aepp, an AEP Python library that data scientists can leverage in their notebooks. You can see this notebook is connecting to the AEP ecosystem and querying the data. What we're doing here is creating a new session and querying for all the datasets, and these datasets can be queried by what's called a table name. If any of you are familiar with Data Distiller or Query Service, it's the same table name you can leverage here, so your data engineers and data scientists are working off the same source of truth.
Now as we query the datasets, if you want to just get access to the data, you will see that you get simple access to the tables in the notebooks. I'm running a query on a specific dataset, and I'm getting the data back in my notebook. So this notebook itself is, as you can see over here, in Databricks, but I'm querying against the CDP ecosystem. Now when there are complex fields like arrays and JSON objects, you can convert them to JSON and then you'll be able to query them as well. So it's not limited to traditional primitive data; it's available for complex data as well. Now I'm going to jump a little bit forward over here and show you how complex this can get. We're doing an email funnel analysis where, on the LUMA website, customers can come in, they can look at various products, and those who like the products can register for a newsletter. And I have a notebook over here that is doing an email funnel analysis to see how many folks are registering for this newsletter and what that funnel looks like. So here you can see the data is actually being queried from AEP, but we can build a significant amount of advanced graphs and charts to look at and understand this data. So I have a funnel over here that starts with the email-sent events coming from AEP, looks at how many of those correlate to email opens and then email clicks, and then finally finds out how many of them have actually subscribed to the newsletter. All of these are raw events sitting within AEP as experience event datasets, and we're just looking at those events and correlating them in the same notebook. So I'm not going to go into further details, but I'll show you one more chart that shows you the power of how this works.
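The funnel analysis described above can be sketched in a few lines of pandas. This uses synthetic events rather than a live AEP query; the event names and columns are illustrative assumptions, not the actual AEP experience-event schema.

```python
import pandas as pd

# Synthetic experience events mimicking the LUMA email funnel:
# p1 goes all the way to a subscription, p2 opens but never clicks,
# p3 never opens. Names are illustrative, not AEP's real schema.
events = pd.DataFrame({
    "profile_id": ["p1", "p1", "p1", "p1", "p2", "p2", "p3"],
    "event_type": ["email_sent", "email_open", "email_click",
                   "newsletter_subscribe",
                   "email_sent", "email_open",
                   "email_sent"],
})

# Count the distinct profiles reaching each stage of the funnel.
stages = ["email_sent", "email_open", "email_click", "newsletter_subscribe"]
funnel = {s: events.loc[events["event_type"] == s, "profile_id"].nunique()
          for s in stages}
for stage in stages:
    print(f"{stage:22s} {funnel[stage]}")
```

Counting distinct profiles per stage (rather than raw events) is what makes this a funnel: each stage's number can only shrink as you move down.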
So this is where you can see how each of the events we have correlates to other types of events: what is the first event a particular profile performed versus the last event they performed within a given time window, and we're looking at 30 days here. And we are correlating these events together to see, okay, how are people arriving at a newsletter subscription? Are they starting by filling out a form directly, or is there a correlation with something like an email, and so on? So if I look over here at a form fill-out as an example, there is a direct correlation with email opens, and then a direct correlation with emails sent, which shows me that most of the newsletter subscriptions are coming from the emails: people clicking on the emails and eventually coming to the subscription. And depending on the advanced data science teams that you have, this could go much deeper into however you want to explore and analyze the data. But the key takeaway over here is, all of this exploration is happening with your data still sitting within the AEP CDP ecosystem, so you're not moving your data back and forth across multiple ecosystems, right? You query your data, you understand your data, and you extract only what is needed. And you can see that this particular notebook, which I'm not going to go into in too much detail, will basically give you the ability to create queries and then create datasets as a result of those queries. Those datasets can be enabled through destinations for export, and you can take them to any cloud storage connector and any other ecosystem of your choice. Okay. All right. Now this is a bit of an advanced concept, so I'm going to pause over here. He's a smart one. All right, great. So we're going to wrap it up here and talk about some other AI capabilities across Adobe Experience Platform.
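The first-versus-last-event correlation described above can also be sketched with pandas on synthetic data. Again, the event names, columns, and 30-day window are illustrative assumptions standing in for the real AEP experience events.

```python
import pandas as pd

# Synthetic timestamped events for two profiles. We keep a 30-day window
# and pair each profile's first event with its last, mirroring the
# first-vs-last event correlation chart described above.
events = pd.DataFrame({
    "profile_id": ["p1", "p1", "p1", "p2", "p2"],
    "event_type": ["email_sent", "email_open", "newsletter_subscribe",
                   "form_filled", "newsletter_subscribe"],
    "ts": pd.to_datetime(["2024-03-01", "2024-03-03", "2024-03-10",
                          "2024-03-05", "2024-03-06"]),
})

# Restrict to the last 30 days and sort by time so "first"/"last"
# aggregations reflect the actual event order.
cutoff = events["ts"].max() - pd.Timedelta(days=30)
recent = events[events["ts"] >= cutoff].sort_values("ts")

# First and last event per profile, then count the (first, last) pairs
# to see which entry points lead to a subscription.
first_last = recent.groupby("profile_id")["event_type"].agg(["first", "last"])
transitions = first_last.value_counts()
print(transitions)
```

In this toy data, one subscription path starts from an email send and one from a direct form fill; on real data, the counts of these (first, last) pairs are what the chart in the notebook visualizes.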
So you've probably seen this during Summit, right? The AI Assistant in Experience Platform. This really allows you to perform a lot of tasks at a very fast pace, right? You can quickly access Adobe knowledge concepts, you can review information about your data, you can do a ton of stuff. I'm sure you've seen this in the keynote, but it's going to be a game changer. Absolutely. Next, we have predictive lead and account scoring. This is essentially the B2B version of what I went through with Customer AI, right? It's essentially propensity-to-buy scores for leads and accounts, based on demographic, firmographic, and behavioral traits. And for B2B especially, this can be extremely valuable. And so yeah, it's really good. I've lost my train of thought. I'm sorry. It's almost 2 o'clock on Thursday.
But, yeah, a lot of the same tech behind Customer AI is in this. This is just more of a B2B version.
Next, we have Send Time Optimization and Predictive Engagement Scoring. These are in Adobe Journey Optimizer (AJO) and Adobe Campaign. Essentially, they use advanced AI and machine learning to let companies optimize the design and delivery of customer journeys, right? And you can really predict individuals' engagement preferences as well. Predictive engagement scoring is available in Adobe Campaign, and AJO has send time optimization. So there are some really cool AI features, especially in AJO, and AJO is coming out with a lot more AI features in the near future as well. And next, Adobe Mix Modeler. I'm sure some of you have seen this, but it's AMM. Essentially, it's our unified measurement and planning solution, and it's really unifying and harmonizing media mix modeling and multi-touch attribution (MTA) solutions.
It really-- It's really a game changer. All of these are, obviously. But this really helps you understand how your channels are performing, right? And on top of that, it can show you your insights, but also scenario planners as well, right? So it lets you build and compare the impact of budgets, so you feel confident with your marketing allocation. You can take in the spend you have in external platforms, really understand the ROI, and then predict with the scenario planner as well. So there's a ton with this. It's really, really cool. And then next we have the AI Assistant in Customer Journey Analytics, or CJA. Very similar to what you saw with the assistant in AEP earlier. The same thing is coming out for CJA as well, where you can essentially ask a lot of questions in CJA, just like you would in AEP, and get similar results, right? So that whole assistant methodology and tech, powered by Adobe Sensei, is going to be available in multiple applications moving forward, so keep an eye out for that as well. All right. Well, I think that wraps it up.
If you have any more questions, you can feel free to come up and talk to us, but thanks, everyone, for joining. Thank you. [Music]