Under The Hood of Adobe’s Gen AI Innovations

[Music] [Meera Srinivasan] Wonderful Wednesday. Welcome, everybody. It's fantastic to be here on day two of Summit 2025. Always a pleasure reconnecting with our customers, partners, and analysts. So thank you for being here so bright and early. Hope you all had your first cup of coffee. I had mine. And for those joining us online, we are thrilled to have you join us virtually. So how many of you have seen all the big announcements, and what were your reactions to day one? Awesome. Did you hear about the agents and the Agent Orchestrator? Did you soak it all in? Because this is a session where we are going to go under the hood of it. So let's dig in. A lot has happened since we launched Adobe Experience Platform AI Assistant at Summit 2024. Built right into your application workflows, the AI Assistant helps practitioners understand product concepts, get instant data insights to questions like "How do I double my audience sizes?", or even self-serve troubleshoot issues like "Why are my journeys not triggering?" In just under a year, we have hundreds of customers and thousands of users embracing AI Assistant, transforming how work is done inside of AEP and apps. Before AI Assistant, answering data-related questions such as "Show me my duplicate segments" or "What are my best-performing experience variants, and most importantly, why?" used to take multiple steps: navigating dashboards and even waiting for an analyst. But with AI Assistant, you ask natural-language questions and get data insights instantaneously.

And we are happy to hear that customers who have tried it mention that what used to take hours and perhaps days waiting for an intermediary is now a matter of mere seconds. These data insight responses are fact-based and precision-oriented. They're grounded in customer data, and they are served with full transparency, explainability, and data provenance. Now we are going into the next era of GenAI innovation inside of Adobe Experience Platform: the Agent Orchestrator, which is going to be the focus of this session.

So I'm Meera Srinivasan, Product Leader for Adobe Experience Platform, and I'm delighted to be joined by Horia Galatanu, who leads our GenAI/AI initiatives. Together, we are going to break down what Adobe really means by agents, because let's be honest, there are multiple definitions floating around. We are going to get under the hood of the Agent Orchestrator, peel back the layers, and most importantly, show what it means for you and your business through real-world examples. So let's jump in.

Now why agents? On the day one main stage, Anil Chakravarthy highlighted a pivotal shift. Customers in the era of conversational AI now demand and expect two-way, real-time, hyper-personalized interactions over which they are in complete control. So to meet these customer expectations, we must usher in the era of customer experience orchestration, where content, data, and journeys seamlessly work together to deliver dynamic, intelligent experiences.

But what should customers do in order to deliver customer experience orchestration at scale? What does it entail? It means orchestrating billions, if not trillions, of personalized experiences while respecting customers' privacy and preferences amid tight budgets. And it is these tight budgets that are stretching teams thin. Content teams are not able to produce content fast enough to meet the growing demand. Website teams have to manage thousands of pages, but bandwidth constraints mean many pages are outdated or underutilized. And marketing teams aim for hyper-personalization, but scale constraints mean they end up sending broad, generic campaigns, missing opportunities for deeper customer engagement. That is why, to help your customer experience orchestration scale, we have introduced 10 intelligent, purpose-driven digital agents across the entire customer experience lifecycle: from planning to data management, to audience optimization, to journey optimization, to experience optimization, to experimentation, to performance analysis, and much more. And this is just the beginning. These agents make work smarter, faster, and more effective. They are extensively pre-trained at Adobe, they're quality-hardened by extensive betas, and they're turnkey. Once launched, these agents run seamlessly within your applications.

Now let us break down Adobe Agents. I would like to highlight four key characteristics, and all of the agents pretty much follow these four. Number one, paramount to us is human-AI collaboration. The agents collaborate and co-ideate with the practitioner inside of these application workflows, just like a teammate. It is not human versus AI; it is AI plus humans. Take the example of the Data Insights Agent. It takes natural-language analytical queries and transforms them into powerful visualizations. Not only that, at every step it allows the practitioner to steer AI-driven decisions, readjusting the visualizations and refining the analysis. And Horia is going to dive deeper into how the Agent Orchestrator interacts with the human in great detail through his demos. Number two, always-on intelligence. These agents are constantly running in the background, proactively surfacing optimization opportunities and flagging risks and issues before they impact performance. Managing thousands of audience segments is no mean feat, so teams spend hours sifting through audiences, identifying overlaps and duplicate segments, and monitoring audience health. This is where the Audience Agent really shines. It proactively monitors audience health, as an example; that's one of its agentic skills. And it shows notifications right inside your AI Assistant inbox, which is a new feature. Not only does it show issues, but you can drill down into an issue: it shows detailed root cause analysis, guides you on next best actions, and even completes those, so that you can increase your audience engagement and eliminate audience fatigue. You can go to the GenAI Pavilion to see this in action.
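To make the always-on idea concrete, here is a minimal Python sketch of a background audience-health check, under stated assumptions: the `Audience` shape, the 25% change threshold, and printing in place of posting to the AI Assistant inbox are all illustrative, not Adobe APIs.

```python
from dataclasses import dataclass

@dataclass
class Audience:
    name: str
    size: int            # size at the current check
    previous_size: int   # size at the last check

def health_alerts(audiences, change_threshold=0.25):
    """Flag audiences whose size moved more than the threshold."""
    alerts = []
    for a in audiences:
        if a.previous_size == 0:
            continue  # nothing to compare against yet
        change = (a.size - a.previous_size) / a.previous_size
        if abs(change) >= change_threshold:
            alerts.append(f"{a.name}: size changed {change:+.0%}")
    return alerts

# One healthy audience, one that collapsed since the last check.
audiences = [Audience("Loyal Buyers", 98_000, 100_000),
             Audience("Cart Abandoners", 12_000, 40_000)]
for alert in health_alerts(audiences):
    print(alert)  # a real agent would post this to the inbox
```

A scheduler would run a check like this daily; the interesting engineering is in the root-cause analysis and suggested next actions, which this sketch omits.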

Now agents not only assist, but they also act. They perform work on behalf of the practitioner. The Site Optimization Agent is an example I would like to illustrate. It's a game-changer for marketing managers and product owners. It not only identifies but also fixes page performance issues, SEO efficiency issues, security risks, accessibility gaps, and much more. And it saves a ton of time for website teams. Lastly, AEP Agents possess deep and broad agentic skills, which helps them orchestrate cross-application workflows, breaking down application silos. Take the case of the Content Production Agent. It reads a marketing brief like a pro, gleaning the who, what, and where to generate content that is relevant and tailored for a specific audience segment. It also guides and optimizes experience layouts and formats to suit specific channels. It knows your brand inside out: it learns from all the brand guidelines that you supply and is able to create a comprehensive brand object so that it can check for consistency. Whether the content comes from studios or agencies or is generated by AI, it reviews it and provides real-time guidance for fixing the content. And then, when the content is ready, it sends it to the Adobe Workfront app for approval.
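As a rough illustration of the brand-object idea, here is a toy Python consistency check. The `BRAND_RULES` structure, the rule names, and the `review_content` helper are hypothetical; a real brand object distilled from full guidelines would be far richer.

```python
# A toy "brand object": machine-checkable rules distilled from guidelines.
BRAND_RULES = {
    "banned_phrases": ["click here", "best ever"],
    "required_disclaimer": "Terms apply.",
    "max_headline_length": 60,
}

def review_content(headline: str, body: str) -> list[str]:
    """Return real-time guidance for bringing content on-brand."""
    issues = []
    if len(headline) > BRAND_RULES["max_headline_length"]:
        issues.append("Headline exceeds 60 characters.")
    for phrase in BRAND_RULES["banned_phrases"]:
        if phrase in body.lower():
            issues.append(f"Off-brand phrase found: '{phrase}'.")
    if BRAND_RULES["required_disclaimer"] not in body:
        issues.append("Missing required disclaimer.")
    return issues

print(review_content("Meet the new Looma 4K Smart TV",
                     "The best ever picture. Click here to buy."))
```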

So let's move on. We quickly saw how individual agents supercharge the practitioner. But just like we work in teams, the digital agents also work in teams. Take the case of the Data Insights Agent. It surfaces journey bottlenecks and drop-offs contextually within the journey canvas and then collaborates with the Journey Agent, which in turn collaborates with the practitioner to adjust, simulate, and fix the journeys. You can see this in action in this afternoon's Unified Customer Experience session. So Adobe Agents are basically a talent multiplier. They free your teams to focus on what matters most: creativity, differentiation, and growing the business.

Now, agents aren't just for expanding experience maker capacity. They also reshape how consumers, accounts, and other teams interact with the brand. Yesterday we announced Brand Concierge, a new agentic application. It's a conversational interface that is available on a brand's web, mobile, and messaging channels, and it's two-way, real-time, and highly personalized. With Brand Concierge, the robotic interactions and the repetitive, irrelevant messages associated with traditional chatbots are a thing of the past. Why? Because conversations don't happen in silos. They require understanding the full customer experience lifecycle. And for that, the Brand Concierge application has two agents. The Product Advisor Agent, which is B2C, helps the consumer identify and discover the right products to meet their needs. And the Account Qualification Agent, which is B2B, nurtures the account, the buying groups, and the leads, and accelerates the sales cycle. These agents know, at any given point in time, who the customer or the account is, the entire set of experiences they've had with the brand, and where they are in their journeys. How is that even possible? Because they are integrated with real-time unified customer profiles. So if a customer comes in asking to check on an order, the Brand Concierge application immediately understands that the order is delayed. Instead of bombarding the customer with promotional messages, it either routes them to a support agent, offers a discount, or, better still, looks for a faster shipping option and recommends that. Insights are gleaned from all the data and signals contained in that rich conversation: we perform longitudinal linguistic analysis to understand intent and sentiment, and this data enriches the real-time customer profile in real time. The result is a dynamic, intelligent, fully personalized journey, and that's a big game-changer.

That brings us to the third pillar of the agentic strategy. This has been a conversation I'm hearing over the last two days: we know that you're also on your own agentic journey. Either you're looking to build agents or you're already building agents. So we are exploring multi-agent collaboration: how our agents and third-party agents, whether built on top of Agent Orchestrator or in any third-party agentic framework, can interact with each other. We see this as a very collaborative space, and we are seeking your partnership so we can jointly identify the use cases and further this area.
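Here is a minimal Python sketch of the kind of profile-aware routing decision described above. The profile fields and action names are invented for illustration; the real application reasons over the full unified profile rather than a dictionary.

```python
def concierge_next_action(profile: dict) -> str:
    """Pick a response using the real-time unified customer profile."""
    order = profile.get("latest_order", {})
    if order.get("status") == "delayed":
        # Don't push promotions at a frustrated customer.
        if profile.get("sentiment") == "negative":
            return "route_to_support_agent"
        if order.get("faster_shipping_available"):
            return "offer_faster_shipping"
        return "offer_discount"
    return "continue_product_conversation"

profile = {"latest_order": {"status": "delayed",
                            "faster_shipping_available": True},
           "sentiment": "neutral"}
print(concierge_next_action(profile))  # -> offer_faster_shipping
```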

So we would like to do a quick poll. Where are you in your agentic journey? Are you just getting started? Are you in the experimental phase? Are you scaling up? Are you fully operational, one of the innovators there? Or have you not started yet? Don't worry, maybe you're just exploring. So where are you? We would love to know.

[Namita] We have around 50 responses from the room today. Just a second, let's see the responses. Okay. So most of you are still getting started; that's 45% of you. Then we have 19% who are experimenting, and 13% of you are scaling up, actively building and optimizing agentic workflows. 4% of you are fully operational: you have agentic AI running in production and are seeing the impact. And close to 20% of you have not started yet; you're not exploring yet but want to learn more. [Meera Srinivasan] Okay. Thank you, Namita. Looks like we have a great mix in the room. Whether you're just getting started or you're already building, we hope that today's session is useful in your journey. So let's move on; things are going to get interesting. Now let's peel back the layers of the Adobe Experience Platform Agent Orchestrator. I would like to draw your attention to the purple box, the reasoning engine. This is the brain of the Agent Orchestrator. It's a new component. Upon receiving a request, it detects the intent, the goals, and the constraints, and comes up with one or more carefully crafted plans, selecting the best plan at that point in time.

So if something goes wrong, it reassesses and readjusts using a combination of reflection and backtracking, which is how humans approach complex problems. I don't know how many of you were in the GenAI Strategy Keynote session presented by Shivakumar Vaithyanathan, our VP of Engineering, and Akash Maharaj. They delightfully went over this whole backtracking-and-reflection approach using an intuitive example. So don't worry if you were not able to attend that session; Horia Galatanu is going to take us through some very interesting use cases showcasing the reasoning engine in action. There's one aspect of the reasoning engine that I would like to call out: the human-AI collaboration. The reasoning engine presents the plan for the practitioner to review and make any real-time adjustments. The reasoning engine then executes the steps in the plan, and for that, it uses one or more agents. And the Agent Orchestrator is deeply context-aware and grounded, thanks to the knowledge graph. It's anchored in Adobe's rich marketing knowledge and in customers' data, documents, policies, and procedures, so that the responses are accurate, aligned with business rules, and trustworthy.
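Pulling the planning thread together, here is a minimal, purely illustrative Python sketch of plan selection with reflection and backtracking. The plan contents, the `execute` stand-in, and the failure odds are all invented; the point is only the control flow: try the preferred plan, reflect when a step fails, and backtrack to the next candidate.

```python
import random

random.seed(7)  # deterministic demo

def generate_plans(goal: str) -> list[list[str]]:
    """Stand-in for the reasoning engine proposing ranked candidate plans."""
    return [
        ["parse_goal", "query_warehouse", "summarize"],
        ["parse_goal", "check_cache", "query_warehouse", "summarize"],
    ]

def execute(step: str) -> bool:
    """Stand-in for a step succeeding or failing."""
    return random.random() > 0.2

def run_with_backtracking(goal: str) -> bool:
    for plan in generate_plans(goal):          # best-ranked plan first
        completed = []
        for step in plan:
            if not execute(step):
                print(f"Reflection: '{step}' failed after {completed}; "
                      "backtracking to the next plan.")
                break
            completed.append(step)
        else:
            print(f"Plan succeeded: {completed}")
            return True
    return False

run_with_backtracking("find unused engagement audiences")
```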

And then it uses tools to invoke actions within Adobe applications or third-party applications. All those results and outcomes come back into the Adobe Experience Platform, so the Agent Orchestrator is continuously learning and optimizing. Lastly, when it comes to AI, data security and privacy are non-negotiable for us. Customer data is isolated and protected. A business's data is only used for training its own ML models, and nothing is shared.

So with that, to really get into the guts of the Agent Orchestrator, I'm going to invite my good friend, Horia, to take us through this ride: how it works and what makes it powerful. Horia launched the AI Assistant at last Summit, and he is now launching the Agent Orchestrator this Summit and leading its evolution. How does the Agent Orchestrator handle complex tasks? How does it optimize decisions and perform AI-driven actions? Take us through that journey. [Horia Galatanu] Thank you, Meera, and welcome, everybody. It's such a thrill to be here and show you the latest innovations that we've been working on. As Meera said, I hope you got a chance to attend Shiv's and Akash's presentation yesterday, where we introduced Agent Orchestrator and then went a bit deeper into agentic reasoning and customer experience language models. My goal for today is to use three examples to go a bit deeper, unpack each of these layers, and really go under the hood. In order to do that, and to make it a bit more fun, I'm going to ask Meera to play the role of a curious and data-driven marketer. And, Meera, I loved your own branding with the Adobe colors, so thank you for that. Yeah, channeling Adobe. While I will play the role of an agent-- Sorry, of an expert mechanic.

I love your suit and attire, Horia. Really getting into the character. I'm very committed to this under the hood bit. So as you can see, this took me quite a long time to find.

I have it in my shop is what I meant to say.

All right. Now I'm ready. Before I hand it off to Meera to introduce the scenarios, I just want to say that the examples I'm going to show you are focused on the Audience Agent, but the lessons are generally applicable. So with that, Meera, how can I help you? All right. I'm excited to be a curious, data-driven marketer for an online global retail company. Looma, of course. I have a bold new idea for an engagement campaign. Something fresh, something exciting. But I don't want to disrupt any of my ongoing campaigns. I'm looking for an untargeted audience, fresh, waiting for its moment. So, Horia, can the Audience Agent help me find an audience that's not used in any active campaigns? I think it sure can, Meera, but don't take our word for it. Let's just ask the Audience Agent. So here I have my prompt, "What engagement audiences are not used?", and I'm just going to ask it. And I see that it came back with an answer very quickly. Let's zoom in a bit and see what this answer entails. The first part of the answer, as you can see, is a nicely formatted table. I see a list of audiences there; they seem to be engagement audiences. I can also see their size. They're hyperlinked if I want to go dig in a bit more, and I can expand the table and also download the full list as a CSV file. But the really interesting part, for me, is what's under that table, where the Audience Agent has provided a list of verification steps. The first one tells me what it tried to do: starting with my prompt, it's telling me, "Hey, I'm trying to find segments or audiences that are labeled engagement and are not used in a journey or destination or another audience." Then it shows me, step by step, how it actually did that. And finally, because in this case it had to use a SQL query, there's a link I can click to see the annotated SQL query if I'm technical and want to understand even more. So this is a huge part of what it means to answer this question. It's not just the answer but the verification steps as well.
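To picture what such an annotated query might look like, here is a hypothetical example wrapped in Python. The table and column names (`audiences`, `journey_audiences`, and so on) are invented for illustration, not Adobe's actual schema.

```python
# Hypothetical annotated SQL the agent might surface for verification.
ANNOTATED_SQL = """
-- Step 1: audiences carrying the semantic label 'engagement'
SELECT a.audience_id, a.name, a.size
FROM audiences a
WHERE a.semantic_label = 'engagement'
  -- Step 2: exclude audiences referenced by a journey ...
  AND a.audience_id NOT IN (SELECT audience_id FROM journey_audiences)
  -- ... by a destination ...
  AND a.audience_id NOT IN (SELECT audience_id FROM destination_audiences)
  -- ... or by another audience's definition
  AND a.audience_id NOT IN (SELECT referenced_id FROM audience_references);
"""
print(ANNOTATED_SQL)
```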

So let's look at one of these in particular.

Meera, it seems like you created this audience a few months ago, and it's not used in any destination or journey. Would this be a good audience for you to use? Now I remember. It's perfect. I created this audience for a one-time promotional campaign last Thanksgiving. It had very good conversion metrics, and I totally forgot about it. This seems like a perfect audience for me. But this seems like a very simple insight question. So what's the complexity in answering it? That's a great question, Meera. And that's a question I want to ask you, our audience, as well. What do you think was the challenge in answering that question? You have a few options to choose from there. All right. Namita, do you want to walk us through the results? Hey. Things are still moving very fast, but I think most of you are landing in either the first or the second bucket, with close to 33%.

That first bucket: terms are confusing or unclear in the question. 37% of you think that you don't know where to find the information to be able to answer the question, 23% are not sure how to get started, and 9% of you think that it just takes too much time to answer the question. Awesome. Those are really interesting results. And the truth is that for a marketer, those are all valid concerns, right? They're all valid, including, "Hey, I know how to do it, but it just takes too much time." For the Audience Agent, though, the first big hurdle it has to cross is the fact that the question was ambiguous, as some of you have noted. It was a natural question for a human to ask, a very simple one, but there's some ambiguity in it. What do you mean by engagement? What do you mean by not used? In order for the agent to actually answer that question, it first needs to specify exactly what those terms mean. So the first thing it's going to do is say: first of all, I've identified audiences as an entity in that question, and I know that audiences are linked to journeys and destinations, and they're created using datasets and schemas. That's the first piece of information we need to identify. The second piece is that we need to translate "not used" into "not used in a journey, destination, or another audience." And then, engagement is a semantic label that we have applied to a series of specific audiences. That's something Adobe can do, and something we're allowing our customers to do, because we know each of you might have different terms that you're using in your enterprises.
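Here is a small Python sketch of that disambiguation step, under stated assumptions: the `SEMANTIC_LABELS` mapping and the output spec format are invented, and a real system would resolve terms against the knowledge graph rather than a hard-coded dictionary.

```python
# Illustrative mapping from everyday marketer terms to concrete entities.
SEMANTIC_LABELS = {"engagement": ["Re-Engage Lapsed", "Holiday Engagers"]}
USAGE_RELATIONS = ["journeys", "destinations", "audience_compositions"]

def specify_question(question: str) -> dict:
    """Rewrite an ambiguous question into an explicit, checkable spec."""
    q = question.lower()
    spec = {"entity": "audience", "filters": [], "exclude_if_used_in": []}
    for label in SEMANTIC_LABELS:
        if label in q:
            spec["filters"].append(("semantic_label", label))
    if "not used" in q:
        spec["exclude_if_used_in"] = USAGE_RELATIONS
    return spec

print(specify_question("What engagement audiences are not used?"))
```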

So that's step number one, starting to specify this question a bit more. But I want to pause for a second and say: this is the beauty of an open-ended conversation, right? If we were to restrict ourselves and remove the ambiguity by just answering a small set of canned questions, then you're not going to find it valuable, because you wouldn't feel like you're having a conversation with it. What needs to happen is that you need to be able to ask the question in your own terms, using the words you use every day. And the system needs to be intelligent enough to understand what you mean, or smart enough to ask clarifying questions.

So now that we've specified our question a bit more comes the next step, which is: how do I actually get the answer that you're looking for? In this case, as I mentioned before, it requires a SQL query that it needs to run. And for this, we have a specific customer experience language model that we're using. It's optimized specifically for this task, and it provides us with the query to run; we then execute the query locally in your own sandbox, using all the constraints around your data. And then we have a reflection step: if the answer doesn't seem correct, or there's something off with it, we can go back and try a different one.
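A minimal sketch of that generate-execute-reflect loop follows, with invented stand-ins for the language model (`generate_sql`), the sandboxed execution (`run_in_sandbox`), and the plausibility check; none of these are real Adobe interfaces.

```python
def generate_sql(question: str, feedback: str | None = None) -> str:
    """Stand-in for the task-specific customer experience language model."""
    base = "SELECT name, size FROM audiences WHERE semantic_label='engagement'"
    return base + (" AND size > 0" if feedback else "")

def run_in_sandbox(sql: str) -> list[dict]:
    """Stand-in for executing in the customer's isolated sandbox."""
    # Pretend the first, unrefined query comes back empty.
    return [{"name": "Holiday Engagers", "size": 48210}] if "size > 0" in sql else []

def looks_plausible(rows: list[dict]) -> bool:
    return len(rows) > 0

def answer(question: str, max_attempts: int = 2) -> list[dict]:
    feedback = None
    rows: list[dict] = []
    for _ in range(max_attempts):
        sql = generate_sql(question, feedback)
        rows = run_in_sandbox(sql)
        if looks_plausible(rows):
            break
        feedback = "empty result; reconsider filters"  # reflection step
    return rows  # surfaced together with the verification steps

print(answer("What engagement audiences are not used?"))
```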

Finally, one thing that I haven't shown you previously, but that is part of everything we do, is the trust and governance layer. Is the user even allowed to ask that question? And if they're not allowed to, say, look at audiences or look at specific attributes, do we provide a good enough answer back informing them why the query was not successful? So trust and governance is core to everything that we do. Speaking of trust, how can I verify and validate that the Audience Agent is giving me accurate and trustworthy responses? We are asking a lot of open-ended questions here. So how do we know if the system is actually working? How does it detect that it has made a mistake, and how does it correct itself and learn from it? That's a great question, Meera. And that's core to our fourth pillar here, which is verifiability. This system is going to make mistakes. We all know that; it's just a fact of life, and we have to live with it. But what you saw in the demo is that we are providing multiple layers of verifiability, so you can check. If the answer is wrong, hopefully those layers of verifiability are going to allow you to figure out what was going on. And that's not all we have. Behind the scenes, we have a lot of out-of-scope detection models, so we don't answer questions that we're not supposed to answer. And we have ways for users to tell us, "Hey, you've made a mistake here. Correct it," so that we can recover and move on from there. That's all part of the verifiability layer.

And actually, allow me just a few more minutes to go deeper into this, because this whole thing is not going to work if you don't trust the answers, right? If you don't trust that this can help you, it's not going to work. So we're investing a lot of time in our continuous quality improvement process. Allow me to unpack the three steps that we typically go through. The first one is quality measurement. This is where we employ a variety of either human-driven or ML-driven methods to classify all the questions and answers and figure out where the errors are. In some cases, like for the AI tutor, if we have nuanced answers, we need to pair experts with the questions and answers and make sure that all the nuances are properly captured. Step two: once we have this annotation done, we internally classify those errors into multiple buckets. The biggest one for us is what we call a Severity Zero error: that's when the answer looks right but is actually wrong. And that's the one that keeps us up at night, because that's the fastest way to lose trust in the system. After we've classified all of that, we identify patterns and figure out where exactly we can improve the system. That takes us to step number three, which is quality improvement. And there's a lot we can do here. We can retrain our models. We can enrich our knowledge base with extra information, like new questions for the question bank. We can tune the reasoning engine. Or, in some cases, we simply have to build a new feature in order to provide the answers that users are looking for.
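As a toy illustration of the error-bucketing idea, here is a Python sketch. The taxonomy below is a simplification assumed from the talk; the internal buckets are surely more granular.

```python
from enum import Enum

class ErrorClass(Enum):
    SEV0_PLAUSIBLE_BUT_WRONG = "answer looks right but is wrong"
    VISIBLY_WRONG = "answer is obviously wrong"
    OUT_OF_SCOPE = "question should not have been answered"
    CORRECT = "no error"

def triage(answer_correct: bool, looks_correct: bool, in_scope: bool) -> ErrorClass:
    """Bucket an annotated question/answer pair for quality measurement."""
    if not in_scope:
        return ErrorClass.OUT_OF_SCOPE
    if answer_correct:
        return ErrorClass.CORRECT
    return (ErrorClass.SEV0_PLAUSIBLE_BUT_WRONG if looks_correct
            else ErrorClass.VISIBLY_WRONG)

# Severity Zero: plausible but wrong, the fastest way to lose trust.
print(triage(answer_correct=False, looks_correct=True, in_scope=True))
```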
So that takes us to the end of demo one. I want to summarize a few key points. First of all, we believe there's a lot of value in the open-ended conversational interface. But that comes with the problems we've seen, so the architecture needs to account for them, because the answers need to be correct in order for you to trust them. And most importantly, we have to engage in this continuous improvement process, because otherwise you're not going to use it. Meera, what did you think of demo one? It was super, super interesting. It taught me how to engage with the agent so that it can help me modify my prompts to remove any ambiguity. Second, I really liked how it checked the permissions that were set in Adobe Experience Platform, so I don't have to reset everything. That's the beauty of the Agent Orchestrator being part of the Adobe Experience Platform. And number three, I really liked how it detected errors and how it was able to correct itself and learn over a period of time. I like all the tools in the quality toolbox. Awesome. Now that we're done with that, what else can I help you with today, Meera? Okay. So I have a second challenge. My Monday mornings are spent tracking thousands of audiences and monitoring their health, because when any audience changes, my spend and reach get impacted. And manual monitoring is tedious, time-consuming, and potentially error-prone. So can I let the Audience Agent proactively monitor my audience health for me and also suggest actions, and perhaps take them as well? That's a really interesting question, Meera. And you get to see a bit more how the reasoning engine comes into play and how some of the interaction patterns change as well. If you'll allow me the mechanic analogy: if the first demo was a bit like changing the brake pads on the car, this is a more complex task, like changing the fuel pump. All right. So let's dig in. I'm going to ask it a question: can you monitor audience health for me? And I see that the system is thinking now. It's thinking, and it's coming back with a plan. So let's zoom in a bit on this plan. This is really interesting, because I gave it a fairly ambiguous task, and it's coming back with a plan. This plan is created by the reasoning engine, and it starts to specify exactly how it's going to accomplish the task I gave it. And this is a big deal, because it's a lot of cognitive load to think of this plan to begin with, and we know you have a lot of complex tasks that you want to accomplish. So now I have a plan, and I can see that it's telling me exactly what it's going to monitor, what counts as a significant audience change, how often it's going to run, and where it's going to send me the notification. All of that is fully specified in the plan, and it's a big step up compared to the question-and-answer system that we saw in demo one. But I don't like this plan yet. I think it's a great start, but I want to ask for some changes. I know that I have activated audiences and non-activated audiences. I care a lot more about the activated ones, so I'm going to reduce the threshold for those. I also want to get rid of the small audiences that are probably going to introduce a lot of noise in the system. So I'm going to make this suggestion to the Audience Agent. I see that it's thinking again, and it's coming up with an updated plan. Now I can see that step two in the plan is going to filter out the small audiences. And I see that there's actually a fork there: it's going to use different thresholds depending on whether the audience is activated or not. So here's me working with the Audience Agent to draft a plan that does exactly what I wanted. But how can I make sure that this plan is actually correct? I'm going to simply ask it, in line:
Can you run this for me? And the system is coming back with an answer and it's showing me two audiences that it has found. One has increased, one has decreased. I can see one is activated, one not.
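Here is an illustrative Python rendering of what that refined plan might look like as data, with the activation-dependent thresholds and the small-audience filter from the demo. The structure and field names are assumptions for the sketch, not an Adobe plan format.

```python
plan = {
    "task": "monitor_audience_health",
    "schedule": "daily",
    "steps": [
        {"fetch": "all_audiences"},
        {"filter": "size >= 1000"},   # drop small, noisy audiences
        {"branch": {                  # the fork seen in the demo
            "activated":     {"alert_if_change_over": 0.10},  # tighter
            "non_activated": {"alert_if_change_over": 0.30},
        }},
        {"notify": "ai_assistant_inbox"},
    ],
}

def should_alert(audience: dict, plan: dict) -> bool:
    if audience["size"] < 1000:
        return False
    branch = "activated" if audience["activated"] else "non_activated"
    threshold = plan["steps"][2]["branch"][branch]["alert_if_change_over"]
    return abs(audience["size_change"]) > threshold

print(should_alert({"size": 50_000, "activated": True, "size_change": -0.12},
                   plan))  # -> True
```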

And now this is a critical moment, because now I actually have to turn to Meera and ask her. This is where the marketer needs to come in and verify: "Hey, is this running according to how I think this plan should work?" So, Meera, do those audiences look correct to you? First of all, I love the human-agent collaboration. Thank you, Audience Agent. Yes, indeed, they look super accurate to me, because yesterday I spent hours going over and monitoring my audience health manually, and these two were the suspects. So I really like how the Audience Agent instantly spotted these within minutes. This is a time-saver for me. Perfect. So I'm going to confirm that plan. And the next time Meera comes into the application she's using, she's going to see a notification that, "Hey, something has happened with your audience," and now she can engage with it and see what has happened. So you're taking one task that you had to do, and now the Audience Agent is doing it for you and engaging with you when something happens. And we have a continuation of this demo: if you want to see how Agent Composer comes in and how we can help you launch this kind of task across your organization, please come visit us at the GenAI booth. We have a lot more there for you.

But going back to our demo, let's unpack a bit what just happened, because there's been a conceptual leap between those two. Meera is a marketer; let's say she's focused on audiences. The first demo that you saw was what we call the first technological leap. It allowed for open-ended conversations, and it allowed Meera to ask questions that really help her in her day-to-day job.

But demo two was the second technological leap. Now Meera was able to give it a task. The agent came up with a plan, created by the reasoning engine, presented it back, and executed it for Meera. So now she can actually offload much more work to the Audience Agent. Wow. This is super interesting. So are you telling me that I don't have to do question-and-answer conversation anymore? I can share my goals and intent, and then the agent is going to think, come back to me with a plan, and take my input? That, I think, is going to be phenomenal for me. That's exactly right, Meera. And it doesn't mean the first part is not important; sometimes you still have questions that you need to ask. But we think there's a lot of value in stating a task and having the agent come up with the plan for it. And the interaction patterns, as you've seen, have also changed. You went from a one-question, one-answer type of interaction to working with the Audience Agent: you're defining the plan, and once that plan is locked, you're letting it run on a daily basis and interacting with the notifications.

So let's unpack the demo a bit. The structure of the demo was: I stated my goal, I got an initial plan, I provided feedback, I got a revised plan, then I tested it, and then I confirmed it, and now the agent is running in the background. Every day it's going to run this task and give me a notification if something has happened. This is human-plus-AI decision-making. This is humans plus agents working together to accomplish the task, and that's the core of what our agents are all about. Let's go one step further.

So the first step in the process, as you've seen, is creating the plan. This plan was created by the reasoning engine that's infused with all the Adobe marketing knowledge that we have. We know you have complex tasks, and the fact that we can help you get those tasks done by providing you with a starting plan is a fantastic value add. And it's not just our plan: you have to work with the agent and confirm that this is exactly what you want. We're giving you a starting point, and then you keep refining it. Then comes the execution of the plan. This is where things get a bit more interesting, and some of the diagram that Meera showed earlier comes into play. Because now it's using various things: agent operators to fetch information (how do I get those audiences?), tools like notifications to inform me, and the knowledge base it has to tap into to get the information that you want. For this to happen, different customer experience language models come into play. As you might have seen yesterday, the reasoning plan comes from a fine-tuned LLM coupled with the Adobe marketing knowledge, while some of the smaller tasks can be done by task-specific language models. So we use the best tool available, and behind the scenes it's a collection of language models that help us accomplish this goal. And while it was not visible in the demo, there are also a lot of reflection steps: as the plan gets executed, it's not a static plan. You go step by step, you reflect on what has happened before, and you can change it as needed.
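As a tiny illustration of that "collection of language models" point, here is a hypothetical task-based router in Python. The model names are placeholders, not actual Adobe model identifiers.

```python
# Illustrative routing: one fine-tuned LLM for planning, smaller
# task-specific models for narrow steps like SQL generation.
MODEL_REGISTRY = {
    "plan":        "reasoning-llm-marketing-finetune",  # hypothetical names
    "text_to_sql": "cxlm-sql-small",
    "summarize":   "cxlm-summary-small",
}

def route(task: str) -> str:
    """Fall back to the reasoning model for unknown tasks."""
    return MODEL_REGISTRY.get(task, MODEL_REGISTRY["plan"])

for task in ["plan", "text_to_sql", "summarize", "unknown_step"]:
    print(f"{task} -> {route(task)}")
```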

Third, the beauty of all of this is that you can change that plan dynamically without having to write one line of code. Maybe you'll get a notification tomorrow and want to change something else in the plan: you can tell the agent to do that, and the plan is going to change. So you're specifying exactly how you want that task to be done, and that's a huge value add. And finally, because the agent is now a bit more autonomous than what you saw in demo one, we have to talk about guardrails. There will be system- and customer-specific guardrails: limits on the compute time you're using, on the data you're allowed to use, on what you're allowed to modify, and on when the agent is supposed to get back to the user and ask for confirmation. All of this is what makes it possible.

So let's recap demo two. First, you've seen how the Audience Agent now helps Meera perform a task. That task is possible because of the creation of the dynamic plan and its execution. And because you can continuously refine those plans, the task gets executed exactly as you want it to.

So with that, let's go to demo three. Meera, we have time for one more demo. We've helped you a lot today. - What else can we help you with? - Yeah. So the Audience Agent has helped me by providing insights into where I could launch my engagement campaign, and now it's proactively monitoring my audience health for me. This has saved me a lot of hours in my day. But I do have a final challenge for the Audience Agent. I have an upcoming campaign; in fact, it should run for the next 30 days, and it's about selling smart TVs. And I want the audience to be very precise in nature. Can the Audience Agent create an audience based on my business goals? I do want to mention that I have a tight budget: my reach is capped at 100,000 profiles, and my conversion goal has to be at least 2.2%. Is the Audience Agent ready to accept this challenge? That's a really interesting and complex question, Meera. I can't wait to see what the Audience Agent is going to come up with. And just to use my mechanic analogy one last time, this is akin to using my specialized tools to go deep into the power management system of the car. So let's dig in. I'm going to ask it: can you help me build a highly targeted audience of 100,000 profiles for selling smart TVs in the next 30 days? So now the agent is going to think, and this might be a longer thinking process, and it's coming back with an answer for me. Let's unpack this answer. First, I can see that it's telling me, "Hey, because you've mentioned a highly specific audience, I suggest that we use a propensity model to build this audience instead of reusing existing ones." Then it's confirming some assumptions that I've specified in the prompt or that it generally knows. It's told me, "Hey, the audience should be 100,000 profiles. The goal that you have is selling smart TVs." And that's really important, because if it misinterprets the goal and builds a propensity model for something else, it's not going to work. So those assumption steps are really important. It's telling me that the conversion window is 30 days. And then it has a few more general assumptions that are not informed by the prompt but by how you typically work. In this case, it's saying, "Hey, I'm going to include all geographies."
But you can imagine that all of you have very specific rules, so the agent is going to be able to automatically infer those. And then, interestingly, it's coming up with a high-level plan to build a propensity model. It's giving me four steps: first, identify the signals needed for this; then train the model; then score all your profiles using this model; and finally, create the audience. Now, overall, this is a pretty complex and daunting task. But because the Audience Agent has broken it down into small steps for me, I feel pretty confident that I can move on. So I'm going to ask it to proceed.
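Here is a skeletal Python rendering of those four steps, purely as a sketch: every function body is a stand-in, and the confirmation pauses noted in the comments mirror the demo rather than any real API.

```python
def identify_signals(goal: str) -> list[str]:
    # Step 1: mine the available datasets for signals predictive of the goal.
    return ["product_interactions", "purchase_history", "offer_engagement"]

def train_model(signals: list[str]) -> dict:
    # Step 2: fit a propensity model on those signals (stand-in metrics).
    return {"signals": signals, "holdout_auc": 0.81}

def score_profiles(model: dict, profiles: list[str]) -> dict:
    # Step 3: score every profile with the trained model.
    return {p: round(0.3 + 0.2 * i, 2) for i, p in enumerate(profiles)}

def build_audience(scores: dict, cap: int) -> list[str]:
    # Step 4: keep the highest-propensity profiles up to the reach cap.
    return sorted(scores, key=scores.get, reverse=True)[:cap]

goal = "purchase a smart TV within 30 days"
signals = identify_signals(goal)          # demo pauses here for confirmation
model = train_model(signals)
scores = score_profiles(model, ["p1", "p2", "p3"])
print(build_audience(scores, cap=2))      # -> ['p3', 'p2']
```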

So the next step, as you know, is finding the datasets, and this is going to be a challenging one. I see that it's coming back with a few datasets and signals that it has identified. So let's dig a bit deeper here. The agent has done the heavy lifting: churning through all the data that I have and identifying, for my particular conversion goal, what signals it should use to train the ML model. This would have taken me days, if I knew how to do it at all. Let's be honest, this is pretty complex machinery. But it has identified product interaction data (things like visiting certain pages or searching for certain terms), purchase history, and engagement and intent insights (the fact that maybe I've engaged with a promotional offer). All of this is what the agent is coming back to me with. But this is, again, a moment where we need to pause and ask our marketer: "Hey, Meera, are these signals that you would use for training the model?" Yeah. So the first two datasets, the product interaction data and the purchase history, are something that my data team, my ML team, and I often use while we are generating campaigns. But the third one, the engagement and intent signals, is something new for me. And that sounds pretty interesting, because it is pointing to customers who have looked at offers for smart TVs in the past few months but never bought one. So this seems very much tuned to derive that precise audience. I'm getting really excited, and I cannot wait to see what the estimated conversion is going to be. And this is also a game-changer, because it is pointing me to datasets that we never knew existed. It's showing me these high-value datasets. So thanks, Audience Agent. All right. With Meera's confirmation, I'm going to move on. So now the system is actually proceeding to run through the rest of the steps...

And I can see that it came back with an audience for me. So behind the scenes, what happened was: it used those datasets and signals that we identified previously, found the right ML model, trained the model on that data, reserved part of that data to verify that the accuracy is high, scored all the profiles, and then came back with an audience. That's a lot of steps it had to do, but it was seamless for me. And now I have my final audience. Not only do I have my final audience, but I can see the model accuracy score. And there's a hyperlink there: I can see the area under the curve and a few other things if I know ML and want to fully understand what's going on. And it's showing me the influential signals, so I can quickly verify that my marketer intuition aligns with this. So, Meera, it came back with a 3.1% conversion rate. Is that something that you expected? It's even better than I expected. I was just aiming for 2.2% and thought that was good. This is excellent and beyond what I expected. I also like the fact that this used to take days or even weeks, depending on when I get in the queue with my data and ML teams. Now you're telling me the agent is going to look at datasets that are interesting for my particular business goal, build an ML model on the fly, and train it. And I can interact with the agent, and if I tell it I don't like the score that I'm seeing, it's going to go back and redo that whole process. So I really like co-ideating with the Audience Agent in this manner. It's really my thinking partner now. Awesome, Meera. And we do believe this is a big deal. This actually aligns with what you might have seen Akash demo yesterday, and we can't wait to put this in your hands, because we believe that this is where the magic starts to happen: when you have predictive AI models that are powered by your data but now become really easy to use with the help of GenAI. The mix of the two coming together is really what can transform your marketing programs. Let's take a few more peeks behind the scenes. Again, what you saw in the demo was a series of steps: I stated the goal, I got an initial audience creation plan, I confirmed it, then we found the datasets and the signals, I confirmed those as well, and then the model went on to be trained, and the audience was created. This is just one possibility. There could be multiple steps in this process where maybe I would have made a different choice or gone back. This is not a static plan. This is, again, as I said before, agents and humans working together to come up with a solution to the task at hand. So for a final unpack: what you saw was fairly similar to the first demo, but now the plan is a lot more complex, and it had to consider a lot more variables behind the scenes. First of all, it had to know to use an ML model instead of using existing audiences. It then had to decide what datasets to use behind the scenes. Am I going to use sampling or not? What ML models am I going to use? So there's a lot more complexity in this plan than what we had before. But as before, the user still has the ability to interact with it and correct it if needed. Second, the execution was a bit more complex. If before it meant fetching a list of audiences and looking for significant changes, now it's a lot more about finding the datasets, training the model, verifying the accuracy, all of that.
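For readers who want to see the mechanics, here is a self-contained Python sketch of that train, verify-on-holdout, score, and select pipeline on synthetic data. scikit-learn's `LogisticRegression` is an assumed stand-in for whatever model the agent actually picks, and the signals, split, and audience cap are all illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 3))  # 3 signals per profile (synthetic)
weights = np.array([1.5, 0.8, 0.2])
y = (X @ weights + rng.normal(size=5000)) > 1.0  # converted or not

# Reserve part of the data to verify accuracy, as the agent described.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("holdout AUC:", round(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]), 3))

# Score all profiles and keep the highest propensities (cap shrunk here).
propensity = model.predict_proba(X)[:, 1]
audience = np.argsort(propensity)[::-1][:1000]

# Influential signals, approximated by coefficients, for marketer review.
print("influential signals:", model.coef_.round(2))
```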
And the chances of the plan backtracking and going back to previous steps are much higher now. But still, the user gets to validate at critical points, so again, it's human plus AI making these critical decisions. And finally, because this is now a complex task, the agent has shown me the data it used, the influential factors, and the model accuracy score, so I still have a lot of trust in the system. It's also going to show me the reasoning steps if I want, so that I can verify step by step, because you could execute all those steps yourself; it's just that the agent is doing a lot of the heavy lifting here. So to wrap up demo three: what you saw now was a different type of task. This one was a goal-driven task, and it was a one-time-only task. And you saw how predictive ML became conversational, and why we believe the power of your data becomes unlocked when these predictive models become conversational. And with that, I'd like to hear one last time from Meera, the marketer. Yeah. Thank you, Audience Agent. Wow. What a game-changer. I got instant insights into my untapped audiences and was able to launch an engagement campaign. What typically used to take days or perhaps even weeks now takes just a few hours. The Audience Agent also set up proactive audience health monitoring for me, so I don't have to spend my Monday mornings debugging. I hate that. But the real magic is co-ideating with the AI to create these precise audiences based on my business goals. And I came out looking like a winner: a 3.1% conversion rate versus the 2.2% I expected. And that engagement and intent signals dataset, which I never knew existed, and being able to do it all by myself without having to wait in the queue with my data and ML teams: that, I call a wow factor. I cannot wait to try out the other Adobe Agents and share them with my teams as well. What Adobe said was indeed true: these Adobe Agents are going to supercharge and scale my customer experience management programs.

Okay. How do you see agents transforming your workflows? [Namita] We had close to 50 people responding to this. Let's see how you would like Adobe Experience Platform to transform your workflows. Most of you, 36%, said that you want to enhance decision-making with AI-driven insights; that's the largest bucket. 25% of you said you see it automating repetitive tasks and improving your efficiency, 24% of you said you see it co-ideating and optimizing strategies in real time, and 16% of you think that it'll help you reduce dependencies and accelerate the speed of execution. [Meera Srinivasan] Wonderful. Thank you all. And to wrap it up, I just wanted to highlight a few key value adds from what we presented today. You saw at the beginning how AI Assistant was a game-changer: getting fact-based responses and delivering data insights to you with full transparency, explainability, and data provenance. Now, with the AEP Agents powering that, it goes further, being proactive, whether that's proactively surfacing optimization opportunities, performing actions on your behalf, or handling cross-application workflows.

So first, let's focus on grounded and real-time context. What we mean by this is deep anchoring in Adobe's marketing expertise, in customers' data and documents, and in policies and procedures, so that the answers are accurate. Number two, not all data has to be in Adobe Experience Platform. It's an open and flexible system, so you can let the data live where it is, and we have several optimization opportunities for you to minimize data movement and maximize efficiency. And number three, on real-time context: you just saw in the third demo the goal-based audiences, where we were able to create ML models on the fly, and that's a real game-changer.

The second value add is that it's built for enterprise scale. All these agents work right out of the box, as I mentioned before: extensively pre-trained and quality-hardened. But we'd also love for you to customize the agents. There is a demo at the Adobe GenAI Pavilion where you can see how agents can be discovered, and how new agents can be registered, customized, extended, orchestrated, etcetera. So that is about extensibility at enterprise scale. And we are also exploring our agents talking to third-party agents to unlock greater use cases.

And the last is around trusted and responsible AI. This is at the very core, and it is of paramount importance to us. There are four aspects that I would like to draw your attention to. Number one is the data security and privacy that I mentioned before: customer data is not shared across organizations, and your business data is used only for training your own ML models, isolated and protected. Number two is data governance, privacy, and security: everything that is in Adobe Experience Platform also percolates to the GenAI and agentic layers. Number three is bias detection: we have a lot of models within our ML foundation that do bias detection, drift detection, and so on, to ensure that there is full transparency into how the models are being generated and used. And lastly, number four: all of the permissions you define in Adobe Experience Platform are used by the GenAI and agentic layers as well.

So with that, the future of customer experience orchestration is indeed here. Adobe Digital Agents are going to amplify and add capacity to your teams, act as a talent multiplier, co-ideate and collaborate with your practitioners, and accelerate productivity and efficiency. No more waiting on dependencies: what once used to take hours and perhaps days can now be accomplished in minutes. And with that, it's a wrap for the session, but we are here for further Q&A. And there are plenty more sessions for you to absorb; here are a few sessions that we are highlighting. Take the time to soak in these sessions, ask questions, and think about how Adobe's agents and GenAI innovations can drive impact for you and your business.

And please join us at the GenAI Pavilion. We're there to show you more demos, and we'd love to engage in a conversation. Tell us what's working, what you think about this session, and where we can help you more, because we're very excited about this technology, but it can only work if we're actually helping you accomplish something of value.

Thank you very much. We really cherish all the time that you've spent with us, and we are here for Q&A. We'll be hanging around for another 10-15 minutes. So feel free to come over and talk to us. - Thank you. - Thank you.

[Music]

Under The Hood of Adobe’s Gen AI Innovations - S653

Speakers

  • Meera Srinivasan, Sr. Director Product Management, Adobe Experience Platform & Experience Cloud AI/GenAI/Agents, Adobe
  • Horia Galatanu, Director of Product Management, Adobe


About the Session

Get an inside look at our latest AI innovations announced at Summit, driving greater value, accuracy, and adaptability. Learn how these solutions work, with examples demonstrating how we ensure valuable responses and empower you to personalize AI technologies and innovations to meet your business needs. Whether you’re looking to enhance your marketing operations, improve customer service, or scale AI-driven workflows, you’ll get actionable insights into harnessing the full potential of AI for your business.

Key takeaways:

  • Integrate AI solutions into your workflows for greater efficiency and scalability
  • Tailor AI technologies and innovations to your specific needs with proprietary data and tools
  • Discover the architecture behind our AI-driven solutions, enabling smarter, adaptive capabilities

Industry: Commerce, Consumer Goods, Financial Services, Healthcare and Life Sciences, High Tech, Industrial Manufacturing, Media, Entertainment, and Communications, IT Professional Services, Retail, Telecommunications, Travel, Hospitality, and Dining

Technical Level: General Audience

Track: Analytics, Customer Data Management, Customer Journey Management, Unified Customer Experience, Customer Acquisition, Generative AI

Presentation Style: Thought Leadership

Audience: Campaign Manager, Digital Analyst, Digital Marketer, Marketing Executive, Audience Strategist, Data Scientist, Web Marketer, Marketing Practitioner, Marketing Analyst, Marketing Operations, Business Decision Maker, Content Manager, Data Practitioner, Email Manager, IT Professional, Marketing Technologist, Omnichannel Architect, Social Strategist, Team Leader
