[Music] [Woman] Please welcome Senior Vice President, Chief Technology Officer Digital Media, Adobe, Ely Greenfield. [Music] [Ely Greenfield] Hello, everybody. Hello. Hello. Thank you very much. I'd like to get started today with a game. So this is a game that I used to play back in the 1980s when I was in middle school on an Apple II. Okay, so AI on an Apple II, imagine that. So here's the way the game works. It's called Green Globs. You can go look it up on the internet. The way it worked was the computer would put up a bunch of green dots on the screen and ask the user, please predict where the next dot goes. So let's try this out, okay? Where on that dotted line up there do you think the next dot is going to appear? Go ahead. Point, shout it out, tell the person next to you, I see some people pointing. I need more hands, more people pointing. All right.

How many of you got it right? Bunch of hands. Congratulations! You're an AI.

No joke. That is all AI is. It's just a function in space, a function that tells you, if I give you an X, can you predict where Y is? And as long as the AI knows where that function is, it's like it can predict the future. And that seems pretty simple and straightforward, right? So you're all AI experts, you can now go get a job that's, like, you know, 10 times whatever you make now, probably, because that's what it costs, right? But it seems a little too easy, right? This is something that we were able to do ourselves in middle school, but somehow we go from this, which was invented literally in the 1950s, to this. Okay, so last year, we got this AI that could answer questions, right, conjure up images out of thin air, and write functioning code. And we were blown away last year by all of these amazing demos. But as you heard David say in the keynote this morning, this year, 2024, this is the year that we have to put it all to work, transforming how we're delivering customer experiences. And so that's what we're going to spend the next hour talking about. We're going to do it in three parts. But let me say this first. My title is CTO for Digital Media. So you know what that means, right? Big nerd. Okay? So that means I'm a big believer that to really understand how to use a tool well, it helps to understand a little bit about how it works. So we're going to dive deep in this session. Promise it won't be so deep that you don't understand it if you're not a big AI expert already, but we're going to start by spending a little bit of time under the hood and talk about some of the theory, and how we get from those green globs to where we are today. And we'll talk a little bit about the questions that you should be asking of any AI solution you use, whether you're getting that from us, Adobe, or from somebody else. And finally, we'll spend the bulk of the hour actually going deeper into some of the stuff we showed you this morning around AI Assistant and Firefly, and talk about how people like you are actually using it today, using our Adobe products. So let's dive under the hood, and let's kind of take that theory a little further from where we started. So I made the claim that AI is no more complicated than this, right? Let me explain. Imagine for a moment that you are the AI. You're not as smart as you were when you were pointing, but I'm going to ask you to make a prediction anyway. Just like before, I'm going to ask you where on this dotted line the green dot goes. Now to do that, what you have to do, and what probably you already did that first time, is imagine a function somewhere that predicts where those green dots appear. Now I've been very generous. I gave you an example to start with, but you don't really have much to go on. So when you first are a newborn baby AI, you're going to take a random guess. You're going to say, maybe this is the line, right? It goes through the green dots, you got that right, but other than that, you're kind of making it up from scratch. And you guess it's going to be up at that white circle. I'm sorry. You are horribly, horribly wrong, right? Yeah. Gasp. Right, AI starts off very wrong. But this is what training is. This is what we talk about when we say training, 'cause I didn't just tell you that you're wrong. I was nice. I showed you what the right answer is. And so what we do when we train an AI is we look at the delta. We figure out how far off it was and we do a little bit of math, and then we adjust the line to compensate.
Now we're not going to go into the math of how we adjust it, but you can kind of mentally imagine it. We figure out, okay, we've got to change the slope. We've got to tilt it down a little bit. Maybe we've got to bring it down a bit, and now we've got more data. It's pointing the other way. We think we can do a better job, and so we try again. There's another dotted line up there that asks you, where is the green dot going to go here? And this time, armed with that new data, you say, I got this, right? I got my new function all lined up. I'm going to predict it's right there. I'm sure I got it right, but nope, horribly wrong again. So you do the math. You adjust your function again. This time, you're going to tilt it up. You're going to, sort of, make it a little more horizontal. And we do this over and over and over again. And literally, when we talk about these amazing scientists who are training AIs, this is what they're doing: showing an example, doing the math to adjust the function, and repeating until eventually you get enough dots up there that you actually get a really good idea of where that function goes. Now you're a great AI, right? Maybe it isn't perfect because the real world is messy, right? That line doesn't go through all the green dots. But this is a pretty good way, and probably the best we're going to do, to predict where that next green dot is going to go. I promise you that is all AI is. Finding that function, where given an input, we can predict the output. Now obviously, GenAI is applied to much more complicated use cases, right? It can create images and predict answers to questions. So how do we get from this to that? Well, we can make our AI more complicated too in ways that are kind of easy to understand. Instead of just straight lines, we can let the model use curves. The function gets a little more complicated here. It's not just, you know, y=mx+b or whatever you learned in middle school. But with these more complicated functions and curves, we can now model and predict more complicated use cases. Instead of just working in 2D, we can go to 3D, and we can say, hey, given X and Y, now predict Z. Or given X and Y, predict Z and P and Q and maybe five more numbers. We can go to 5 dimensions or 10 or 10,000 dimensions, and we can create these incredibly complex curves where we give it as many numbers as input as we want and ask it to predict many, many numbers as output. And this allows us to create these complex curves that can model complex, sophisticated real world phenomena, like, what makes a real picture, what makes an answer to a question valid. Now all of this stuff, even this, kind of, big higher dimensional stuff, was actually invented decades ago. Really, over the past 70 years, we've been inventing a bunch of this stuff. But somehow, it's only in the past few years that we've actually seen these GenAI capabilities explode. And that's really because of, well, a lot of things, but it boils down at the end to a set of three recent key inventions. And the first is one that I think you guys probably all know, which was compute, right? So compute today is 100 billion times cheaper than it was in 1980. 100 billion times cheaper, more available, faster. Please don't quote me on that number. I literally asked an AI, and that's what it told me. But it's something like that, right? And all of that compute actually allowed us to take this theory, which was only theory 70 years ago, and run it at massive scale.
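To make the training loop described above concrete, here is a toy sketch of the idea in Python: guess a line, measure how far off the prediction was, nudge the slope and intercept to shrink that error, and repeat. It is an illustration only, not Adobe's code, and real models use far more elaborate versions of this update.

```python
# Toy version of the "green globs" training loop: guess a line, measure the
# error on one example, nudge the slope and intercept, repeat.
import random

def train_line(points, steps=10_000, lr=0.01):
    m, b = random.random(), random.random()   # newborn AI: a random guess
    for _ in range(steps):
        x, y = random.choice(points)          # show it one example
        pred = m * x + b                      # where it thinks the dot goes
        error = pred - y                      # how horribly wrong it was
        m -= lr * error * x                   # "do a little bit of math"...
        b -= lr * error                       # ...and adjust the line
    return m, b

# Dots that (noisily) follow y = 2x + 1; the loop recovers roughly m=2, b=1.
dots = [(x, 2 * x + 1 + random.uniform(-0.2, 0.2)) for x in range(-5, 6)]
print(train_line(dots))
```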
And what that compute allows us to do is then take the basic theory and actually apply the art of understanding the use case, the customer, what uses we want to apply this to, and develop these new, very complex, higher dimensional functions that we can't even visualize 'cause we don't think in 10,000 dimensions. We think in two or three. But we can still figure out how to model those and create new models that can model these real world use cases. And there are two new models, and when I say model, what I really mean is functions, curves, or structures of curves, that have unlocked GenAI over the past few years. The first of these is what's called a transformer model. And this is the curve or the model that lives at the heart of so-called LLMs, large language models. This is what drives all of the recent generative work around text and data, and you'll see us get into how we use that a little bit later. The second is what's called diffusion models. This is a different architecture of AI that has been driving all of the media work that we do with images and video and others. It's at the heart of Firefly and a lot of other pieces out there. So we're going to dig a little bit deeper now. One more section of theory, and we're going to go from these multidimensional green dots to how we actually build the GenAI that we're all familiar with. So imagine for a moment that I want to create a language model. This language model is going to be one of the simpler ones. All I want to do is pass it the first half of a sentence and have it predict what the second half is, right? So the first thing we have to do is we have to teach our model to understand text. You know, we just got done talking about how AIs are functions, and functions operate on numbers. So somehow, we have to turn this text into numbers. Now that's actually not that hard. It can be as simple as, I'm going to take every word in the English language, and I'm going to assign a random number to it. And so if I want to pass "the quick brown fox jumped" to an AI, I just turn that into 3, 7, 4, 9, 2. You know, these numbers are random, but as long as I'm consistent, I can pass those to the AI and it can start to learn on them. Now in the real world, all of this stuff actually gets more complicated 'cause we're doing lots of optimization and efficiency on it. But at the high level, this is how it works. So I take this string, I turn it into numbers, and I pass it to my AI. And once we feed that in, we can ask it to spit out five more numbers, right, which represent what the next words are in this sentence. This is no different than what we were doing with those green dots. And just like those green dots, it's going to fail miserably at this because it only has one example. It's going to say, oh, yeah, quick brown fox jumped, apple, schoolhouse, purple fish, something like that, right? But we tell it the right answer, over the lazy dog. We allow it to correct. It runs that same math, and we give it more examples. This time, thousands, millions, literally, trillions of examples over and over again. And by the time we're done, we can show it that half a sentence, and after correcting it with all of those examples, it figures out, oh, yeah, okay, over the lazy dog, which is kind of mind-blowing and at the same time, exactly what we were seeing with those lines, you know, back in middle school. What's amazing about this is it's not just about finishing sentences.
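As a concrete picture of "turn the text into numbers," here is a toy word-to-number mapping in Python. Real language models use learned sub-word tokenizers rather than a whole-word lookup like this, so treat it purely as a sketch of the idea.

```python
# Toy illustration: give every word a consistent integer ID, so a sentence
# becomes a list of numbers a model can take as input.
corpus = "the quick brown fox jumped over the lazy dog"

vocab = {}                                    # word -> integer ID
for word in corpus.split():
    vocab.setdefault(word, len(vocab))

def encode(text):
    return [vocab[w] for w in text.split()]

def decode(ids):
    id_to_word = {i: w for w, i in vocab.items()}
    return " ".join(id_to_word[i] for i in ids)

prompt_ids = encode("the quick brown fox jumped")
print(prompt_ids)           # e.g. [0, 1, 2, 3, 4] -- arbitrary but consistent
print(decode(prompt_ids))   # round-trips back to the words
```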
As you guys know, some smart person came along and took this model, this approach, and said, "What if I just feed it questions and train it on answers?" It can predict the answers to questions. "What if I feed it creative briefs and I feed it marketing copy?" Again, just turn the text into numbers. It can actually predict good marketing copy. Eventually, we found that by feeding it the whole of documented human knowledge as represented by the internet, we can actually teach it to model an unending number of amazing powerful use cases, which we are just at the beginning of figuring out. So that's how these language models work and how they connect back to that basic theory. Let's talk about images. So imagine I want to build Firefly. Firefly takes a prompt and imagines a beautiful image to match. Okay, so how are we going to build that? First, again, we have to teach our model to understand pictures by turning it into numbers. That's not hard to do. A picture is made up of pixels. Pixels are made up of colors, and colors are just numbers. So when you look at this picture, it's already a few thousand numbers that we can just pass off to our big function and, say, start predicting things from this. Now I'm going to do something a little weird here. Rather than teaching this model to imagine an image, what I'm going to do is teach it to denoise an image. So here's what that means. Well, a noisy image is a messy image. Rather than just being a perfect beautiful picture of a smiley face, it's one that I've added some junk to, right? Literally sprinkled it with random pixels, made it messy, made it, sort of, not the kind of thing you'd actually want to look at and say, "Yeah, I'm going to put that in my marketing campaign." But if I take that and I pass it to my AI and I say, given this, predict what the clean image should be, it will fail miserably, right? Hopefully, this is starting to sound boring now, which is the goal. But then I show it the clean image 'cause I had that, 'cause I actually started with that. And I do that again and again and again, and I show it hundreds of millions of images, and eventually, it can predict the clean image, right? Now again, somebody came along, some very smart person, and figured out this model can actually do something amazing. They came along and they fed it an image of completely random noise. No recognizable image there at all. And like magic, this still generates a clean image because it had been taught so well and is such an eager AI that it can still take that and somehow figure out, "Okay, I'm going to map this to a beautiful picture of a dove or whatever." That actually looks like a real photo or illustration or whatever we've trained it on, which is great. We've got what we want now, right? We've got an AI that can actually take random noise and just imagine a picture out of thin air. The problem is we can't control it, right? I wanted a smiley. I got ice cream because it just gave it random noise, and it just picked an image and said, there you go. So we have to solve that problem. So for that, we're going to train up a second AI model, and this one is going to kind of be the adult supervision in the room. It's going to learn how to tell the first AI whether it created the right image or not. So repeat the process, right? And this is kind of what the discovery process of AI is. What else can we do with this now that we've got these amazing tools? 
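Here is a small sketch of the denoising setup just described, framed as data: take a clean "image" (an array of numbers), sprinkle random noise on it, and keep the noisy/clean pair as a training example; generation then starts from pure noise. Real diffusion training adds noise in carefully scheduled steps, so this is an illustration of the idea only.

```python
# Toy sketch: build (noisy, clean) training pairs for a denoiser, and note
# that generation starts from pure noise with no underlying picture at all.
import numpy as np

rng = np.random.default_rng(0)

def make_training_pair(clean_image, noise_level=0.5):
    noise = rng.normal(scale=noise_level, size=clean_image.shape)
    noisy_image = clean_image + noise         # "sprinkled with random pixels"
    return noisy_image, clean_image           # model input -> training target

clean = rng.uniform(size=(8, 8))              # stand-in for an 8x8 picture
noisy, target = make_training_pair(clean)

# At generation time there is no clean image to recover -- just noise that
# the trained model is asked to "clean up" into something plausible.
pure_noise = rng.normal(size=(8, 8))
```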
For this model, we're going to feed it two sets of numbers, one of which represents an image, and the other one is text that represents the caption. We're going to ask this model to predict just a single number, 0 to 100. How well does this caption describe this image? You know the drill, right? You know, it's going to suck at this, but we feed it hundreds of millions of image caption pairs, some of which are good, they match, some of which don't match. But every time we tell it what that number should be, and it eventually learns, yes, I can actually tell you whether this text describes this image, which is mind blowing that it understands the relationship between image and text, but it's just that repetitive process of adjusting the curve and adjusting the curve and adjusting the curve at massive scale. And so we can take these two AIs and we put them together, and now we just start with a caption and generate completely random noise. It doesn't matter what it is. We hand that noise off to the first model, and we say, "Please clean this up into a beautiful photograph." And meanwhile, we hand the caption off to the second model, and we say, "Hey, can you keep an eye on that first one?" And as he is cleaning and creating that image, please just keep nudging him in the right direction and make sure he produces an image that actually matches the caption.
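Putting the two models together, the generation loop described above looks roughly like the sketch below. Both denoise_step and caption_match_score are hypothetical stand-ins for the two trained models (the denoiser and the image/caption scorer); real diffusion samplers and guidance methods are considerably more involved.

```python
# Rough sketch of the two-model loop: start from random noise, let model 1
# clean it up step by step, and let model 2's caption score nudge each step.
# Both "models" here are hypothetical stand-ins, not trained networks.
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(image):
    # Stand-in for model 1: a real denoiser predicts a slightly cleaner image.
    return image * 0.9

def caption_match_score(image, caption):
    # Stand-in for model 2: a real scorer returns 0-100 for "does this
    # caption describe this image?"
    return 50.0

def generate(caption, steps=50, guidance=0.05):
    image = rng.normal(size=(8, 8))           # completely random noise
    for _ in range(steps):
        cleaner = denoise_step(image)         # model 1: clean it up a little
        score = caption_match_score(cleaner, caption)
        # Model 2's feedback scales the step -- a crude stand-in for keeping
        # the image moving toward something that matches the caption.
        image = image + (cleaner - image) * (0.5 + guidance * score / 100)
    return image

picture = generate("a beautiful photograph of a dove")
```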

That's all it is. This is how Firefly works. This is how most of our generative media models work at their core. They've obviously evolved a lot beyond this, and, you know, no disrespect to my amazing team of AI scientists. They do amazing work on top of these foundational models. A lot of it is about how do we make this really operate in the real world at scale? Even with all of the compute in the world, this stuff is still really expensive. So we've got to figure out how to get the theory to fit into practice. But that does bring us to where we are today. Now we've got these amazing technologies that we can put into our tools, and hopefully do it in a way that's actually going to add value. So let's get into that. We're about to stop geeking out here. Actually, that's not true. Shiv's going to geek out too. But we're going to start looking at AI Assistant and Firefly and see more than what we saw this morning about how this stuff shows up in our products. But as we do that, as you're watching, I want you to be asking yourself two questions. The first is the one I just alluded to. How will these amazing technologies actually deliver real value in my business today, right? Got to get started, it's 2024. And I think for most of you in the audience, the answer is this. AI helps creative people be more productive. And when I say creative people, I don't just mean your Photoshop users. I mean every skilled professional in your company who uses creativity on the job. Yes, that means the Photoshop users, but it also means the marketers who are writing queries, creating user journeys, finding customer insights, because every one of those people, they are wasting their time today. Instead of putting that creativity to work all of the time, they are spending a significant amount of their time on the grind of production. And the real value of GenAI is not creativity so much as, you know, we talk about these as creative AIs, but those models aren't really creative, not the way a human is. It's about leveraging that AI to keep your best people using their creativity instead of spending time on the rote production. So keep an eye out for these three ways we do that. The first is by accelerating the work of your most skilled people. These are people who can get the job done today, but they're overloaded 'cause there's just way too much work out there, and AI can help them do it faster and easier by leveraging their skills for the more high-value work and offloading that rote production work to the AI as an assistant. The second is by enabling the people out there who have the creativity and the drive, but they don't necessarily have the technical production skills yet to be successful here. For those people, AI can just push them over that line where they can actually be enabled to do some of this work instead of being bottlenecked by that small group of highly skilled technical production people whose time they probably can't get. And third is by actually offloading some of the most common tasks to AI and automation altogether, because many of these rote production tasks today that no human really wants to spend their time on can actually be fully handled by an AI today, right? Now I wouldn't hand it off to them completely except in some limited cases, right? This stuff is amazing, but you still want humans in the loop to do approval and quality assurance.
But you can massively scale up some of these tasks and do an order of magnitude more by handing some of that off to the AI today. Now productivity sounds like kind of a boring value in the face of all these amazing, you know, visuals and demos we've seen. But remember, by unlocking the skilled capacity in your workforce, your colleagues, your employees, that means a way higher volume of better experiences that are more personalized and targeted, which leads to better results for both you and your customers. And that, I think, is pretty exciting. So the second question I want you to ask as we dive into the products is this. What steps is my AI partner taking, whether that's going to be us, Adobe, or someone else, should be us, spoiler alert, to safeguard their AI, and therefore, safeguard me, and my company, and my brand? And in particular, you should be asking these three things. What is the provenance of their training data? What are they training on, and how did they acquire it, and what controls have they put in place around it? Because as you saw, these AIs, they learn from the training data. It's data in, data out, which means garbage in, garbage out. So you really have to be thinking about what are they training on, and how does that affect the answers I get? What guardrails have they put around the content being generated? Do they have adequate protections to prevent accidental harm and bias, or unwanted IP showing up in my content, or other content creeping in? Do they ground their answers in truth to guard against hallucination and the missteps that it can cause? And finally, and probably most importantly, can you trust them with your data? Your prompts, your responses, and any custom training you do, which is where this is going, as, you know, you saw in the keynote this morning. These are likely some of the most valuable assets your company possesses. So do they have the right controls and architecture and trust in place to keep your data separate, safe, and secure? So keep those two questions in mind, because now we're going to dive into the details, and we're going to look a little bit deeper at AI Assistant, what you saw in the keynote this morning, and how AI is showing up in our digital experience products. And to do that, I'd like to invite my colleague who I work with every day on this stuff, Shiv, who's VP of Engineering for the Digital Experience Platform, up onto the stage.

[Shivakumar Vaithyanathan] Thank you, Ely. I've heard him a few times. Every time I listen to him, I learn something else. So let me briefly start by saying, when I started grad school, I remember working with neural networks, and they were still in their infancy. But look where we are now. Models with billions of parameters in the hands of everybody, creating images, in some cases, even being a therapist. But that is only part of the story. Large language models themselves are only part of the story. The other part is, how do we make these large language models work in an enterprise setting? And that requires the context of enterprise data, and that's what I'm going to talk to you about today. You, our customers, have already entrusted us with your data. Now it's our responsibility to give you more value out of that same data, and we will do that by marrying large language models with this data. And this marriage is accomplished through the AI Assistant. This morning, we announced the AI Assistant, which is a natural language interface to your data where you can ask questions in the context of the product. You can get answers with verified correctness, and you will continue to get more out of it as you work with it more. And the reason I can say that with relative confidence is because we have seen that in the past. Any new interface, when you start using it and as you continue to use it, trains you a little bit. Take search engines. You started working with search engines, and I know every one of you had this experience where you type in a search query, and then you don't get back what you want, and you change the search query. The next time around, when you go ahead and type in a search query, you're slightly smarter. The same thing happened with smartphones. And similarly, with AI Assistant, over a period of time, the enterprise user will continue to get more and more out of it.

Let me first start by showing you what powers the inside of an AI Assistant.

It's easy now to fall into the trap of saying: we have large language models, which have billions of parameters, and we have enterprise data, so maybe the AI Assistant is simply a small layer, a very thin layer, on top of the large language models.

Let me ask you to think a little bit for a moment. If I told you that the human brain only needs to know language and nothing else, imagine what your conversations with people would be like. The only thing you would know is language: no reasoning power, none of the other things that are needed to make the brain work, none of the rest of the processing power. Similarly, large language models are a piece in the larger puzzle. And the engine that powers the AI Assistant is what we refer to as the Generative Experience Models. It's a collection of models, and this collection of models together powers the AI Assistant.

It's broadly broken down into base models, and the base models are models that Adobe trains on Adobe proprietary data and knowledge. And then there are custom models, which are models for each and every individual customer to make sure that the idiosyncrasies in your data are captured appropriately. And finally, there's a decision support layer, a decision services layer, which is responsible for making decisions. And later in the talk, I will walk you through an example in which I will connect all of this. And this decision services layer is the one that also orchestrates the final responses.

The base and custom models themselves are broken down further into structural and semantic. I was asked to be a little geeky in this conversation, so some of these words, again, I will connect back in the example. Structural models are those that capture the linguistic structure of the way in which the prompts are expressed. And there is a different structure that we capture from the data: proprietary indices that we hold, which allow us to do certain matching. The semantic models are propensity models, which again I will connect to the example, and other models like recommendation models, which are statistical in nature and capture the semantics in aggregate. And the interplay of all these models is what finally gives the AI Assistant its real power.
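As a rough mental model of this decomposition, the sketch below wires together stand-ins for the base structural and semantic models, the per-customer custom models, and the decision services layer. Every class, method, and string here is an illustrative assumption, not Adobe's actual architecture or API.

```python
# Illustrative-only sketch of the decomposition described above.
class BaseStructuralModel:
    """Adobe-trained: captures the linguistic structure of prompts."""
    def extract_clauses(self, prompt):
        return [c.strip() + "?" for c in prompt.split("?") if c.strip()]

class BaseSemanticModel:
    """Adobe-trained: answers how-to questions from Adobe docs and knowledge."""
    def synthesize(self, clause):
        return {"answer": f"(synthesized from docs) {clause}", "sources": ["doc-1", "doc-2"]}

class CustomStructuralModel:
    """Per-customer: proprietary indices over that customer's own schema."""
    def match_events(self, clause):
        return ["has purchased", "page viewed", "time spent on page"]

class CustomSemanticModel:
    """Per-customer: propensity and recommendation models over their data."""
    def rank(self, candidates):
        return sorted(candidates)

class DecisionServices:
    """Decides what to surface and assembles the final, traceable response."""
    def orchestrate(self, doc_answers, event_matches):
        return {"how_to": doc_answers, "grounded_in_your_data": event_matches}

def answer(prompt):
    clauses = BaseStructuralModel().extract_clauses(prompt)
    docs = [BaseSemanticModel().synthesize(c) for c in clauses]
    events = CustomSemanticModel().rank(CustomStructuralModel().match_events(clauses[-1]))
    return DecisionServices().orchestrate(docs, events)

print(answer("How can I build an audience? What behavioral events can I use?"))
```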

So the next question is why this elaborate decomposition of a larger engine, and why even bother to do all of this to power the AI Assistant? There are three major reasons. You see all three on the screen, data safety, correctness, and transparency. Data safety because we want to keep all data separate from each other. Every single customer's data is separate. Adobe data is kept separate so that there is no commingling. There is no data leakage. Second, correctness. The decomposition into all of those finer models that I mentioned within the base and the custom models allows us to be able to do a lot of tweaking to ensure that the correctness is at the level that it needs to be. And finally, the transparency, again, because of all of the decomposition, we hold the entire provenance inside for every operation that we do. And that provenance is precisely what we can give back to you as transparency and verifiability of the results that get shown to you. So up till now, we've been a little abstract, slightly professorial, and I'm going to now walk through an example. Our Lead Product Manager, Rachel Hanessian, Product Manager for GenAI, will join me on stage. We will now do a little bit of a show where we'll go through an example, and then we will try to connect the back end that I just showed you. Rachel? [Rachel Hanessian] Thanks, Shiv. We've been engaging customers in a private preview of AI Assistant, and they've seen the potential that this can bring. Nothing makes me happier as a product manager than seeing the products we build solve real problems for our customers. I'm excited to show you an example of AI Assistant, which is inspired by how our early adopter customers are using it today. Imagine now for a moment that I'm not Rachel, the product manager, but Rachel, the marketer, and I work for a retail department store. I'm going to show how AI Assistant can dramatically increase your productivity and decrease the number of requests you need to make to your data analyst. Now I'm working on a campaign to encourage shoppers who have purchased once to purchase a second time. I'm hoping that this will help them build a habit with the brand. To start, I'll need to create an audience. I'll use AI Assistant. In the past, I would have to go through documentation resources to figure this out. I won't have to do that today. I also don't need to bother anyone on my team to try to figure out what behavioral events to use. Instead, AI Assistant can help me be self-served. I'll pop it out so everyone can see, and I'll start by asking the AI Assistant, how can I build an audience of shoppers who have recently made only a first purchase and who have returned to my website since then? What behavioral events can I use? This is pretty cool, right? AI Assistant gives me just the information I need to build an audience. It also gives me a helpful video tutorial in case I want to learn more. AI Assistant also tells me that based on my data, there are some specific behavioral events and metrics that I could use in my audience. It tells me the has purchased event, the page viewed event, and even the time spent on page metric would be useful for this specific use case. Without AI Assistant, I certainly would have had to connect with my data analyst to get to this detail of information. It would have taken me hours easily instead of the seconds that you just saw. Another thing I love about AI Assistant is just how transparent it is. It helps me really trust the answers. 
If I select Sources, I can see the exact resource documents that this information came from. And scrolling up, I see that there's inline citation so that I can learn more about where those specific sentences came from. Additionally, if I select this "i" icon, AI Assistant tells me exactly how my question was interpreted, including how my term recently was quantified. There are no misunderstandings here. I can even see the step by step breakdown of how AI Assistant grounded the answer in my specific data.
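One way to picture the transparency being shown here: keep a record of which source passage backs each sentence, plus how vague terms like "recently" were interpreted. The sketch below is a hedged illustration with invented document names and an assumed 30-day window, not the product's actual mechanism.

```python
# Hedged illustration of answer provenance: each sentence keeps a pointer to
# the passage it came from, and fuzzy prompt terms are logged with the
# interpretation that was applied. All names and values are invented.
from dataclasses import dataclass, field

@dataclass
class AssistantResponse:
    sentences: list = field(default_factory=list)       # (text, source_id) pairs
    interpretation: dict = field(default_factory=dict)  # how fuzzy terms were read

def answer_with_provenance(prompt, passages):
    resp = AssistantResponse()
    if "recently" in prompt:
        resp.interpretation["recently"] = "last 30 days (assumed default window)"
    for doc_id, text in passages:
        # A real assistant synthesizes across passages with an LLM; here we
        # simply attach each passage to its source so inline citations work.
        resp.sentences.append((text, doc_id))
    return resp

passages = [
    ("docs/audiences", "Build an audience from behavioral events and metrics."),
    ("docs/events", "Useful events include 'has purchased' and 'page viewed'."),
]
resp = answer_with_provenance("shoppers who recently made only a first purchase", passages)
for text, source in resp.sentences:
    print(f"{text} [{source}]")
print(resp.interpretation)
```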

These behavioral signals are an incredible starting point. I mean, there were thousands of events that I could have used for this audience, and AI Assistant helped me identify just the most valuable ones for my particular use case. Scrolling down, I do see that there are some suggestions here and I'm interested to look into these next. But first, I'll hand it off to Shiv to explain what's going on behind the scenes to power the experience that you just saw.

[Shivakumar Vaithyanathan] So the first thing that was obviously visible was the fact that it was an easy interface for Rachel to work with the assistant. Simply provide a prompt. And once the prompt is provided, the response comes back from the assistant. But let's go a little bit more into detail as to how that got done, and connect it back to the generative experience models that we saw earlier, okay? There are two questions in the prompt. There was one question which Rachel asked, which was to say, how do I build an audience? And the second question was, what behavioral events should I use to build the audience? Those are very different sounding questions. And the response had three parts to it that she walked through. One, which was a response to the first part of the prompt, second, which was a response to the second part of the prompt, and the third was an explanation of what exactly was being shown on the screen. So the first part was synthesized from multiple documents and the response given back. We are able to do that because the Adobe base models have been fine-tuned on Adobe data and Adobe proprietary knowledge. So it speaks Experience Platform semantics. The second response is a little bit more involved. First, the machine has to identify what the appropriate clauses are in the actual prompt. Then it has to know, amongst, as she mentioned, thousands of possible behavioral events, which ones to match against and get the response back. That matching is the interplay that I mentioned between the proprietary indices that we build for each and every individual customer with their schema and the actual Adobe linguistic structural models, which are able to do the appropriate linguistic extraction from the prompt itself. The last part, which is the transparency that we referred to earlier on, there are two aspects to it here. One, from the collection of documents, remember that an answer was synthesized. So we need to now provide back to the user individual citations for where all of the answer was retrieved from, so that you don't have to go search and hunt and find, number one. And then the second part, which is equally important, is there was a structural match done, there was a match done with the underlying schema. We need to explain to you exactly what we did so that, again, you don't go off getting confused. Back to Rachel, who will continue. [Rachel Hanessian] Thanks, Shiv. A moment ago, you saw how I, as Rachel the marketer, asked the AI Assistant a question and it provided me with data and information so that I could build an audience. Now I'll show how AI Assistant can be proactive in giving me tips to do my job more efficiently and effectively. Now before I move ahead to create that audience to target shoppers who are ripe for a second purchase, I notice that AI Assistant is suggesting that I should look at existing audiences that use purchase, pageview, and time on page behavioral events. Now I know AI Assistant is aware of all of my data and is aware of everything that's going on within my organization and Experience Platform, so it's able to alert me that similar audiences might already exist. I'll take this suggestion.

This is fantastic. AI Assistant is able to quickly find a list of similar audiences for me. This is honestly a game changer. And looking a bit closely, I do notice that this first one, which is a pretty high match for what I need, was actually created by me last year. I had completely forgotten about this audience. I guess between Assistant and me, it's good that one of us has a good memory, right? I'll need to use this for my campaign, but I'd like to start a little small and this audience size is a bit too big. I'll see if AI Assistant can help me filter it down based on propensity. I'll ask, show me the size of the first audience broken down by propensity.

Amazing. AI Assistant presents me with a clear visual of how layering on propensity would impact the audience size. I mean, it was able to identify subsegments in seconds. It's pretty much like having an on screen data scientist. Scrolling down, I notice also that there are some suggested actions from AI Assistant. I'll select this one to build the audience using the high filter.

Perfect. AI Assistant created an audience for me. That was pretty easy. And I can see exactly how my original audience was changed by adding a propensity attribute. This level of detail helps me understand and really trust the answers so that I can act on them. Before I go further, I'll hand it back to Shiv to explain how, to quote one of our early adopters, AI Assistant is able to provide "this kind of shortcut for my life." [Shivakumar Vaithyanathan] Okay. This was a photograph that we took when we were starting to build this, where Rachel at some point was confused. She said, "How should I move further?" And she was trying to figure out what to do. And then, because the AI Assistant for the particular sandbox that we were using has access to all of the enterprise data, it can do the underlying semantic matching across audiences, and is therefore able to proactively suggest the audiences. And this is what she looked like afterwards. Two things. One, I promised earlier on that I would point out where the decision services layer gets used. Among the underlying possible audiences and the matches that can be done, the decision services layer decides which ones to actually surface as part of the suggestion, and that's one of the objectives of the decision services layer. The other point I'd like to emphasize is that Rachel's example is just an example. Every single one of you has your own audiences, and we didn't build a machine for one example. So for all of your audiences, we need to be able to do the same thing. And not just audiences, indeed, going forward, as you heard this morning, we will be doing this for journeys also.
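A simple way to picture the "similar audiences" suggestion Shiv just described: compare the events the new audience would use against the events each saved audience already uses, and let a decision step pick the strongest matches to surface. The Jaccard similarity and the 0.5 threshold below are assumptions for illustration, not the product's actual matching.

```python
# Illustrative sketch: suggest existing audiences whose behavioral events
# overlap strongly with the ones the new audience would use.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def suggest_similar_audiences(requested_events, saved_audiences, max_suggestions=3):
    scored = [
        (jaccard(requested_events, aud["events"]), aud["name"])
        for aud in saved_audiences
    ]
    # "Decision services" step: keep only strong matches, best first.
    strong = [s for s in scored if s[0] >= 0.5]
    return sorted(strong, reverse=True)[:max_suggestions]

saved = [
    {"name": "Repeat-purchase prospects 2023",
     "events": ["has purchased", "page viewed", "time spent on page"]},
    {"name": "Cart abandoners",
     "events": ["cart add", "checkout started"]},
]
print(suggest_similar_audiences(["has purchased", "page viewed"], saved))
```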

Given the combinatorics of what makes up audiences and what makes up journeys, the question that we should be asking ourselves is, can the AI Assistant answer a very, very large number of questions? And the answer is yes. And part of the reason for the open-ended nature of the AI Assistant is to enable people to go ahead and express a very large number of such questions. Let's continue with Rachel. [Rachel Hanessian] Thanks, Shiv. It's incredible that the AI Assistant can answer basically an infinite number of questions, because trust me, I have at least 99 problems to solve, and the one that you just saw, that was just one of them. At the end of the last example, I showed how AI Assistant was able to help me tailor my audience based on propensities to get me ready to run my campaign. Now I'll show how AI Assistant can be a thought partner in helping me decide on tactics for personalization of my customer journeys. Now I need to personalize my strategy here based on what my shoppers have actually purchased. I'll ask AI Assistant, within my defined audience, what is the distribution of product category interest? Okay, so it looks like beauty, accessories, and shoes make up most of the product category interest. I think I'll start there. Now I know that the second purchase is likely to happen at a specific time after the first purchase and that'll differ by product category, but I'm not sure exactly what that time interval is. I wonder if this is something that the AI Assistant can help with. I'll ask, for the three top product categories, what is the average time gap between first and second purchase? I mean, AI Assistant really doesn't disappoint. Based on historical purchase patterns of my customer base, AI Assistant was able to compute the average times between first and second purchase for each product category. I mean, this is unlocking completely new possibilities for me. Now I not only know who to target but exactly when to target them. I can use Adobe Journey Optimizer to send them a message at the optimal time. I'll hand it back to Shiv now to unpack what you just saw. [Shivakumar Vaithyanathan] So I saw some of the heads shaking, and I sincerely hope you're enjoying this at least half as much as we enjoyed putting it together. So let's look at what Rachel did right now.

She asked a question first, and the response from AI Assistant triggered another idea in her head, and then she went off and asked another question, based on which she got what she wanted, okay? The first one, she asked for a distribution of interest over product categories. And the second one: when she saw the response, which you did too, the distribution tapered off, and for the three top categories where the probability mass was, she said, "I want those, and I want to go find out for them what the average time is between the first and the second purchases." If we had to do that today, the way in which it would probably happen is somebody like Rachel takes this question, goes off to a data architect, data engineer, analyst, not sure who, and then communicates the question to them. They understand it. They go off with their backlog, hours, days, weeks, come back with a response. And once the response comes back, then Rachel will look at the answers in exactly the way she did, and then say, "Oh, I have my second idea." She might have even forgotten by that time, but assuming she remembers, she would then give the second question to the same person or some other person. They would go back. They would find the answers, come back, and Rachel would have gotten to where she got to. Now two ideas, possibly hours, maybe more, came down to a really short period of time. And in the work we did with our early adopters, we actually noticed precisely this: for questions of this sort and more, the amount of time that it would have originally taken in the workflow dramatically dropped from hours to minutes.
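For readers who want to see what the second question amounts to in plain analytics terms, here is a hedged pandas sketch that computes the average gap between first and second purchase per product category. The column names and the tiny sample frame are invented for illustration; this is roughly the computation the assistant's answer corresponds to, not its actual implementation.

```python
# Hedged sketch of the underlying analytics for "average time gap between
# first and second purchase per product category."
import pandas as pd

purchases = pd.DataFrame({
    "customer_id":  [1, 1, 2, 2, 3, 3],
    "category":     ["beauty", "beauty", "shoes", "shoes", "beauty", "beauty"],
    "purchased_at": pd.to_datetime([
        "2024-01-01", "2024-01-21",
        "2024-01-05", "2024-02-14",
        "2024-01-10", "2024-01-25",
    ]),
})

def avg_gap_first_to_second(df):
    df = df.sort_values(["customer_id", "category", "purchased_at"]).copy()
    # Time since the customer's previous purchase in the same category.
    df["gap"] = df.groupby(["customer_id", "category"])["purchased_at"].diff()
    # The gap recorded on each customer's second purchase row is exactly
    # the first-to-second interval; average it per category.
    second = df.groupby(["customer_id", "category"]).nth(1)
    return second.groupby("category")["gap"].mean()

print(avg_gap_first_to_second(purchases))
```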

So I want to take a moment to thank everybody for having sat through our entire show. Thank you very much. Thank you. [Music] Good job. Well done.

[Ely Greenfield] Oh, we're not done yet, everybody. We're not done. We got a lot to go. If you've got to run, we appreciate it. But so, okay, thank you very much, Shiv. Thank you very much, Rachel, for that view into what is going on with AI Assistant and how it works under the hood. On the other side of the house, we've got Adobe Firefly for creating amazing content, right? And we launched Firefly at Summit last year. You all have created, as you saw, over 6.5 billion assets with Firefly, which has been amazing. Now this morning, you saw we announced a number of new features rolling out that allow you to customize Firefly to produce specific images and styles in your brand for your marketing campaigns. And so we've been developing these capabilities over the past few months with some key partners and early adopters. So what we decided to do here is we thought we'd bring up an actual customer who's been using custom models in Firefly as part of that pilot program with us to share a little bit about their experiences working with Firefly customization. So I'd like to introduce JJ Camara from Tapestry, and invite her up on stage today.

[Music] Beautiful.

[JJ Camara] Hi. [Ely Greenfield] JJ, thank you for joining me up on stage. Really great to have you here. [JJ Camara] Thanks. Thanks for inviting me. [Ely Greenfield] So you and I spent some time together a few months ago when I was in New York, talking about, sort of, your experience in the program, so very thrilled that you were able to be with us today. [JJ Camara] Thanks. [Ely Greenfield] So why don't you just start by sharing with us a little bit about who Tapestry is and what your team does? [JJ Camara] Sure. So my name is JJ. I'm the Senior Director of Digital Product Creation at Tapestry, serving our house of brands: Coach, Kate Spade, and Stuart Weitzman.

I lead a team of 3D and 3D Print Prototype creators that support design and product development, creating assets to support concept through commercialization. We also serve as digital advocates, discovering new ways of working, doing pilots and POCs. We did a pilot with Adobe Firefly over the summer, before its release, and other pilots. And we also work with product development, focused on concept through commercialization. So you mentioned you guys are responsible for creating assets. So can you tell me a little bit more about the kinds of assets you're creating and who those go to, what they get used for? Yeah. Sure. So we focus our products on design and product development. So working really closely with those two groups, we have assets that are amazing and are twins of our actual products. So although our focus is on concept through commercialization, they also are perfect for customer-facing use cases and opportunities. So you say "our products." Can you give us, you know, some examples of what those are? Yeah. Sure. So again, we're Tapestry, so we support Coach, Kate Spade, and Stuart Weitzman. On the screen, these are 3D models visualized in Substance. We do every product category from accessories to footwear to ready-to-wear and everything in between. So you've got three brands in Tapestry, and I know how the world of fashion works. That sounds like that's a lot of content that you guys have to make. Yeah. We have a lot of fun and we have a lot of content. So give me an example of a type of content, or a product maybe, that you have to create content for and what that content process looks like. Yep. I happen to have a slide for that. Sure. Handy, we have these here. I know. So again, currently our focus is on design and product development, and, again, we make these amazing digital twins. And because of that, these customer-facing use cases have come up, and these are all of our cross-functional teams that are asking for our assets. Everything from global merchandising to strategy, creating TikToks and other social media, to data science, who want to marry data with visuals, to sustainability, which is just part of their circular workflow. So if I was a Coach customer, let's say, and I wanted to buy-- Wait. You're not a Coach customer. - I am absolutely a customer. - We're going shopping after this. Yeah. Yeah. No. - I saw the store out in the lobby. - Yeah. - All of our stores here. - We got a plan. You all can come with us. You got to pay for yourself, though. So let's-- When we go out to the store and we do our shopping after this. Yep. Give me an example of maybe like a handbag or something that I might create-- I happen to have a slide for that too. Oh, look at that. Isn't that handy? So this is one of our icon styles, the Tabby Shoulder Bag. This also is a 3D model, visualized in Adobe Substance. So what we focus on is really the style, the shape, the details to develop the product and then commercialize it, create the sample, and then eventually, the retail product in store. But what we need to do is create assets at what we call in the industry the SKU level, so that's every material, every color, every hardware, all of the Coach codes. Sorry. What is a Coach code? I'm glad you asked. So the Coach codes are what make a Coach bag a Coach bag. So not only the craftsmanship and the leather, but also all of the details, the hardware, the materials. It's not a Coach bag without the hang tag, without the tag.
So basically, you can recognize a Coach bag from 10 feet away and know it's a Coach product. Got it. So those are some of the most important brand details that have to make it into both the physical product and some of the digital assets that your team is creating. Absolutely. And the detail, it's all in the detail. Right. So you have to take this one bag and create assets for all of these different SKUs. You have to do this for multiple seasons, multiple product lines-- Multiple brands. A lot of content I assume, right? Yeah, a lot of content. More than we can keep up with today. Which I'm guessing is something that a lot of people in the-- Anyone else? Yeah, can sympathize with. So okay, so you joined us in the Firefly pilot originally, and then the custom model pilot so-- Yeah. Share with us a little bit about what you were looking for when you started to look at bringing GenAI into the workflow. Yeah. So as soon as I got my hands on Firefly, I took my dog, Mimi, I made her into a superhero. Sorry. This is a Coach dog or-- She's missing the Coach codes. Yeah, she needs the hang tag right there on the collar. That'd be great. But I think I hope everyone can sympathize with me that the first thing you did was make really fun stuff. Puppies, kids, everything. But then the challenge was we had lots of fun, but how do we take it from a toy to a tool? So before we brought you into the custom model pilot, did you try making Coach products in Firefly? Yes, we did. And how did that work out? So we started with this prompt, Tabby handbag made of shearling fluffy material. And this is in Firefly 2. So that's a beautiful handbag, but now that I'm an expert, I'm going to say-- - What's it missing? - I don't see any codes on that. You don't see the hang tag, for sure. - Yep. - Right, the logo isn't on the front. - I think the hang tag you mentioned-- - Yeah. Sorry. I don't know if those are all of them, but-- Okay, so this sounds like-- Adorable bag. Yeah. Not a Coach bag. Maybe you guys would make this, but you need to add the hang tag. Yeah. Okay, so then we got you into the custom model. - Yes. - Alpha into the pilot. So what was that process like creating custom models as part of the Firefly? Well, you guys made it so easy for us. We shared about and David hit upon it, in the keynote today. We shared about 12 to 15 assets, digital assets, visuals, and then we did the Excel spreadsheet with all of the prompting that tied back to our codes. So it all-- Sorry, I want to repeat that for the audience. Yeah. It only took about 12 to 15 assets? Yeah. Okay, you didn't need thousands of pictures of a Tabby bag? Nope. - No. - That's amazing. That's amazing. - This is like a oh and ah moment. - Yeah. - Okay. - Oh, no. We'll wait. And how did it work out? Actually, let me ask you another question. You put those assets in, how long does it take to train up a custom model? So when we first got started, the Adobe team really facilitated it for us. And we would give them all of the assets, they would do it behind the scenes, and then we would review them. Last week, we started, hands to keyboard and getting involved ourselves. But our, I mean, almost immediately, we had pretty amazing results that got us really excited, and I'd love to share. - Sure. Yeah. Go ahead. - Yeah. So we're going to go back to the prompt, Tabby handbag made of shearling fluffy material, and these are the bags. Yeah. I see codes. You see the codes, our sig c hardware, our hang tag. Very cool. 
So this looks like it's the shape of the Tabby, right, this feels like it's actually-- It's shoulder bag. Yep. It has multiple handles. Awesome. Yeah. What's really exciting about this is what Firefly didn't know when we developed this is we actually already, ideated, designed, and commercialized this product. It's in stores right now. So what's exciting about this is when you see the shot of comparing the two. That's pretty good. Yeah. That's pretty amazing. - Now it's not exactly the same. - No. So tell me about the workflows that you guys are looking to, you know, based on where you are now and the process you had in the pilot, how do you see this working into some of your product workflows? Yeah. I mean, immediately, you can recognize that it's getting really close really fast. So it's not driving our design, but it's our copilot and getting us there so much faster, giving us room for more creativity. So maybe had we started with the Adobe Firefly custom model, we could have gotten here and then refined it to the bag you see commercialized today. So you could get here, it sounds like you're talking about, sort of, ideation type workflow. Exactly. And then this one, I think this is interesting because you took, you trained up on a different product and then asked it to adapt. So you didn't actually train Firefly to make the one on the left, but you were able to, sort of, use it to ideate your way towards something that, you know, secretly you guys had already created in physical form. Yep. So when we first started, we cast the net pretty broad with those 12 to 15 assets, and they were really hitting all of our codes. But we quickly realized that if we focused in on a style, and, of course, we used our iconic Tabby, the results were amazing. And that's what we're sharing today, how fast we got to a really, amazing product or a faster starting point. So it sounds like you're saying one of the places where you see taking this is to take the models you're training and put those into ideation workflows with your designers. Yes. How do your designers feel about using AI? What's that process going to look like as you get them up to speed? Yeah. So we're working on socializing and rolling it out. Right now we have a bunch of use cases, specific design groups that are really excited about it. We are in early days of this. We just, you know, my team just started using, the user interface of Firefly custom models last week. So we're not quite ready to roll it out to the teams, but we're super excited to do it. And we just, we can't wait to partner with them to prove out our use cases that we've already recognized, but also discover new use cases, you know, based on their needs as well. So it's really exciting and promising and-- What kind of ideation have you done so far just with the models you've-- Yeah. So shout out to Brandon Keeney, my senior manager of our team. He was hand to keyboard last week, and he created these variations of, this is the Tabby quilted. And the upper left-hand corner, that's chocolate. That's a chocolate handbag. But then he did these really commercializable, marketable, fun images on his own. So this will be part of what we'll share with the design and creative teams, to tell them, like, all you know what we've discovered in the alpha so far. So just to be clear, these are not actual physical products you've made. No. This is-- So don't look for these in the store. You can't go buy this in store. Chocolate Tabby sounds fantastic. - Yeah. - They're not out yet. 
And these are not 3D models. This is the custom model you trained up that then Brandon sat and just kind of ideated. What else could we do with this that would be fun? - Is that-- - Exactly. So this is, we had several weeks working with the Adobe team, and that's where we realized, "Oh, if we really focus on a bag and a style, we can really move the needle fast." And this is how, I mean, we did the shearling, and a week later, Brandon did this. It's incredible. This is a oh and ah moment. I mean, we felt that way. We're like-- So, JJ, really looking forward to the chocolate line, the entire Coach chocolate line, and thank you so much for being part of the pilot. It's been a lot of fun working with your team. We got a lot more coming, so looking forward to more so-- Yeah. Yep. Well, thank you for joining us on stage today. Thank you so much. It was so much fun. More to come. [Music] All right. Well, thank you guys for staying with us. We're in the home stretch now. You've seen some of the ways that customers are leveraging Firefly and the way AI Assistant is showing up already in the product. For our final section, we're going to take a look forward and give a little glimpse into some of the features that'll be coming to AI Assistant and to Experience Cloud in the near future. So for that, I'd like to please welcome Jennifer Biester. [Jennifer Biester] Thank you, Ely. And I'd like to thank IBM for allowing us to use their brand to bring today's vision demo to life. All of the data shown in today's presentation is fictitious to protect customer privacy. Taking a campaign from ideation through building, delivery, and measurement has historically been a monumental task. But now with GenAI woven into the Adobe Experience Cloud, marketers are empowered to streamline and optimize the entire process. It's no longer a question of if organizations will utilize GenAI, but when and how. IBM is launching their new Trust What You Create Campaign here in Las Vegas, highlighting some fishy looking fish being replaced by a hero fish we can trust. Using GenAI at every phase of the process, let's see how cross-functional teams can leverage experience Cloud technology to move through easy-to-use interfaces to create impactful customer experience at scale. Let's dive in. Here in Adobe Workfront, I can use GenAI to help expedite the campaign creation process. I'm curious if there's any insights we learned from last year's campaign I can use to influence this year's. So I ask if we have any key learnings. Almost instantly, I have all the pertinent information in front of me. I can see which audiences were most engaged and what content and journeys help to drive that engagement. Using GenAI to do the legwork for me allows me to focus on strategy and innovation, and now I have all of the information I need to create a data-driven campaign. Once I'm ready with a campaign brief, I can easily drag and drop it into AI Assistant. And this isn't just a means of ingestion. As you can see, AI Assistant identifies and aligns key fields of data in the campaign brief, mapping them into Adobe Workfront, and then creating contextually appropriate suggestions and even outlining next steps. I can import all my matches and then click in to see all of my campaign information. Here, I can see some GenAI suggested assets, target personas, which are tied to audiences in Real-Time Customer Data Platform, and even campaign requirements. 
There has been a lot of interest in IBM's new offering from IT leaders and the C-Suite, so that is a perfect demographic for this new campaign. Now that I have all of my campaign information, GenAI can be used to generate the next tasks that my team needs to accomplish in order to execute this process. I can see next steps include things like creation of concept art, channel-specific messaging, and creation of customer journeys. Leadership, of course, is going to want to see a specific ROI on this campaign. So to get started, I prompt AI Assistant to help create an optimized budget mark, excuse me, optimized marketing spend, and it generates results in Adobe Mix Modeler. I can easily click in, edit my plan if necessary, and see the influential business factors driving the prediction of what budget I should spend per channel. Now that I have a plan in full focus, it's time to actually start executing the vision. Earlier today, you were introduced to the exciting new innovations in Adobe GenStudio, and I am thrilled to give you a deeper look. Here I have a holistic view of my content supply chain from end-to-end, and I can use GenAI to accelerate my campaign. I have all of my campaign information from Workfront, and I can see that my design team has created some assets for me in Photoshop using Adobe Firefly, and then have then saved them into the Content hub functionality here in GenStudio. For this campaign, we're creating an entire ecosystem of GenAI created fish to be highlighted across channels, including digital signage here in Las Vegas. I can see my campaign matches my branding guidelines perfectly. And as a channel marketer, I actually want to create a lot of variations of this content to be utilized for a lot of different aspects of my campaign. GenStudio empowers me to do just that. Here I can see the context of my campaign, like brands, my chosen personas, products, and, of course, my channels, and I'm actually going to generate some content specifically for my target personas. To do that, I select some brand-approved assets from the Content hub functionality in Adobe GenStudio. I can filter by Campaign, and automatically I see assets that have been recommended by the AI for me that specifically fit my target personas. I love that these are proactively surfaced for me, so I'm going to select these two images and a video. I also want to keep this campaign fresh and generate some new imagery. To do that, I'll utilize Adobe Firefly and I'll add a prompt to generate a new version of our hero fish to the custom models that I've created. I've trained this custom model to generate our hero fish in different angles and series, and there is nothing fishy about these results. I really love this guy in the top left, so I will click and Add him to my media. I simply fill out the rest of the form like adding in some copy prompt, and when I create variations, I'm not just creating different versions of my content for multiple channels but multiple personas as well. All of my images are specific to my organization, and any copy that you see is specific to my IT leaders and C-Suite, thanks to brand service. This is true personalization at scale in a matter of minutes, and these assets are now ready to be sent off for approval.

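Stepping back to the marketing-spend step earlier: the underlying idea of channel budget optimization is to allocate a fixed budget where the marginal return is highest, with each channel's returns flattening as spend grows. The toy greedy allocator below illustrates that concept; the channels, response curves, and numbers are invented, and Adobe Mix Modeler's actual models are far more sophisticated.

```python
# Toy sketch: greedy budget allocation under diminishing returns (illustrative only).
import math

# Saturation parameter per channel: higher = returns flatten out sooner (assumed values).
channels = {"paid_search": 0.8, "social": 1.2, "email": 2.0, "events": 0.5}

def marginal_return(spend_k: float, saturation: float) -> float:
    """Diminishing marginal return of the next increment at the current spend level."""
    return math.exp(-saturation * spend_k / 100.0)

def allocate(budget_k: int, step_k: int = 10) -> dict[str, int]:
    """Give each $10k increment to the channel with the best marginal return right now."""
    plan = {c: 0 for c in channels}
    for _ in range(budget_k // step_k):
        best = max(channels, key=lambda c: marginal_return(plan[c], channels[c]))
        plan[best] += step_k
    return plan

print(allocate(budget_k=500))  # spend plan per channel, in $k
```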
Now that I have all of my assets, I want to create some journeys that allow me to address people with a high propensity to convert. So to do that, I'll prompt AI Assistant to create a journey for me that addresses my target personas. AI Assistant knows exactly what I need, and this is where Adobe Journey Optimizer comes into play. This is so powerful. AI creates the journey for me, along with the decisioning behind which journey each person should be entered into based on their unique profile. Personalizing your outreach has never been this easy. Once my journey is created, I can click in to edit the content. I see AI Assistant has already helped out by providing the persona-specific content we created in Adobe GenStudio, right here in Adobe Journey Optimizer. I think these images are perfect, so I accept them, and I can turn my attention to testing and optimization. I can use content generation with variant testing to ensure that my copy is always going to be relevant to my tone and target personas, all without having to write that copy from scratch. This helpful boost from generative AI enables me to reach more audiences with hyper-personalized content to help increase engagement and conversion.

Now that my content's ready, I actually want to go through and simulate my journey. So of course, I head into AI Assistant and ask it to simulate my journey for me. I can add in additional information. Here we want to focus on click-through rate, and automatically I see all of my journey variations for my audiences right in front of me. This is almost as good as having a crystal ball.

Next on my to-do list is a campaign landing page. Here in Adobe Experience Manager, I can quickly spin up a site utilizing a previously created template in document-based authoring. I have easy access to all of the assets we saw in Adobe GenStudio with the asset selector, and I can personalize the experience further by generating variations of my content. I simply fill out my inputs here in a customizable form for things like my content insights, my intent, and, of course, my target audience, and voila. All of this content is created for me in just a few clicks. Adobe knows that marketers sit at the helm of content creation, so we want to empower you with GenAI tools that assist you at every step of the way while you maintain control. All that's left for me to do is schedule my site to go live at the appropriate time, and I am done. Well, almost.

After my website goes live, as well as my campaign, I'm going to want to keep my finger on the pulse of its performance. Adobe GenStudio gives me a window into insights for all of my marketing experiences and assets. I'm curious, so I want to ask AI Assistant what my best-performing experience is in this campaign. And once I ask, data starts flowing in. As a marketer, having data visualizations at my fingertips for things like asset insights and even channel performance is game changing. I can make data-driven decisions with confidence without leaving the context of my preferred application. What used to take weeks or even months can now be accomplished in hours and days, thanks to GenAI being placed in the hands of marketers. Adobe has the ability to help you achieve new heights of creativity, scale, and efficiency. Adobe is headed into the future, and we want you to come with us. Thank you.
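To make the journey-decisioning and simulation ideas above a little more concrete, here is a small hypothetical sketch: route each profile into a journey branch by its propensity-to-convert score, then estimate click-through rate for a few journey variants with a simple Monte Carlo simulation. The profiles, thresholds, branch names, and rates are all made up for illustration; this is not how Adobe Journey Optimizer or AI Assistant is implemented.

```python
# Hypothetical sketch: propensity-based journey routing plus variant simulation.
import random

random.seed(7)  # reproducible toy results

def route(profile: dict) -> str:
    """Pick a journey branch from a per-profile propensity score (assumed thresholds)."""
    p = profile["propensity_to_convert"]
    if p >= 0.7:
        return "fast_track_offer"   # high intent: short journey, direct offer
    if p >= 0.3:
        return "nurture_sequence"   # medium intent: educational touches first
    return "awareness_drip"         # low intent: lightweight awareness content

def simulate_ctr(base_ctr: float, audience_size: int) -> float:
    """Monte Carlo estimate of click-through rate for one journey variant."""
    clicks = sum(random.random() < base_ctr for _ in range(audience_size))
    return clicks / audience_size

profiles = [{"id": 1, "propensity_to_convert": 0.82},
            {"id": 2, "propensity_to_convert": 0.41},
            {"id": 3, "propensity_to_convert": 0.12}]
print({p["id"]: route(p) for p in profiles})

# Rank three journey variants by simulated CTR (base rates are assumed).
variants = {"variant_a": 0.034, "variant_b": 0.041, "variant_c": 0.029}
estimates = {v: simulate_ctr(ctr, audience_size=20_000) for v, ctr in variants.items()}
print(max(estimates, key=estimates.get), estimates)
```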

All right. Thank you very much, Jennifer, and thank you to all of you for sticking with us to the end there. Hopefully, you got a sense of how this stuff works under the covers and how it's going to be able to power your digital experiences in 2024. Enjoy the rest of Summit. Thank you guys very much.

[Music]

Strategy keynote

Embrace Generative AI to Transform Experiences - SK1

ABOUT THE SESSION

Just like that, the world has changed. Join us to learn about the transformative potential of generative AI for scaling personalized customer experiences. Make generative AI work for you to unlock creativity and increase customer engagement for your entire organization. 84% of AI decision-makers said their executives are ready to adopt generative AI. Is your organization ready? Ely Greenfield, CTO of Adobe Digital Media, will help you separate hype from reality and show you how to make the most of generative AI while avoiding the pitfalls.

Track: Developers, Content Management, Content Supply Chain, Generative AI

Presentation Style: Thought leadership

Audience Type: Advertiser, Digital marketer, IT executive, Marketing executive, Audience strategist, Data scientist, Marketing operations, Business decision maker, Data practitioner, Marketing technologist

Technical Level: General audience

Industry Focus: Consumer goods, High tech, IT professional services, Retail

