[Music] [Emily McReynolds] Hi, everyone. [Bridget Esposito] Whoa. Wow. Wow. Well, maybe we needed that for this time. I don't know. Did we all need to wake up at 4pm? You're welcome. You're welcome.

[Emily McReynolds] I used to be at a university, and one of my claims to fame was lecturing in an 800-person hall with no mic. So I always have to adjust so that you all can hear me. Hi. I'm Emily McReynolds. I'm the Head of Global AI Strategy for Adobe, specifically focused on enterprise. And what that means is I talk about how Adobe builds AI. And most recently, with my colleague, Sarah Alissa, who's sitting up here in the front row, we published a paper on how organizations can adopt AI. So I'm really excited to be with you all here today. Thank you for making it out. Last session of the second day. And I'm also really excited to have Bridget here with me. One of the things we'd love to do at Summit is make sure you don't just hear from us at Adobe, but hear from your peers and your colleagues on how they're doing it. So, Bridget, would you introduce yourself for us?

[Bridget Esposito] Yeah. Thanks. So I'm Bridget Esposito. I'm Head of Brand Creative at Prudential. And what that means is that I have a team that handles the higher funnel that comes into Prudential. So we do everything from sponsorships and events; we just recently sponsored the Oscars, if you got to see any of that content and that work, and the Rose Bowl. We do a lot of live events. I love that the Coke CEO is talking about how live events are a thing. That's amazing. Everything from that to social media in those spaces. And so what we've been really trying to do is surprise and delight with the brand. And so we've been spending a lot of time on how AI works into that and how to make sure we're doing it responsibly.

[Emily McReynolds] Really important. Also a good tag for the title of our talk today. So we want to take you through a few things. There's a paper; nobody wants to be read a paper. So we're going to talk about some research we did with your peers out in the market. But first, we're going to start with some considerations for adopting generative AI. Not because you don't already know these, but it's sometimes helpful to hear the challenges other customers are having and how they're addressing them, and also think about what answers you might need. And then we're going to go through a research survey we did that had some really interesting results, even some things that surprised me, which I always love when doing research. We have a four-part framework. There's a great paper. We have a QR code, but I think it's helpful to see it in a deck first, so you know what you're investing in when you sit down to read 12 pages. And then we'll talk about practical guidance. There are so many papers out there that are like, these are the three challenges, these are the five ways to do it, but they don't have the super practical perspective: What questions should I be asking my vendors? What should be in my employee guidance to make sure people are going to use the technology responsibly in my company? So that's our plan for the next 50-ish minutes. We'll try and leave some time for questions. And with that, we'll get started.
So when I joined Adobe a year ago, there was a slide that looked like this, and it had some of these questions on it. And as I went through my first year, hearing from customers what they were concerned about, I think the thing that surprises people the most is that it's not just regulated industries like financial services and insurance, but also retail and media. A lot of people are facing the same considerations when they go to try and implement generative AI. How many of you have generative AI in some form in your organization? That's amazing. I love that. And often when we talk about responsibility, we're getting the people who've been successful. How many of you were able to implement generative AI in your organization in under one year? I love that, right? Because it's not easy. So these questions and this work are designed to help you even if you've already rolled it out.
But what we see a lot of is: what's going to happen with our brand? How are we going to have control? You can train something specifically, but there can still be concerns.
What are we going to do about data privacy? How many of you heard, "Oh, my goodness, don't enter customer information into prompts"? Probably everybody, right? Yeah. It's not just our FSI friends.
And who wants their data and their prompts to be used for training in other people's machine learning? Yeah. No one. How is our data going to be used? One of the reasons I love working at Adobe is because that's a really easy answer. We don't train on customer data.
And sometimes, I often will say, having come from a technical background, your data might not be the best to train on anyway. So there are good customer reasons for that. We don't want to do that with our customers. It's a really important point, I think. And it's one of our vendor questions. Lastly, there are some unique things when we're talking about generative AI. We've all seen the headlines of things going wrong, whether the outputs are racist or biased. But when you're thinking about your brand, it's really important that that not happen. Nobody wants to be the first one on the front page of some newspaper when they aren't paying for the advertising.
So we did a survey. Because some people have asked me how long this work took: with my colleague, the wonderful Sarah Alissa, who is sitting up in the front row, it took about six months to do the research and the paper, but I would say it probably took my 10 years of experience in the space to really have a framework that I think is helpful.
And we wanted to make sure we validated that, that we didn't just go, "Hey, I think this is a good framework and some questions to ask." So we looked at where people stand in their journey. We wanted to understand where organizations fit in that journey. I've heard from customers that did things totally differently, and I've heard from customers that absolutely match the journey we're going to talk about. So we talked to over 200 people who are experts in their space. But we also wanted to go beyond North America. I think that's a very common thing when we're based in the US, to not do that, even though we work with so many multinationals; there are so many multinational companies represented here. So we made sure to have respondents from North America, Asia Pacific, and Europe, each with their particular considerations. Also, I work across clouds and across industries, so I wanted to know what it looks like from different vantage points. So you can see here everything from financial services, healthcare, industrial, retail, logistics, really getting a wide swath of people. And so we did a survey, and we also did qualitative interviews talking to experts. This next slide is animated. I love that. The point of this slide is actually probably not for this room, with so many hands raised, but it's not too late to get started. I'm talking to people who maybe have a lot of experience in predictive AI and less experience with the generative AI space. I'd love to see hands if you have this experience in your company: one part of your company starts piloting and testing, but not everybody gets to use it, right? Yeah. So I've talked to so many companies where we're still figuring out what it looks like in different divisions. In certain spaces, it's like, oh, well, we build AI. We don't know how to buy AI.
We build it and so what we really want to do is customize it from somewhere else. And so for the last few months I've been talking about the choice people make in terms of build, buy, and customize. I think that's an important consideration. Most often, it's all three. There's rarely one solution for someone. So this is the framework.
We looked at where you get started, how you pilot, what adoption looks like, and what monitoring looks like. We've got these learning systems. We have headlines. We understand these are things we have to watch and pay attention to. So in the first phase, the assess phase, we outline what it looks like to evaluate your organizational readiness. How many of you have an AI committee, council, something like that? Hands if they get stuff done in, let's say, under six months? No? Nobody? Yeah. So it's really important to think about: what are your company values? What are your business goals? Who needs to be together? At Adobe, what we've done is think about generative AI to reimagine ways of working, right? Transformational technology. There's a great video that I don't play, because videos always break when you're on stage ready to push play, but it has all the hyperbolic statements we've seen: revolutionary technology, generational change, nothing like this, work is never going to be the same. And it all blends into the word hype, because all of those words, what do they actually mean when you sit down with your teams inside your company and try to do this stuff? So we'd like to share the example of how Adobe did it. We have an Adobe AI committee, but they're focused on which products we buy and how we roll them out successfully. So I'll talk a little bit more about that as we go through, but I want to take advantage of the awesome person who's sitting next to me. Bridget, can you talk about how your team has approached the implementation of generative AI? Yeah. So we knew we were ready for it because we had a massive change in our organization, and we had to figure out redoing our processes. How can we do more with the same amount of people? And so we knew we were ready for it in that way. And so the first thing that we did was some foundational work. We needed to make sure that we established our brand ethical AI guidelines. We needed to make sure that we understood what we were comfortable with from a brand perspective, what was okay, what was not okay. We also have a larger AI organizational group, but we created a brand version of that, right, which was a brand strategist, a creative, someone from our MarTech team, and our legal partners. And so when we did that, we sat down and had a real conversation about what we are comfortable doing and not doing. One of the big things we talked about, which Adobe gets a little sensitive about at times, but we have great reasoning for, is that we decided as a financial brand that we do not want to create people from scratch with AI, right? Which is a bold statement to make, but for us, it's about brand reputation, and there's a level of authenticity that our clients are putting in our hands with their life's work and their investments in those spaces. And so for us, we were like, this is where we want to keep that line of authenticity, right? So it was really a brand ethical discussion that we had in that space. Now we use it in so many other ways, but that was just an example of some of the guidelines that we put out. And then we made sure that everybody understood these, and that they became part of our regular brand guidelines. And I love that, because the first step here is define and communicate the company's standards, right? And so having a conversation about what makes sense for your business, for your business's values, for your business's financial goals, and what's going to work.
In a trusted situation where you're getting all of your customers' money, they don't want to feel like they've been fooled, right? So these are important choices that your company needs to have a conversation about and figure out. And even I imagine-- I'll ask later, but I imagine a lot of you are still having these conversations about what the right fit is, where it can be used. It's an evolving conversation. We've gotten away from the six-finger problem, but that took hundreds of thousands of photos of hands. Now I learned today that there's also a teeth problem. Absolutely is.
But those are important things to know and be prepared for. I spent a long, long time thinking about how people use technology and how they understand it. And knowing what it can do is as important as knowing what it can't do. Yeah. And being able to use the system in the way it was designed, in a way that works for your organization, is a critical part of that. So I love that example. Maybe some other people gave you some flak, but I think it's a really important point about figuring out where your company wants to go.
You also highlighted something in terms of working with your legal teams. One of the things I've seen over the last couple of years is a lot of frustration at parts of the company that might be called blockers, or people who slow down your progression, that kind of thing. And we know that's not fun for them either, right? So being able to look at your governance standards, making friends with your privacy and legal teams, and finding out what questions they're asking and why can help you work together better over the long term. I was with a customer today who, rather than an AI committee, was like, "Oh, we don't do that committee council thing." But what they had that I thought was so effective was an enablement group; we're used to hearing that term in sales. They had specific lawyers, privacy and governance people, security people who worked almost like a pod. So when it comes time to review a new technology, they've seen it before. It's not the first time they're seeing this contract. It's not a product council; a product council can move into this role, but it's not a product council who's thinking about these things for the first time and having to do a lot of research and being risk averse when maybe it's not necessary. So thinking about what kinds of considerations people use: how many of you looked at training data when you were picking a tool? Okay, how many of you asked what the system was trained on, right? Yeah. Web scraping, there's a reason we don't do it.
Figuring out AI use disclosures. It's not just the vendors you're buying AI from; everybody wants to tell you they're with it, they're advanced, they're on the cutting edge. We put AI in everything. Thinking that that will sell you on their modernity. And instead, you're like, "Oh, oh, I don't know if I want AI on that." So knowing when AI is being used by your vendors can be just as important with your non-AI vendors. Of course, bias and harm mitigation, making sure that things are tested appropriately. There are whole independent companies now that will do that, but it's really important to understand what your vendor has done in terms of their testing. And not just testing before they launch the product, but ongoing testing as you continue to use it, right? We don't set it and forget it when it comes to any technology at this point, but you really can't do that with something that learns itself.
So we've already started talking about piloting, the second phase here.
One of the critical points is identifying priority use cases. This doesn't mean everybody puts their hand up and goes, "I want to use AI this way." Right? This is: what are our company's goals, what are the choices we're going to make. I talked to a customer yesterday. They had 2,000 use cases. At that point, there's no prioritization. I don't know what you do with 2,000 use cases. What they told me was that they had stopped collecting use cases because they had too many to deal with, and they had reevaluated how they were doing their AI strategy.
But that piloting, based on your business criteria, your goals, your metrics, I think is really important. And so we have someone who has piloted AI. Yeah. So can you tell us about what you guys did at Prudential? Yeah. And I would say that what you're saying is right. Once someone hears you've got AI, they're all saying, "Oh, I've got a use case. I've got a use case." They're not always great, right? So you definitely have to go through and figure out what is the lower hanging fruit that you can prove out. What is the fast thing that you can take and actually utilize right away that's going to prove out in the right space? We had several, but we looked at our organic creative, right? That was one space where there was not any brand consistency whatsoever. And we're like, "Great. That is a low risk use case that I can get legal on board with, that I can prove out in that space." That was one of the first areas that we approached. And then from there, because it was a quick turnaround, we saw success. Legal was comfortable, because we were able to pull down anything very quickly that we felt was not right.
It turned out great. And then we went through and prioritized the other business units and other use cases from there. Yeah, you talked about quick wins. Yes. Right? And having those demonstrable examples. Hey, guys. It didn't go wrong. Look. It worked.
There's nothing to worry about here. We can give you the background. We can show you the prompt. We can show you how we set it up. It's not as scary as you think. It's not as scary as you think. One of the things I was most excited about last year, and this shows my nerdy, nerdy technical background: when AI Assistant for AEP was launched, as much as I know about data, I had never spent time doing data segmentation and figuring that out. And now all of a sudden, I could do a natural-language query, like, "Hey, I want to target my age group," which I'm not going to tell you.
Sure. Yeah. No.
So I want to target this age group. What is a good fit for them? And it can set everything up for me in a way that I feel really empowered. And my favorite data scientist gets to go do something more interesting than answering my questions, right? This is about enabling people with tools, and being able to do the work you want to do. How many times have we heard in the last two years, "You won't have to resize an image 17 times anymore"? And that's great. But what is the exciting thing you get to do instead? I want to hear more about that.
So looking at pilots, I don't think any of these aspects is surprising. Really: was it accurate? And here's a thing that we'll talk more about momentarily, in the adoption phase. But thinking about how you help people understand what they need to put in to get accurate outputs is as important as: okay, I put in this specific prompt, does the image look like what I want? Okay, I requested that data segmentation and that marketing campaign, is it accurate? So thinking about adoption rates, right? We're going to have some first movers, people who are excited. There was a report published last year that basically said, "You better figure out how you're going to use these tools because your employees already are." So it really is how we do it at this point, how we roll it out. But these are the things we're checking to make sure it's working as we're piloting, to make sure we have the right tool that we want to scale.
When we talk about adoption, the number one thing, and this is my project for this year among others, is figuring out how you train the organization. What does upskilling and training look like? Because of the number of times I've had someone say, "Hey, we bought this product--" I just almost named it, which is what I got caught on. "We bought this very popular early product that comes with our office suite, and nobody's using it.
Nobody's using it." And my first question usually is: how did you make the announcement? What training did you do? Did you look at people's work processes and think about where it fit? Really simple. Did you have encouragement? So when their first three or five prompts didn't get them what they wanted, they knew what they needed to do to refine it and figure out what was next. Those are some of the really key parts. How many of you have thought about change management when it comes to generative AI? Right? If you're in charge of a team, telling them you need a 20% reduction in time spent by using generative AI. Great. I have a goal. How do I do that? And that's where this training piece comes in. And so now an expected question: how are you preparing? How is your team prepared? Because I think you have some really interesting examples of how you did that training and what got people comfortable. Yeah. So first up, before we even rolled out, we wanted to make sure that we knew what audience we were training a model for. We wanted to be really particular, because we knew we were going to deploy it and make a long-term plan: I'm going to give this group this tool, and then I created a training plan for that audience. So a great example of that is with my creatives. We used Firefly to create a custom model because we had a big brand refresh, and we really needed to retake all the older illustrations and make them the new version, right? And the marketing team, the creative teams, they're like, "Oh, it's going to be a lot of work." But we used the custom model, we trained it the right way, so they were able to place it in and output on-brand content. Now we needed to train them in the right way, and we had to get them comfortable doing it. And I think, going back, before that we had to have the discussion around: your role is evolving, you are now a big ideas person and a curator, and not a production person. And so now the idea is we're curating the work that's going through here. So we spent a lot of time working with that particular audience on how they're going to use the tool and roll it out, and it became part of their everyday process. And then when we looked at other spaces where it was a larger marketing group using a tool, we knew especially that we needed to train them in the right way. So especially with the copy aspect of it, we gave them some storytelling training and how to identify what good copy looks like, right? And that became really important because not everybody is trained to see what good creative is and what good creative isn't. So important, and I would say, have your creative team spend time with people doing that. One of the things I love that you described was setting up templates for people to use. Yes. Yes. Yes. Thank you. So we also created templates that were easy for people to understand and use. We created templates for our social media, that being one space. We were able to use Express and lock some things and not others. And we were able to walk through: this is what good creative looks like, this is what it doesn't. And we just guided them. This is how easy it is. We broke down the barriers for them so they didn't feel so nervous to use something. And it's going to take a little bit of time. And we were there to answer all the questions when they felt like they were going wrong, right? We were there to check in; in Express, we were tagged, and we opened it up. And like, "Oh, I see what we did here.
This is a quick little fix." So it's a lot of, like you said, the training and reiteration. If they're doing something wrong, you've got to have a plan to help them out in that space, get them more comfortable. One of the things I love that you described when we were talking about this was, I'm going to call it your patience level, because when we talked yesterday, you had just gotten a request to hire an illustrator. Yes. So I just mentioned the custom model that we spent all that time building. And I get a request from a fellow creative in our marketing space, saying, "Found a great illustrator that's going to make me a lot of illustrations. I've set up a meeting for us. I want you to meet them. And do you have any budget?" I'm like, "We just went through this." And we were giggling about it, but you have to keep reiterating these things, and you have to keep reminding people that it's there, because when you show a tool in that space, there are some that pick it up and use it right away and there are others that don't. And when the time comes, they've already forgotten about that space. So there's a lot; even on our end, we think we're doing the right thing, but then you need to come back and remind them. Don't forget this tool's here. Don't forget you could use it that way.
So yeah. Well, I was on a panel with one of your colleagues this morning, and we were talking about how you're doing the communication. One of the things that I found really critical for adoption is peer-to-peer learning. So at Adobe, we have an AI ambassadors program, which is what it sounds like. But I think the key feature that we really hit on was having the people who are excited about it get training on how to talk to their colleagues about it, because I've been excited about many technologies over the years. And one of the skills I think I've developed is helping people see where my excitement is, because if I walked up here and told you the coolest technology is transformers... How many of you have heard of transformers? Like the movie? Right? Bumblebee. Who doesn't love Bumblebee? I do. I just want to make sure we're on the same track.
But so the thing that made generative AI possible is a paper published by Google researchers in 2017. It's about transformers. We don't need to know what transformers are. I still prefer Bumblebee. But the point is helping bring people along. If I got up here and gave the technical talk that I do at one of the machine learning conferences, I wouldn't have helped any of you figure out how to use these tools better. And that's the point. So when you get someone who's so excited about the technology, they may not be enabled or understand how to help their colleagues along. So when we talk about training, when I talk about training, when our CIO talks about training, it's not just a one-hour session or an in-person workshop, but it's also training the trainer for those peer-to-peer mentors. So I think that's one of the reasons Adobe has been successful. Another thing we've done in training is talk about personas. How many of you have one broad-based AI, generative AI training? Okay. How many of you have role-based training? Okay. How many of you have no training at all? - You don't have to raise your hand. - Okay. It's okay. - We didn't, at one point, either. - Yeah. Well, it's really interesting because I was talking to someone from a very well-known tech company yesterday, and he was saying, "Yeah, we rolled out this technology." And I said, "Oh, cool. What kind of training are you doing? How's it going?" And he looked at me and went, "We aren't doing any training." And I said, "How are your adoption numbers?" And we had a really good conversation.
I was at a large company that had 200,000 people and a 6-person learning and development team.
That's pretty standard. If you get smaller than that, maybe you have a two-person learning and development team. Often you have sales enablement. How many of you have sales enablement teams? Because you've got to teach your people how to sell your products, right? But think about how you sell the use of these tools. So I've done a ton of role-based training, because one of the very first trainings I experienced was a 20-minute AI training back in 2018. And tens of thousands of engineers asked the team that built that training, "Why did you make me watch this?" There were executives going, "Why did I spend millions of dollars of people's time watching this video?" They didn't know what to do with it afterwards. The engineers walked away from that training with, "How does this fit in my day-to-day? Why do I need to know this stuff?" And so we did role-based training. It's been a point I've made at every company I've worked at for the last seven years. You have to think about how people are going to use the technology. And it's very true that we don't have huge learning and development teams or even large sales enablement teams. So what do you do? Well, now you can not only pull courses from the web or training from the team of the product you're buying, but there are really great video tools out there to create five-minute snippets. Show me how to write a good prompt.
That kind of thing that doesn't take a learning and development team with a huge budget to figure out.
I have a question. I know we're going off script here. I'll ask the questions for us, guys. Don't worry.
But do you ever have the training handled by marketing? So do the marketing teams train the marketing teams too? Like our MarTech teams? Do we ever see that-- I ask because personally, that's where we are. So I'm curious. Because you keep talking about L&D. Yeah. But some of us have MarTech teams that are doing that training. So just curious on your thoughts. So my product marketing team happens to be sitting in the audience. Thank you so much for coming to my talk at the end of your very long day. Trainers.
Yes, they do that. See? So they were on a video for us a week ago talking about what this agent-- Who's excited about agentic AI? Okay. Who can define what agentic AI is? You're with everybody. Don't worry. The best description I've heard is it's text to action, right? I like that one. It's really short and sweet. I talk about it in terms of going from getting information from a system to that system taking an action for you, creating you a PowerPoint, going to a website, doing some sort of other action.
But we needed every Adobe employee to be ready to talk about agentic AI. And so some of the people who did that training are in this audience. - Wow. - Yeah. It's good. It's very good. Now we've heard it seven million times in two days. So you've done a great job. Okay. I was on a panel this morning where your colleague was like, "I would like a dollar for every time I've heard agentic AI in the last 24 hours." And she was like, "And then I could go spend three days at the craps table." Yeah. I thought she said dollar instead of a shot. - That's good. That's good enough. - Yeah. I know somebody else brought up a shot and we decided 8am was a little early for that. Yeah. So thinking about how you deploy this stuff, and really how you're going to roll it out in a way that's successful, because the last thing you want to be doing is answering questions on why 1,300 people who have licenses aren't using it.
And that's going to happen. That's actually a normal part of tech adoption, right? And so I can tell you looking at our teams, what we decided to do was look at a persona based training system.
So if I think I'm going to do a training for the engineering team, well, we have the product managers who need to do PowerPoints and roadmaps. Of course, we have thousands and thousands of coders. We know what to do with them. Here's a coding tool, right? But if you assumed what they all needed was a coding tool, you'd probably be wrong.
But you also don't want to just go, "Okay. Everyone in finance gets this training. Everyone in sales gets this training." There are ways to think about how they are using it in their daily process. So at Adobe, we have four personas that we've looked at, from our builders to our-- I'm going to just call them the speakers, talkers. People like me.
I'm like, "Tell us more. What are the other ones?" You know what's funny is I built that whole deck last week and I didn't put it in here. And now I'm like I really should have. I guess you'll have to get in touch with me after this.
- I'll share it with you. Don't worry. - Thanks. So in our research, what we found were some of the main considerations in AI adoption. I think this is fascinating. This is what I was saying when I was previewing it: this surprised me.
And it could be a fluke of the data, for this particular audience. But efficiency and time savings, we had to add a little tiny dot for it.
And two years ago, this is all we were talking about: productivity, time savings. You're not going to have to do 17 resizings, and you can go spend time developing a better brief or working on bigger-picture strategic items. What's really interesting to me is now we're talking about economic gain. What is your business value? And I was at a lunch this afternoon with executives from financial companies, from media companies, from manufacturing companies, and from retail companies, all at one table. It was great. I think we had a really good time. But a lot of them were talking in a way I've heard for the last couple of months. Productivity is now obvious. That's expected. But we're being asked: what is the value gained? How does this improve our bottom line? One of the things I'm seeing reported out is costs avoided. Hey, we didn't have to hire five engineers because we were able to do this with text-to-whatever.
I heard another person say, "I told my team they needed to be 25% more efficient. I bought them AI." Figure out how you're going to do that. And I was like, "Oh, that was a little rough." I don't know how I'd handle that.
Probably with more training.
I have a theme. It keeps coming back.
So it doesn't end with the buying and the rollout, right? This is a cycle. My colleagues, people who have heard me speak before, know I've always got a circle and a cycle for you.
It's important that these are feedback loops.
You have the technology. I already said my favorite line, "set it and forget it," which comes from a crockpot commercial in the '90s. I don't know if you all remember that. Why are you talking about it? She's like, "You just got to set it and forget it." So we can't do that with this technology, because it learns, and it can go in surprising directions. And even more importantly, you need to be in contact with the vendors who are providing it. You might decide to buy it or build your own. But at Adobe, we're happy to talk about our ongoing AI governance, our ongoing AI assessments, how we keep looking at new features and evaluating what's there, what's coming, thinking about what that monitoring is. So if you're building it yourself, there's tons of guidance out there. You can go get the NIST AI Risk Management Framework with a playbook. You can go find Singapore's AI Verify framework. You can try and figure out what level of risk your AI actually is by reading an 88-page EU AI Act. I wish you good luck.
But we all have risk management processes. It's highly unlikely you're a successful business without risk management processes. So the real question is: what do you need to add? What do you need to evaluate? And figuring that out. So if we think about monitoring that performance, what are the metrics you're looking for? Is it return on investment? Is it drift? Who's heard of model and data drift? Okay. So if these systems start learning on their own outputs, they can go in unexpected directions. And there are tools to detect when that is happening. So that can be part of the consideration for your monitoring. If you're thinking-- It doesn't even have to be that technical, right? You've trained a custom model. It matches your brand. It's doing great. And six months later, it starts doing some weird stuff.
And then you need to figure it out. It's a constant process of looking at it. And that doesn't-- That's not a ton of work, but it's knowing that in six months I need to just check this and make sure we're still doing well.
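One concrete way to run that periodic check, sketched here purely as an illustration and not as Adobe's or Prudential's actual tooling, is to compare a baseline sample of some numeric output metric from the pilot period against a recent sample using the population stability index. The function name, the example scores, and the 0.2 threshold below are assumptions for the sketch.

```python
# Illustrative sketch only: flag drift by comparing a baseline sample of a
# numeric output metric (e.g., brand-match scores from the pilot) to a
# recent sample using the population stability index (PSI).
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between two 1-D samples; larger values mean more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)    # bins set by the baseline
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    base_pct = np.clip(base_pct, 1e-6, None)                # avoid log(0)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Hypothetical usage: scores logged during the pilot vs. scores from this month.
rng = np.random.default_rng(0)
pilot_scores = rng.normal(0.80, 0.05, 5_000)
recent_scores = rng.normal(0.72, 0.08, 5_000)
psi = population_stability_index(pilot_scores, recent_scores)
print(f"PSI = {psi:.3f}")  # a common rule of thumb: above ~0.2 means "investigate"
```

The design point is the one made in the talk: the check itself is small; what matters is scheduling it so someone actually looks at the model again in six months.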
That risk assessment really helps when you have documentation.
Okay, how many of you have written a marketing brief? Right? You want to document what the expectations are, what you're looking for.
There was a great conversation in another session about where the biggest time savings were: in reducing the number of handoffs needed to complete a project. Every time you hand it off, you need to document, you need to figure it out. That documentation is really important.
When we're talking about AI, I've been working on what a great form of documentation looks like for over seven years now. In 2018, a whole group of people and I published seven different formats. Well, that's not helpful to you all. Which one is the right one? The answer is there isn't a right one. The really important thing is that you update the documentation you have. That's what matters, right? Is your documentation current? So I'm not asking you specifically about documentation, but you do have some things that you're watching for and that you're checking on. Yeah, 100%. So there are some that are a little bit more obvious and basic, like through our Express tools or our custom models, we still have and enforce creative touchpoints. So they're checking, right, monitoring how someone's using it. And if they're not identifying good creative or things like that, we can step in there. And we do that until we feel really comfortable with the group or the specific person. Another really good example of that is we built this really cool multi-sensory AI tool for an end user to help them see themselves thriving in retirement, right? It's a struggle; we don't have a physical product. So we built this tool with McCann, our agency of record. And basically, what it does is you take a photo of yourself, it ages you until you're about 72, and you answer all these questions as well. You ask these questions and it outputs a story, one that's unexpected and whimsical and different from what you thought. It's a really cool experience. We launched it at the Aspen Ideas Festival. And in two days, some of the feedback we got was "That doesn't look like me," or "I'm not seeing myself." And so we realized that AI tends to lean towards the mean, and so there was some bias happening in that space. So we really had to do a lot of work on prompt regeneration, but the biggest thing we did is we added an app that we developed for moderation, with a human behind it. And so now we were checking to see, yes, this image is good. And we would be able to see if the AI missed something physical, right? If there was a disability, or the skin color is not right, or someone's hair is wrong, we actually were able to put that in the app that we built, and get it more-- Someone's teeth were messed up. Teeth, it's a big one. We were able to alter that and then send it. Now we've been working on this for over a year, and we continue to work on the prompt development, right? We continue to add things in, look at something that went wrong, and try to teach the model. Well, it's also been learning as we go along. So our image regeneration rate went from 85% down to 8%. And the output went from eight minutes to an experience to three minutes to an experience. So it is learning, and it's going, and that feels really good. But that was part of the moderation that I'm really happy we added into that experience.
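The "app with a human behind it" that Bridget describes is, at its core, a review queue that holds every generated image until a moderator approves it or flags it for regeneration. The sketch below is a minimal, hypothetical illustration of that pattern; the class and function names are invented here and this is not the actual Prudential/McCann application.

```python
# Minimal sketch of a human-in-the-loop moderation gate, assuming generated
# images wait in a review queue until a person approves or flags them.
from dataclasses import dataclass
from queue import Queue
from typing import Optional

@dataclass
class Generation:
    user_id: str
    image_uri: str
    approved: Optional[bool] = None   # None = still awaiting human review
    notes: str = ""                   # e.g. "skin tone off", "teeth artifacts"

review_queue: Queue = Queue()

def submit_for_review(gen: Generation) -> None:
    """Hold every AI-generated image for a human check before release."""
    review_queue.put(gen)

def moderate(gen: Generation, ok: bool, notes: str = "") -> Generation:
    """A human moderator approves the image or flags it for regeneration."""
    gen.approved = ok
    gen.notes = notes                 # flagged notes can feed prompt refinements
    return gen

# Hypothetical usage:
submit_for_review(Generation(user_id="u123", image_uri="s3://bucket/img.png"))
pending = review_queue.get()
result = moderate(pending, ok=False, notes="teeth artifacts; regenerate")
print(result.approved, result.notes)
```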
It's such a great example. - Yeah. - Right? Of figuring out how you address the issue. Yeah. And then tying some really great numbers to take back to your executives on how this worked and what the improvements were. I appreciate those numbers because we know we're in a constantly changing world. Like now we have agentic AI. But we're also seeing a lot of different perspectives on how we should be doing this. In case you're wondering, the colors don't mean anything. It's just visually easier to not look at one color across the world.
That prevents a question later on, I swear. So the standards have been in development for a while. My usual catchphrase is that generative AI is a 70-year overnight success story, because it was 1950 when Alan Turing wrote a paper about what would qualify as artificially intelligent. Some of the technologies we're using today, like deep learning and neural networks, are over 20 years old.
So the regulation is out there. The standards are out there. This year, we now have an IEEE standard and an ISO standard on how to approach AI implementation and what the right things to do around it are. But I promised practical guidance, right? How many of you have been involved in writing your company's employee use guidelines? Okay, how many of you still need an employee use guideline? Everybody. Okay. Tip: obviously, we put it in this paper, but I highly recommend the Future of Privacy Forum's employee generative AI use checklist. It goes through-- It's a really good evaluation, from lawyers and experts in regulation and policy, of what you might need in there to make sure it's okay. So we're going to have three examples for you.
Data sensitivity: you'd be surprised; making sure you know what level of sensitivity applies is really important, because saying don't put any PII into a prompt is not functionally helpful. I'm unlikely to put someone's name in there anyway, but what do you need to think about? What is sensitive data? For those in my field, we're all still very familiar with a tech company that had banned the use of generative AI. This was back in 2023. The first week they allowed their employees to use it, someone put in trade secret code.
I'm like, "Well, it's banned again." So it's as much about thinking about how you extend your existing employee guidelines.
Speed sometimes is better than perfect. Having something out there so people know what they're supposed to do, reminding them that they should be using their employee credentials for the approved tools.
It's really helpful to think through these things as risk mitigation. So I have had good experiences sitting down with legal and saying, this is what we want to put in the employee use guideline, and showing them that we get the risks they're worried about, right? So what was your experience with employee use guidelines? You talked a little about brand guidelines. - Brand guidelines. - Yeah. And I've got to be honest, employee use guidelines, I don't think, have been fully developed. We just know what we're allowed to do and not do, but they haven't been as in-depth as what you're talking about. I mean, it's important. But one of the things I do love that you did is they're in your brand guidelines. Yes. So people don't have to go somewhere else. - Yes. - Right? You're not like, "Hey, did you read that thing on our internal wiki site?" No, it's in the same place I look for other things. - Yep. - Right? I think that's a really important success point. I know. I'm like, you're using the wrong font, and also read the brand guidelines on AI. We'll just send you to the same spot. It's great, right? I mean, people don't want to read the guidelines in the first place. So having them all in one place is helpful. Yeah. You and I talked a lot about the vendor questions and what's important in asking your vendors. Yeah. Not just for quality but also for knowing what they're doing. So I'm going to flash up some examples of the questions that are in the paper and the answers. I think it's really critical; there are plenty of lists of questions, but what answer are you looking for? How do you know what a good answer looks like? Yeah. Because I have seen a lot of procurement questionnaires. And sometimes I look at them and go, "I don't think you even know what you're looking for in an answer." We can provide all the information, and it's still not enough because they don't necessarily know what they're looking for. So what do you guys look for in your vendors? Well, I think the first thing we ask is: what is it trained on? Have that conversation. We obviously, in financial services, need the closed learning model, right? So that becomes very important. And I think the other thing is the indemnification part of it, right? We have a lot of conversations of: are we protected? And more and more companies are adding that on in that space. But those are really the main important questions that we're asking. And then the last one is: how easy is it to implement? - I like that one. - Yeah. Are you going to be there to help us when we have questions? Yeah. Are you going to just drop it and run away? - Hopefully not. - No. When I came to Adobe from an AI expertise space, I learned about the field of customer success managers. And it was a really interesting concept to me. We don't just drop it and run away. No. You do not. So a little bit about how Adobe does this stuff. I think it's helpful to know.
I'm very proud of my training and previous speaking because when the time came for them to build the elevator pitch for our AI Assistant in Acrobat, I got "Your data remains yours" and this slide. And I put it up in a room where I was like, "This is really helpful." We know customer data is not used to train language models. We need to say it more. It's only looking at the documents that you tell it to. Your people are in control.
They have to already have access to the documents. Pre-answering some of the most common questions. But I put it up as an example of a great slide, and it was really fun to see three colleagues in the background say, "Oh, yeah, we saw your talk, so we made this slide." I was like, "My work here is done." Because I got product marketing management on board in a really great way. We also very publicly shared last fall our approach to generative AI with Firefly and these nine commitments. I know they're very small, but I think it's helpful to see them in perspective. If you go to Adobe's AI Ethics page, you can see all of them with more information.
But things like: we do not and have not trained Adobe Firefly on customer content; a commitment not to scrape the web when building Firefly and its systems. And I think these are things that are very aligned to our guidelines, like we won't generate humans. - Right. - Right? There are things we see as common practices, that we see in headlines, that we see in news articles. And knowing whether or not your vendor is doing them can be really important. - Yeah. - Also, don't assume that they-- Don't assume, ask. Because you'll be surprised. What was the-- Did you get a surprising answer at some point? Yes. We won't mention who it is, but after asking, they gave a roundabout answer, and my MarTech partner said, "So you do?" And they were like, "Yeah." - So that was the end of that. - But yeah. We're not going to name the vendor or the question, are we? We will not. So we want to leave a little bit of time for questions. So, lastly, wrapping this up: the four-phase framework. You're all taking pictures. I love it. Your phones are ready for my next slide.
But we have this four-phase framework, assess, pilot, adopt, and monitor. I was in Europe last month for the French AI Action Summit. If you have multinational colleagues, we have it in French and German.
But this is a link to get you directly to the paper we've been talking about. I think it's really helpful to have these resources. It's also really helpful to hear from our peers. So what have you learned, and what are your top tips for our audience? Yeah, I think, first off, it's not going to happen overnight. It's going to be an exhaustive process. When someone's like, "What's the one word you think of when you think of AI?" I'm like, "Resilience." And then I said, "Patience." It's going to be a process for you. So just hang in there, push through it; it's going to be worth it. And I think bringing your partners along is the most important thing. So making sure that you're working really well with your creatives, and your legal, and your MarTech partners, and all the different spaces, that's what success looks like right out of the gate, right from the beginning. Yeah. I want to thank you all for coming at the end of a long day. I hope you enjoy Sneaks with Ken Jeong. He is hilarious. And I know someone who gets to go backstage, and I'm very jealous. Thanks for having us. This clearly-- - [Man] Thank you. - is the Ethical Group. So proud of you all.
[Music]