[Music] [Guillaume Clement] All right. I think this is it. We're ready to get started. Perfect. Well, first off, thank you for coming by. Welcome to our session here. We're going to get a little bit technical in this session today and take you through some developer tips and tricks, focusing on Scaling Creative Asset Pipelines using Generative AI capabilities. We've been talking about that for a couple of days now. We'll take you through some workflow automation, essentially showing you how some of the sessions you've seen so far are put together, including some awesome features we saw at the Keynote. So we're going to take you through the back-end pipelines that make all of that possible. My name is Guillaume Clement, Gui for short. I'm a Multi-Solutions Architect on the Firefly Enterprise team. And I'm joined by Marcel. [Marcel Boucher] Marcel Boucher. I'm also part of the Firefly solution acceleration team. Been at Adobe 28 years, so that's why my hair is white, just trying to learn all the tech and keep up to speed. We're going to share some of the tips and tricks we've learned over the last 12 to 18 months implementing Firefly Enterprise solutions for our customers, and the insights that we're seeing. So what we want to do is share with you the stuff we've learned building out POCs and demos over that time.
Awesome. All right. So one of the things that we have heard consistently is that creatives don't have time to be creatives. They're spending a lot of time on repetitive, manual tasks just to feed the content marketing machine. Right? We hear a lot about the explosion of content requirements and the need to deliver personalization at scale for enterprises. A lot of that burden gets put on the shoulders of the creative teams, who need to deliver that content and also need to be part of that process. Creatives only spend about 29% of their day on truly creative tasks, and we want them to spend more time doing creative work. If there's one thing that Generative AI cannot replace, it's human creativity. So we want those creatives to be able to focus on that and not on the repetitive work.
And this image, as I said, I've been at Adobe for quite a few years, and we've used this image over and over again. So you may have seen this in presentations from Adobe, and ultimately, it resonates. It's true. Right? You have all of these teams that have been put together over time to create content for organizations, and ultimately, it becomes this spaghetti of different individuals who work together using different systems to collaborate, different repositories. There's no single source of truth. So ultimately, you have this zoo of content that you need to try to wrestle to be able to deliver personalization at scale. So today we're going to focus on everything from mock-ups to delivery. Right? How can we streamline and optimize that process, and ultimately, break down the silos? So that the teams can work together and collaborate using the same assets, not having to recreate things and use the famous file names: final, final-final, draft-final-one, or whatever you want to call your naming conventions. How do we get to a point where there's a single source of truth and creatives are still in control? They define the brand, the layout, the look and feel, but how can we produce content at scale with humans in the loop to make sure that the content that is generated is on-brand? Absolutely. Yeah. I'll add one thing here. One of the biggest challenges that we see when we're working with organizations and transforming these solutions, frankly, is change management. There is a lot of change required to enable that collaboration across teams and to make all of this automation and implementation of these generative capabilities possible. So that's why we want to show you this image here and spend some time talking about where we're going to focus today, to take you through where we have opportunities to enable collaboration and automation to make these processes more efficient.
Awesome. With that, we recognize that every organization's journey is going to be different. There are many different stages of adoption when we're adopting new technology in general. But, of course, AI being such a disruptive technology, we have to understand how to take a phased approach to our journey, our implementation of that technology. So in the early stages, we have a lot of customers that are exploring creative tools that are, frankly, all over the web, and trying new features out. We're using these tools manually, and in a one-to-one silo, we normally see workflows that are fairly manual. We're essentially taking data from one platform to the other, and we're experimenting with these features. The next stage in our journey is accelerating that in an assisted fashion. Here, we're still relying on some of these tools and manual workflows to put them together, but we are able to achieve some great creative workflows by combining these capabilities. As we progress, though, we see that repeatability becomes an issue. We're able to continue using these tools manually, but, of course, we're not able to scale them. And so in the repeatable stage, we start thinking about the optimizations we have with programmatic automation. That's what we're really here to talk about today: these automated pipelines, the workflows we're going to take you through, showing how we use each individual generative and creative feature, putting them together to take our content and our assets from ideation all the way through to activation. Lastly, we're talking about some of the latest and greatest features. We saw Anil on stage yesterday showing us our agentic capabilities. We're now starting to see a transformation toward intelligent workflows, where we're able to use agent or multi-agent workflows to bring these capabilities together in a way that requires a lot less intervention. And so it's important to recognize where we are in those journeys. Not every customer is going to start at the same stage, and frankly, not everybody is going to need to get to the same stage at the same time. So what we're here to talk about today is taking you through, at the very least, the programmatic approach and seeing how these capabilities can be put together. Yeah. And actually, a poll for people in the room: who here is in the exploratory and assisted area, where you're using GenAI or AI capabilities, but in a manual fashion?
Right? So a good chunk of people. And then repeatable and programmatic, actually at that next level where you're trying to do automation and those kinds of things? A little bit less. Right? And then as we get to intelligent, anybody feel like they're past the-- We figured out automation, now we're just going next level? Yeah. Exactly. Right? So that's the experience that we're getting with our customers. Yeah.
Great. So here, we're going to take you through three key stages. We were showing you the great diagram here earlier, but we're going to take you through three key stages in our session today. We're going to talk about creation and the importance of creating content, and also templates, that are optimized for automation. It's funny to think about that. We talk about creativity as being an agile, open environment, but in this case, we are leveraging specific features in the creation stage that will help us automate. And so that's a key point we're going to want to talk about today as we take you through these features. We're then going to talk about automation and enabling collaboration not only across individuals, but between individuals and systems. How do users and creative folks and folks on our team interact with these programmatic capabilities to leverage our generative and creative services? We're going to show you some of those examples today. And then lastly, we're going to talk about activation. How do we take this content, these assets, and push them all the way through our activation platforms, and perhaps even our activation channels, such as social or web content, or all of the omnichannel asset formats that we may want to be publishing? Yeah. And I like the middle image, Gui, that's on the slide. Because ultimately, what we need to recognize is we're bringing creatives and technologists together. Right? Not necessarily kissing, like this slide is almost showing. But ultimately, it's bringing those two groups of people together. Right? So typically, creatives have been working in isolation. They create the images. They do a lot of that stuff, images and videos and whatnot, manually. And then it gets handed off to the technologists or the marketers or the communicators that need to take that to the channels. Really, what we're here to talk about today is how those two roles come together. Right? Because ultimately, creatives are going to need to work hand in hand with technologists to be able to leverage the advantage of automation.
All right. So first phase, we're going to talk about creation: creating the templates and the content that is going to help us start this automation. The first question in creation is, what are we creating? So there is an element here of creativity. We're looking for new ideas. We're looking to experiment with platforms that are going to give us new insights and new ideas on what to create. So we're going to spend a little bit of time talking about remixing and transforming content using tools like Project Concept. I believe you may have seen that in some of the keynotes today, if not yesterday. And we're going to take you through a short creative journey using those tools. We're going to talk about extending and repurposing existing content. We saw in many of the presentations taking imagery from products or other key art assets that our customers and our organizations already have available. How do we remix these assets? How do we generatively modify them? And how do we also start activating these assets by taking them through automation? So with that, we have a quick technical demonstration. We're going to do this a few times during our presentation today, but we're going to go through this Project Concept demonstration. So bear with me as I switch screens here. And as Gui is doing this, I think one important thing to highlight is that what we're going to be walking you through today is the stuff that we do day to day as we build these demonstrations. In fact, for most of the demonstration that you saw in the Keynote on day one, the one Wes and Anne showed on stage, we're going to show you exactly how we built that demo. Not the end-to-end thing, not the entire thing, but for the most part, we're going to show you a lot of the key moments in that video. That demo, that's how we built it. Absolutely. So let's take you through the ideation phase of a creative using a tool like Project Concept. We have some brand guidelines. We have a little bit of sample imagery. And overall, we're looking to start generating new ideas for our outdoor experience campaign. Noticing a typo in the title here that I left earlier. So using our features in Project Concept here, we're drawing inspiration from existing assets, existing imagery. We have some images that we generated. We played with that on the canvas. And the first thing we want to do is draw from the colors that we have here in this image. So we are able to use Project Concept features that let us refer to existing content or extract certain aspects of an image. In this case, we're picking up vibrant colors from the example image that's available.
We're then going to play a little bit with some prompts. So we're typing out a prompt here for an origami-styled image. And we're using Project Concept here to invoke our Firefly capabilities. So our Firefly models are the ones creating the content here within the Project Concept experience. And with that, we're seeing a couple of sample images. So we have the ability to put these images on the canvas and essentially use this mood-boarding experience to start experimenting with these ideas, playing with concepts, borrowing concepts from other styles and imagery, and essentially putting together our ideation content.
So with this image, we're satisfied with at least this version. I think it looks quite good to put on some of our assets. We're actually going to go through and start creating variations of this asset. We'd like something a little bit larger to start using in our templates and in the rest of our content. So in this case, we're going to use another Firefly feature, Generative Expand, within Concept to take the source image that we have and make it a little bit wider, give us a little bit more real estate to work with. This sounds like a simple concept. We just want to make the image a little bit bigger. But it is one of the first steps we're taking in making sure that we can use these assets in our automation. We understand that we're going to want to target multiple different aspect ratios, and not every aspect ratio is going to look at the same portions of the content. So for that reason, we want to make sure we have an asset that we can work with here, some key art that'll lend itself well to our automation task.
So here we have it, the same image, generatively expanded. We see it actually gave us a little bit more space in the image to work with. Once we select the right one, that is what we're going to use in this case for our ideation purposes. So this is just a quick showcase of some capabilities in Project Concept, but we wanted to show you some of the features available here as we start playing with key art imagery and as we start putting this content into our templates.
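As an editorial aside: the same expand operation is also available programmatically through Firefly Services, which is where this session is headed. Here is a minimal sketch in Node (18+, for global fetch); the endpoint and payload shape follow the public v3 Expand Image docs as we understand them, so treat the field names as assumptions to verify, and the source URL is a placeholder.

// Minimal sketch: Generative Expand via Firefly Services (v3 endpoint assumed).
// ACCESS_TOKEN, CLIENT_ID, and the pre-signed source URL are placeholders.
const res = await fetch("https://firefly-api.adobe.io/v3/images/expand", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.ACCESS_TOKEN}`,
    "x-api-key": process.env.CLIENT_ID,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    size: { width: 2688, height: 1536 },                            // wider canvas for multi-ratio reuse
    image: { source: { url: "https://example.com/key-art.png" } },  // pre-signed URL to the source
  }),
});
const { outputs } = await res.json(); // outputs[0].image.url should point at the expanded image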
All right. Perfect. Now let's talk a little bit about templates. So we were talking about automation and the role that templating plays in that, and we're going to take a moment to discuss that right now. Let me scroll over to the correct slide here. Perfect. - Awesome. - Right. So the role of templating. Right? This is where we're really asking the creatives to do something different. Because typically, a creative professional working in InDesign, Illustrator, Photoshop, whatnot, is used to creating their assets with multiple layers that they potentially hide, and there's not necessarily a process or a structure to the template or to the file. Right? So really, what we're asking the creatives to do, to participate in this new journey of automation and Generative AI, is to adopt certain best practices for optimizing and preparing a template for automation. So there are new things that we're going to be relying on our creative professional colleagues to do differently. Now when we think about that, we think about structure. Right? When a creative hands off a template to somebody like myself, who's a developer, I'm not necessarily going to understand their thought process on how they named the layers. They probably didn't even care what the layer names were, as long as the output was what they wanted. Now what we're asking them is to identify those layers, potentially using a naming convention, and we'll show you in the next slide some of the naming conventions that we use regularly, to be able to address what's variable and what's not. Because ultimately, the creative needs to be in control. Right? They want to control what the template's going to look like, what are the things that you can and can't change. And here as an example, you can see, Gui went to the next slide, we have these handlebar brackets around all of the text that's variable, or the layer names that are replaceable. I don't know if you can see it on the screen, but the red arrow points at the background layer name. So those are things where, basically, the designer is saying, you can change these things, but everything else needs to remain intact. And what we're going to be leveraging are the Photoshop APIs to replace and swap out that content and generate those variations. Absolutely. Yeah. And on the last slide, we were talking about the role of templating and selecting the right type of templates. We're using all of these tools like InDesign and Photoshop that used to be desktop-only. There's a lot more to them than that now. But each one of these tools creates the type of content and the type of designs that are suited to specific channels. We can think of all the amazing features we have in Photoshop for manipulating art and manipulating images, and of course, templates as well. And then we have applications like InDesign that are better suited for print channels, with the ability to control things such as the color palettes, the bleeds, the margins, etcetera. So selecting the right platform, the right tool for the right job, is important. We're going to talk about interoperability between these template types a little bit later, but what we're able to do with Firefly Services and all of our generative capabilities is take these templates, once we get them ready for automation, and start using and leveraging these creative assets within a generative and creative process. That is the first step in achieving a little bit more efficiency as we start using these template types. So I just want to point your attention here. Marcel, you mentioned variable elements on the canvas, like text. In this case, we're not hard-coding any text in these templates. We're actually ensuring that the text that gets added to these templates is coming from a variable source, perhaps one of our data feeds or another system where we're actually managing our content. We're now able to merge that with our design layouts. We're also using multi-layout designs.
So this particular template has a certain style and certain template elements that we're adhering to, but it is doing that with multiple different aspect ratios in a single document. So we're able to keep all of these things together and, once again, with automation, create all of these assets in the various aspect ratios that we're targeting from a single template. And then lastly, reusable objects, Smart Objects in Photoshop. Those of you who are Photoshop power users know how to use some of these features. Smart Objects are a way for us to embed imagery and reuse it across all these artboards, again, making automation much simpler.
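To make that handoff concrete, here is roughly what a request against such a template looks like through the Photoshop API's documented text-editing endpoint. This is a minimal sketch, not the exact payload from the demo: the hrefs are placeholder pre-signed URLs, and the layer names follow the handlebar convention from the slide.

// Sketch: replacing handlebar-named text layers in a PSD template via the
// Photoshop API. Smart Object swaps are analogous through the smartObject
// endpoint, using layers[].input.href instead of layers[].text.
const payload = {
  inputs: [{ href: "https://example.com/template.psd?sig=PLACEHOLDER", storage: "external" }],
  options: {
    layers: [
      { name: "{{headline}}", text: { content: "Find your next adventure" } },
      { name: "{{subheadline}}", text: { content: "Gear up for the outdoors" } },
    ],
  },
  outputs: [{ href: "https://example.com/out.psd?sig=PLACEHOLDER", storage: "external", type: "vnd.adobe.photoshop" }],
};

const job = await fetch("https://image.adobe.io/pie/psdService/text", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.ACCESS_TOKEN}`,
    "x-api-key": process.env.CLIENT_ID,
    "Content-Type": "application/json",
  },
  body: JSON.stringify(payload),
}).then((r) => r.json()); // returns a job reference to poll, not the file itself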
So now we'd like to take you through that a little bit and show you some examples of our templates. Also, as Gui is switching over: you probably saw that we were showing a Photoshop template in that case, and we had just said that if you want to do text, InDesign is probably the better template format to use. So the reason why we showed you a Photoshop template in that case is because the headline is always short, and the subject or the byline is always short. Right? Because ultimately, if you're going to do automation, most of the time you're going to do multilingual, and you're going to be doing the different aspect ratios. So again, you need to keep in mind how long that text is going to be. If it needs to reflow and wrap, Photoshop is not the right templating solution. InDesign is a better templating solution for that use case. Right? So that's why it's very important to keep in mind what the outputs are, what the use cases are, and where the potential areas are that you might struggle if you're using the wrong templating technology for the use case you're trying to solve. Absolutely. And I see some heads nodding in the audience, so there's actually been some experimentation with these as well, which is great. Absolutely. So as Marcel was saying, a lot of the lessons learned we're talking about here come from experience we've had delivering these solutions with our customers over the last 18 months. And absolutely, those are things we find out quite quickly as we start using templates like these for localization or regionalization, translated languages, and text of variable length. So it's important to look at the right templates for those use cases. So here what we want to show you is essentially what the creative process may look like today, using an application like Photoshop. We have a template here that's being used in Photoshop to author our key art. So we're actually adding the image here to this template, the one we were just looking at in Project Concept. We're assembling our version, essentially our copy, of this image here. This is perhaps what the work of a creative on your team looks like today. Or if you are a creative, maybe you're doing this already. We're making our changes to, in this case, one of the aspect ratios for our image. We're inserting text into this image and preparing this asset for our campaign.
And you can see that as we're doing that, we're taking our-- I don't want to say we're taking our time, but we are certainly taking some time performing these updates manually. We're now making some adjustments to the key art, making sure that it looks good here in the image. We're adjusting the brightness, and we're seeing essentially what the work of a creative looks like as we create this one specific asset, in English only, in this single language, with only one aspect ratio being targeted.
So now we want to show you what it would look like in a multi-artboard document. We were just showing you that in the screenshot. A multi-artboard document uses some of these variable elements, such as variable text. It assembles all of our aspect ratios in the same document, on the same canvas, and it gives us the ability to use Smart Object capabilities, like we were talking about, to adjust our key art and make our changes, updates, or configurations. Perhaps we're using overlays, perhaps any of the features available in Photoshop, to make sure that the art meets our standards. Then, as soon as we save the Smart Object and return to our aspect ratios here, we see that using just simple features like that, we're able to create all of these different aspect ratios by editing the content a single time. Now we're going to take that a step further and stop creating this content manually. We're going to start using automation to achieve this assembly, but we can quickly see how just using some of these simple features is the first step in preparing this content for automation. And another important note: we're talking a lot about replacing Smart Objects and text, but also keep in mind that with Firefly Services and the APIs, pretty much everything that you can do in Photoshop on the desktop, you can do using the APIs. Not everything; there are certain capabilities that rely on the cloud and certain specific features. But for the most part, what you can do in Photoshop on the desktop, you can do using the APIs. Right? So please don't walk away thinking that you can only replace text and layers. If you want to change blending, or change saturation, hue, colors, those kinds of things programmatically, those are all available as well. Yeah. Absolutely. Now we've been talking about Photoshop, and we were showcasing earlier the importance of selecting the right templates. So let's take a look at what we can do here in InDesign, which is a similar templating concept. We have certain use cases that are better suited to platforms like InDesign than Photoshop, and vice versa. If we're looking for advanced image editing, we're probably going to do that in Photoshop or with Photoshop services and capabilities rather than in InDesign. But we have many use cases that actually blend both of them. So we use interoperability across our templates in order to bring certain creative designs through Photoshop and others through InDesign, using the same key art, using the same text, targeting many different aspect ratios. And that is one of the advantages we have of being able to automate these aspects when we set ourselves up for automation using these templates. And also if you want to do true omnichannel. Right? Because as I'm sure a lot of you in the room know, InDesign is awesome at doing large-scale, large-size print, whereas Photoshop has some limitations around size. So bringing those two things together, where you can do key art in Photoshop and then lay that onto an InDesign template, is really where you start to unlock true magic with building these assets for true omnichannel.
Absolutely. So happy to take you through that. And let's cut back here to what we were discussing around our templates. And let's go through a quick recap of that.
And so as we start leveraging these templates, Marcel was alluding earlier to the fact that when we're editing or authoring this content, we oftentimes end up with a very interesting naming convention. Right? We end up with final, final, final, extra-draft, this-is-really-the-last-one.psd, and that becomes a big challenge. So not only is setting ourselves up for automation important, but the next logical step is collaboration: being able to collaborate on these assets, version these assets, and start using our automated features within these collaboration platforms. So now that we've talked about templates, ideation, and content, that's exactly what we're going to take you through. We're going to talk about collaboration in an application here, Frame.io, Adobe Frame.io. I don't know if anybody in the audience has used Frame.io before. Yeah? Great. A great, simple application, great collaborative features, the ability to connect directly with our creative tools and power automation, which is amazing, and the ability to centralize feedback, commenting, and approval within this application as well. And as I mentioned, being automation-friendly is a key aspect. So what we're going to look at now in Frame.io is not only how we can collaborate with our team, with the folks around us, and get approval on our assets, but how we can collaborate with our automation features in order to automate specific tasks. Yeah. Another super important note to make is that when people hear the word automation, they can start getting nervous, because a lot of people will automatically think it's a lights-out thing: I'm going to start generating a bunch of assets, and then I'm going to be stuck with all these assets. Right? So it's super important to have something like Frame. It doesn't have to be Frame, but have some platform where you can have a human in the middle, or a human in the loop, that verifies the assets. What we're going to show in this particular scenario is that we've broken down those steps into very discrete actions, which we're going to see in a second. But ultimately, you can combine those. It depends on the organization. It depends on the requirements of your business. But you can feel reassured that when you're doing automation, you can do it at different levels, and you can break it down into different sizes so that it fits your needs. If you're highly regulated and you need to make sure that everything is verified by a certain team, you can achieve all of that with a proper review scenario. Awesome. Yeah. Let's take you through that in Frame.io here. And the first thing we're going to do is take the content we had. We had some imagery. We had some templates done in Photoshop. We're just going to go ahead and upload those to our Frame environment here, to our new project. So we're taking the imagery we had from Project Concept and our Photoshop file, and we'll take a look at those. Quick cut. Perfect. And so here's the same image we were just playing with in Project Concept. This is actually an even larger version of it. We extended it even further, preparing the asset for automation.
Now what we can do here is use simple review statuses. We're able to start setting labels on our assets and taking them through our approval here. So I'm going to leave a comment for Marcel: Marcel, I'd like to know your thoughts on this imagery here. And of course, we have all the notifications that'll be triggered here. Now this approval information, the status, the commenting, and all of the file information, will continue living with this asset as long as it is in Frame.io. And, of course, once we're done with the approval process, we could also take this information and persist it with the asset all the way to our other systems where it's going to be activated.
So great. We have our key art. Let's take a look here at some of our capabilities for automation. The first thing we're going to look at, just to simplify things here, is data. We're actually just using a spreadsheet, very simple, in Frame.io. We understand that data in your environment is likely going to come from other systems where you are authoring some of that content. But here, we have a simple spreadsheet with just a locale, a headline, and then, if I navigate to the second page, a subheadline and some references to our imagery. So a very simple concept. We're just preparing a little bit of data to use in the automation interaction that we're going to show you now. And to Gui's point: because we've named the fields, the layers and objects we want to replace, the same in Photoshop as in InDesign, that same data file, that same spreadsheet or data source, depending on where it's coming from in your environment, can power both templates. Right? So you'd be able to interchange that data. Yeah. That's a great point. So here we're using a Photoshop file, but let's go ahead and leverage our automation and extensibility capabilities in Frame.io to invoke a new action. Here, we've configured Frame.io to have specific new actions that leverage our automation platform. We're going to show you that in a moment and show you how these workflows are put together. But in this particular example, let's go ahead and render our template. So what we're going to do here is click on our template action and be prompted to select our data file. What data would we like to merge with this template? We're going to select the spreadsheet we have available...
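For reference, the data file being selected here boils down to something like this; the column names match the handlebar layer names in the template, and the values below are made up for illustration.

locale,headline,subheadline,image
en-US,Find your next adventure,Gear up for the outdoors,key-art-en.png
fr-FR,Trouvez votre prochaine aventure,Equipez-vous pour le plein air,key-art-fr.png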
And we'll get confirmation that, in this case, automation is working on it. It's important to note that not every feature in the automation pipelines we're implementing is going to be instant. A few of the features are going to take some time to process assets, especially as we're talking about assets at scale. We're going to have to power through a lot of content, a lot of assets, assemble some imagery, and sometimes we're dealing with large file formats as well. So that's why we're using features here in Frame.io that happen asynchronously while we continue looking at our content. - Right. - Oh, sorry. I was just going to jump in. I was going to say, as we generate this particular rendition where we've merged the text and the images, it's actually several API calls. Right? Because first, we're going to do a get manifest call on the PDF. Sorry, the PSD. So we can get all the layers, we can get those names that are replaceable. Then we're going to make a call to edit the PSD so that we can replace the Smart Objects and replace the text, and ultimately we get a resulting PSD document which is still layered, still editable. So if the creative's not happy with it, they can download it out of Frame, open it up in Photoshop, make changes, put it back in, and keep going. Absolutely. Yeah. So a great point to mention. This is still a Photoshop file, a fully layered document, and so we have all of the creative controls available here directly. In fact, if we wanted to take a look at the individual aspect ratios of each one of these documents, they're currently combined in the same Photoshop file. But another automation operation that we've included here is to extract artboards. So if we click on this one, once again, automation takes over and starts creating folders for us where we're going to go find our new assets. In here, we are now seeing automation create our multi-artboard-- Sorry, our single-artboard variations. So we've taken the Photoshop file we had, and we've essentially separated it into all of these specific aspect ratios, all of these specific formats, in individual Photoshop files. So once again, we still have all the creative control on these assets in a fully layered document.
And also one point to be clear is all the operations that Gui is doing to trigger the automation, those aren't out of the box with Frame.io. Right? So I just want to be clear on that. Those are extensibility or custom actions that you can configure inside of Frame. So super easy to do. Actually, I think we have a demo of that a little bit later, but ultimately, you can customize Frame to have those custom actions that are specific to your workflow. Yeah. Absolutely. Yeah. And we're going to take you through that in just a minute. So now with these documents, it's great to see that we can still create some actions with it. What I'd like to now show you is the extraction of key art. And so by using these multi aspect ratio artboards, we've also contained the content, and we've created variations of that content that are specific to certain aspect ratios. So now if we wanted to reuse that content across perhaps new experiences, new templates, etcetera, we have the ability here of extracting the key art. That one took a little extra second, but the extraction of the key art will actually go and pull all of that variable content that we had within our template in order to be able to reuse it. Specifically, the key art here is interesting because we now are guaranteed. We now know that we have a variation for this specific channel at this specific size, in this case, a square aspect ratio of 1080x1080. And so with these elements, we're just exploring all of the features, all of the capabilities that are possible through automation within a collaborative experience in Frame.io, in order to, as Marcel mentioned, keep our users in the loop. We understand that automation is not always a scenario that happens on its own all the way through to activation, so it's important to make sure we can collaborate with these features.
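Stepping behind the curtain for a second: when one of those custom actions is clicked, Frame.io calls a URL you configure. In our demos that receiver is a Workfront Fusion webhook, but conceptually it looks something like this minimal Node sketch; the payload fields and the message-style response are assumptions based on the Frame.io custom actions docs, and queueRenderJob is a hypothetical helper.

import express from "express";

const app = express();
app.use(express.json());

// Hypothetical helper: hand the long-running work to the automation
// platform (e.g. a Fusion webhook) and return right away.
async function queueRenderJob(assetId) {
  await fetch(process.env.FUSION_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ assetId }),
  });
}

app.post("/actions/render-template", async (req, res) => {
  const { resource } = req.body; // the asset the action was invoked on (shape assumed)
  await queueRenderJob(resource.id);
  // Returning a title/description displays a confirmation in the Frame.io UI
  // (response shape assumed from the custom actions docs).
  res.json({ title: "Render template", description: "Automation is working on it." });
});

app.listen(8787);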
So now this is great, but let's take a look at what we need to do to take these assets through to activation. We want some final rendered assets, so we're going to leverage our last automation here to create renditions. In this case, we are no longer going to export Photoshop documents. We're going to generate rasterized outputs, in this case static images. And these images are now something that we can use to activate in our other platforms. Perhaps we want to publish them to some of our digital asset management systems or get them ready for social media posts. So now we're dealing with images, PNGs as you can see, and these assets are now ready to be approved, or at least to be reviewed, to later be approved and trigger additional automation. So now, if we're happy with these, we can approve these four assets, and again, we have the ability to leverage automation to take these assets through to activation.
All right. And here we're showing a small number of assets. Four assets is not that impressive. Right? You can do thousands of assets, but here we kept it small just so that we don't clutter the screen and everybody loses track of where we're at. Yeah. Perfect. Back on here. Perfect. All right. So, evolving with technology. Right? Basically, what we want to be able to do is start with something simple. One of the big pitfalls when we work with customers is that they're looking to take their most complex Photoshop, Illustrator, or InDesign document and automate that, because they want us to prove that this is going to work. Well, ultimately, you're setting yourself up for failure, because that's not an easy task. Right? We talked earlier about the fact that it's important that the creatives change their way of thinking and their way of creating their templates and their files so that they can be automated. So what you want to do is start simple. Start with something that is relatively static. The text isn't going to be too long. Whatever the parameters are, you want to start with something simple. Get those learnings under your belt: get the API calls, get the automation, get the flow nailed. Once you do that, then you can evolve to the next level, which is more adaptive: relative positioning, leveraging AI to do smart cropping, all those kinds of things where you're starting to do something more advanced. It's basically a crawl, walk, run approach. Right? Then you evolve to where you can say, instead of having the specific aspect ratios and sizes that we had in the PSD document, allow for arbitrary sizes. Maybe somebody can just declare which size they want, and you can automatically and programmatically do that in the document.
Absolutely. Great. So I think that wraps up our creation stage quite well. We've created content. We've gotten it ready for activation. So now what we'd like to do is talk about automation. The next stage in our journey is unlocking all of these features. We showed you how that happens in a platform like Frame.io, and there are many other options to integrate with our automation capabilities. But now we'd like to take it to a slightly more technical portion of the presentation, where we're going to show you how that automation is actually happening within our platforms. So we're going to start with something that sounds simple, but that is very important. Exactly. Read before you test. Right? There are some Adobe-specific things about the way our APIs work. IMS, our Identity Management System, is how you get the authorization bearer token to use alongside your API keys. It's important to understand that, because we basically implemented OAuth. So you need to understand scopes, making sure that you've got the right credentials for the right APIs and that the right scopes are applied to them. And then learn how to generate your tokens so that you can invoke the calls and work with those. Another very important aspect is to recognize that some of our APIs are synchronous, like the Firefly GenAI APIs, while the APIs that require more processing power, such as Photoshop and InDesign, are asynchronous. Right? We'll show you the difference in a minute, but it's very important to understand. Now with the latest version of the APIs, V3, we have asynchronous capabilities for Firefly as well. So your programming model stays the same, and you can treat everything asynchronously. But it's very important to understand those two ways of doing it. Absolutely. So the first step, of course, is reading the documentation, a great first place to get started. But now let's take a look at how we actually test these services. We're going to show you a quick demonstration using some testing tools, and in this case, we're going to choose one of our favorite platforms to do that, called Postman. I don't know if there's a developer in the audience that has used Postman, but, yes, it's a great tool to invoke some of these APIs. So let's go ahead and take you through that. Yeah. And as you set that up: for every demo or POC that we build, we build up all the API calls with the customer assets in Postman. We get that all set up and tested to make sure everything works and all the results are exactly what we expect; then we move into automation. Right? So Gui's going to walk through that particular set of steps, but it's very important to do that. I think we're back. Perfect. All right. Great. Yeah. So taking you through Postman, let's talk about some of the key features we have here and how to leverage them to help us get through all of the various operations we're looking to do, and of course, to set up our automation. The first thing we're going to take a look at here in Postman is the use of variables. We're going to show you how we're using environments, in this case Firefly Services, to configure variables. We have variables such as the URL and variables such as our authentication parameters. These are authentication parameters that you're going to get through the Adobe Developer Console. Of course, they're interchangeable, and some of these credentials will give you access to certain services, perhaps not others.
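As an aside before the Postman tour: the IMS token exchange Marcel just described is, at its core, a single HTTP request. A minimal sketch in Node (18+, for global fetch); the endpoint is the documented OAuth server-to-server token endpoint, but the scope list varies by credential, so the one below is an assumption to check against your Developer Console project.

// Sketch: exchange OAuth server-to-server credentials for an IMS access token.
const res = await fetch("https://ims-na1.adobelogin.com/ims/token/v3", {
  method: "POST",
  headers: { "Content-Type": "application/x-www-form-urlencoded" },
  body: new URLSearchParams({
    grant_type: "client_credentials",
    client_id: process.env.CLIENT_ID,
    client_secret: process.env.CLIENT_SECRET,
    scope: "openid,AdobeID,firefly_api,ff_apis", // assumption: match your credential's scopes
  }),
});
const { access_token, expires_in } = await res.json(); // token is good for roughly 24 hours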
So it's important to set up our testing tools appropriately, so we don't have to copy a lot of that content around, so we don't have all of these secret tokens shared everywhere on our desktops, and so we can get some interoperability and reuse out of the requests we're using in these tools. We'll show you what that looks like here in one of the sample environments we have available. We have some URLs, we have some client credentials, and a lot of the information is kept here, and it gives us the ability to toggle through some of these environments and credentials as we start making our requests to Firefly Services.
As Marcel mentioned, authentication is the first part of getting started. So in this case, we're communicating directly with our IMS service, passing our client ID and our client secret, essentially a protected, encrypted password, and that gives us back an access token. This token is the one we're going to use to authenticate all of our subsequent API calls, and most of the operations we're relying on today are going to use these access tokens. Now, as you can imagine, if you do this 100 times in a day, it would be quite painful to copy and paste this access token around to all of the requests. So now let's take a look at our second pro-tip for the platform, which is the use of scripts. This is where we're showing you a little bit of code. We don't want to go too far into that today, but we're using a small script here to extract the token from the response we received from IMS. We're going to set a variable with that token, and we're going to reuse that token in our subsequent calls to make sure that we're always authenticated to our Firefly Services. So if we take a look here at our collection, you'll see some variables being set here at the bottom, and this variable was set programmatically as soon as I invoked the IMS authentication. Now that lets me use it in all of the other requests I'm going to make. And for those of you that don't know, the IMS token is good for 24 hours. Right? So that's why you need to be able to refresh that token. Yeah. Great point. Actually, yes. So it does change quite often. We can't just set a variable and use it forever. That's why we're going to have to keep refreshing that token, and that's a very important security aspect of using the OAuth authentication mechanism. So now with this variable, you can see that we're setting up, at a collection level, our token to be used as a bearer-type authentication, and that authentication will now be used for all of the requests contained within this collection.
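For reference, the script Gui is describing is only a couple of lines in the request's post-response Scripts tab; the variable name here is our own choice.

// Runs in Postman after the IMS request: grab the token from the JSON
// response and store it as a collection variable, so every other request
// can reference it as {{access_token}}.
const { access_token } = pm.response.json();
pm.collectionVariables.set("access_token", access_token);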
So great. We've authenticated to the service. That is our first operation here. Let's go ahead and start generating some images. We're going to invoke our Firefly model using the Firefly Services API. It's an important distinction. I know there's a little bit of confusion around what is Firefly and what is Firefly Services, but just to demystify that quickly: Firefly is our set of generative models. Firefly Services gives us access to these models programmatically, as well as to many of our creative capabilities, such as our InDesign, video, or Photoshop APIs. So combining both of these concepts, we're going to prepare our call to Firefly Services to invoke our Firefly model, and we're going to generate the same type of imagery we were just looking at in Project Concept, but this time we're going to do it programmatically. So we have a variable again, another script, setting our prompt here. You can imagine that from an automation perspective, we're not typing prompts into an API call. We're using variables. We're using data that's coming from other systems. We're at times using large language models to take data, context, perhaps user-segment information, to programmatically create new prompts that would work well for our audiences and, again, enable programmatic ideation. But in this case, we're going to just type a prompt, and we're going to see what we get from that. Yeah. Great example. In some of the POCs that we've built this year, we actually use a lot of LLMs to take, for example, just a location or a theme of what the campaign is about, and we leverage AI to generate the prompt. So you don't have to be a prompt engineer. You can just have AI do it based on some keywords that are part of your campaign.
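Outside of Postman, that same request is a single HTTP call. A minimal sketch, assuming the v3 generate endpoint and field names from the Firefly API docs; verify both against the current documentation.

// Sketch: text-to-image with Firefly Services. In a real pipeline the
// prompt would come from a data source or an LLM, as discussed above.
const res = await fetch("https://firefly-api.adobe.io/v3/images/generate", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.ACCESS_TOKEN}`, // from the IMS call earlier
    "x-api-key": process.env.CLIENT_ID,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    prompt: "origami-styled mountain landscape, vibrant colors",
    numVariations: 1,
    size: { width: 2048, height: 2048 },
  }),
});
const { outputs } = await res.json(); // outputs[0].image.url should point at the generated image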
Great. Perfect. So we got an image, and it looked pretty good, similar to what we saw in Project Concept. We're now going to invoke Photoshop. In this case, we want to show you one of the aspects of Photoshop that we were just talking about, which is the asynchronous responses we get from some of these services that are processing a little bit more data, a little bit more content. So using Photoshop, we're going to send a request here to the Get Manifest operation. That's something that helps us understand how our document is composed and what is essentially contained within it. Something a little bit different here: we're not getting a direct response from this API. We're getting an HTTP 202 code. I don't want to get too technical here, but that means the system has accepted the work and is still working on it. So we're going to have to go ask the system whether it's completed, whether it's done with this task. And that's what we'll do here with the URL that is provided, essentially the job reference. We're once again using a script here to store this URL so we can refer to it in the next call. And if you've looked at our API documentation, you've seen that you can then make a request to this URL to find out more about this job, this task. In this case, we get a successful response from the API, and you can see that the status is succeeded. That's an important aspect of working with asynchronous jobs. We may successfully start a job, but that does not necessarily mean the job will complete successfully, or we may query the status of the job while it's still running. So if you're writing code, most of the time we think about it in a synchronous fashion: with every line of code, we move on to the next one. In this case, we're likely going to have to keep querying that service to find out the status of the job before we automate the next set of operations. So it is a bit of a mental shift as a developer to make sure that we're coding against these asynchronous patterns.
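That polling pattern is worth spelling out. A minimal sketch; the exact location of the status field varies a little between services, so treat the response shape here as an assumption to check against the docs.

// Sketch: poll an asynchronous Firefly Services / Photoshop API job until
// it leaves the pending/running states, instead of assuming it's done.
async function waitForJob(jobUrl, headers, intervalMs = 2000) {
  for (;;) {
    const job = await (await fetch(jobUrl, { headers })).json();
    const status = job.outputs?.[0]?.status ?? job.status; // shape varies by service
    if (status === "succeeded") return job;
    if (status === "failed") throw new Error(`Job failed: ${JSON.stringify(job)}`);
    await new Promise((resolve) => setTimeout(resolve, intervalMs)); // still pending/running
  }
}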
And here we have it. Yeah. We're showing you the document manifest. If you've not used the get document manifest API call and you're a developer, you need to get very familiar with it, because you'll be able to introspect the Photoshop document and understand those little placeholders that we had with the special naming. You'll be able to figure out where those are, and then you can programmatically build your payload to modify the Photoshop document. Again, another difference between the templating formats: Photoshop has the ability to do a get manifest. InDesign, unfortunately, does not. Right? So if you're using an InDesign document as a template, you as a developer have to be very familiar with what the objects are and what the naming conventions are inside the document, as opposed to Photoshop, where you can introspect the document manifest and figure that out programmatically. Absolutely. Yeah. So now that we've tested all of our APIs, we're going to quickly take a look at automating these API sequences together. To do that, we're going to use Workfront Fusion. We're going to quickly show you the event-driven and user-initiated actions we have within the platform. Workfront Fusion, for those who are not familiar, is a low-code automation platform, so it's a great place to start building these sequences of operations, calling these APIs one at a time, or several at a time, to enable this automation. And using its webhooks and connectors, we're able to accelerate the development of these automated pipelines. Then we're going to show you a quick sample of a tool that we really like called JSONata. It's an open-source library that's also available in some other products, such as the Document Generation API in Document Cloud. And one of the interesting tips I want to point out here, even though we want to keep it short, is that low-code automation platforms excel at sequencing these operations and holding the business logic within them. They do not excel at the heavy lifting of large binary documents. So, pro-tip: when we leverage platforms like Fusion, we never really process large binary payloads and files within these workflows. We make references to files that exist in our storage systems via mechanisms like pre-signed URLs. If you've ever interacted with a storage mechanism like Azure Blob Storage or Amazon S3 buckets, that is the mechanism we use to keep assets outside of our automation platform. Perfect. So let's go ahead and show you briefly what that looks like.
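If pre-signed URLs are new to you, here is a minimal sketch of generating one for S3 with the AWS SDK; the bucket and key are placeholders, and an Azure Blob SAS URL plays the same role on that platform.

// Sketch: create a time-limited GET URL for a template stored in S3, so
// services like the Photoshop API can read the file without it ever
// passing through the automation platform itself.
import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });
const inputHref = await getSignedUrl(
  s3,
  new GetObjectCommand({ Bucket: "my-assets", Key: "templates/template.psd" }),
  { expiresIn: 3600 } // valid for one hour
);
// A PutObjectCommand signed the same way gives you an output href the
// service can write results to.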
And Gui was underplaying it when he said you could do a little bit of testing with Fusion. As you can see from the total yearly operations, that's the number of flows that we've run; we're actually showing you a view of our internal instance of Fusion, where we run our own automations.
Yeah. Perfect. So let's take a look at the custom actions we were just invoking in Frame.io from that perspective. Perhaps a little bit intimidating if you're opening Fusion for the first time, but these are the actions we were clicking through in Frame.io. You can see how we were automating a sequence of operations between, in this case, Photoshop, JSONata for adding a little bit of logic, and Frame.io itself. That is essentially what powered the features we just saw and, quite frankly, what powers a lot of the interactions you've seen in other sessions over the last two days in these demos. We quite often use low-code automation to make that possible and put together these system integrations quickly and efficiently. The first thing I want to show you here, if I switch back to Postman, is a little power-user feature: I have the ability to get, essentially, the curl command for this request. If you're familiar with curl, that means I can invoke this API via the command line. But I can also copy that and paste it into Fusion and get an amazing new connector, an HTTP connector, that replicates this API call in my low-code scenario. So you can take a look at it here. This is the same Firefly image generation API call, now in my low-code scenario. I'm ready to start using it, at least in the context of this request. But as Marcel mentioned, a small gotcha, or pro-tip, here: this request is still using the same authentication token that we created in the first place, so it's only going to work for the next 23 hours. Of course, we don't use this mechanism when actually automating these sequences, but it's nonetheless a very interesting feature of the platform, especially if you're looking for quick, iterative processes. Now, if we don't want to get that far into HTTP connector configuration, we can also leverage the out-of-the-box connectors we have, such as Adobe Firefly and Photoshop, readily available here in the platform. And so all of the parameters we had in the API configuration, the ones that match the API documentation, are available here in a low-code context.
Great. Let's fix up this scenario. And now let's take a look at, again, one of our favorite tools, JSONata. This is a small open-source library. The way we use it, to keep it short, is almost like a regular expression for transforming JSON data. That's a mouthful, but what it means is that a lot of the integrations and interactions we perform between APIs are handling JSON data, and these JSONata transformations help us move data around, arrange the fields, and prepare those requests to our APIs to enable all of that dynamic behavior: find all of the replaceable elements in this Photoshop file and set all of the data within it, or extract all of the artboards in this Photoshop file by finding their references. Yeah. Super handy in low-code, no-code environments. And actually, one of the catchphrases that we use is "JSONata to the rescue." There are a lot of times you're going to get caught trying to build something dynamic, and you end up in this cycle of trying to figure out how you're going to pull this thing together, and JSONata really can solve those problems.
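As a taste of why that catchphrase exists: pulling every handlebar-named layer out of a get-manifest response collapses into a single expression. A minimal Node sketch with the open-source jsonata package; the sample manifest shape below is made up for illustration, so verify the field names against a real response.

// `**` is JSONata's descendant wildcard, so this finds handlebar-named
// layers at any nesting depth in one line.
import jsonata from "jsonata";

const manifestJob = {
  outputs: [{ layers: [
    { name: "{{headline}}", type: "textLayer" },
    { name: "Artboard 1", type: "layerSection", children: [{ name: "{{background}}", type: "smartObject" }] },
  ] }],
};

const expr = jsonata('$distinct(**.name[$contains($, "{{")])');
const placeholders = await expr.evaluate(manifestJob); // ["{{headline}}", "{{background}}"]

In a Fusion scenario, only the quoted expression itself is what you'd configure in the JSONata step; no JavaScript required.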
Perfect. So there you go: without going too far into JSONata, those are some of the pro-tips we use in Workfront Fusion and how we enable all of that automation in platforms like Frame.io. And I keep checking my watch here; we have one last thing we want to show you. We've shown you collaboration and automation, and now we want to show you activation. So let's go ahead and move on to activation and see what we can do once we've automated all of these various portions. I'm going to show you this very quickly here.
Perfect. So, from creation to activation. Let's talk about some of the functionality we have now with all of the content we've generated. Perhaps we want to publish it to our content management platforms, as we talked about, or maybe bring it into platforms like Adobe Express, in order to take the final assembled assets that we have and enable our communicators to perform the last-mile changes or the user-segment personalization that we're looking to accomplish here. So we're going to show you very quickly Adobe Express working with Adobe Experience Manager, and how we've published the content using Fusion to Adobe Experience Manager, where it can now be used in Adobe Express. So let's go ahead and do that. Yeah. Because you're going to hear a lot of that. Right? Your marketers, your communicators are going to want to be able to remix content. Like, this campaign worked last year: I love the template, but the images don't work. Or, for example, MAX London versus MAX North America. Right? We want to change those different images. So by having those assets inside of a DAM like AEM Assets, you can now make those templates, those documents, available to marketers to remix and self-service. Absolutely. So here in Adobe Express, we're moving a little bit fast, but we're going to take our Adobe Photoshop document, the one that we were just automating through some of the sequences in Frame. We're actually going to take the source document here. This is still a layered Photoshop file, and we're importing it into Express in order to continue assisting with the edits that we're going to be doing with our communicators. We're taking that document, and we're actually going to lock certain aspects of it in Express to ensure that some elements don't change, but that our communicators have the ability to replace other key elements, such as the image. We're now going to take a look at the brand. We've created a brand here in Express for Luma and for these templates. That brand is accompanied by some of the content, such as fonts and colors, and we're going to save our template to the brand.
We can configure some of the restrictions we have here in Express. We saw that in the Keynote yesterday, actually. And so configuring the template restrictions lets us define what exactly our users are going to be able to do with that template. Once again, we started with a Photoshop template. We're now configuring some of these parameters in Adobe Express, and we're going to save these restrictions and save that template.
Now if we switch roles a little bit and we take a look at a communicator using Adobe Express, we see that we can navigate through to our brand, go find our template, and we can start leveraging it to bring in some of our assets.
One of the really cool features we're going to show you now is how we've used automation with Frame.io to actually publish some of these assets. Once I hit approve in Frame.io, and we didn't go through this in much detail, we actually published these assets programmatically to Adobe Experience Manager, getting them ready for activation across our multiple channels. So now, using Express with an integration directly into Adobe Experience Manager, I can go select the image that was published automatically to that platform. And as a communicator, I can decide what type of content I'm looking to bring in for these specific channels and make the last-mile edits I'm looking to do here in Express. And if I want to, I can reactivate this asset and save it back to Adobe Experience Manager to be used in other activation channels. In this case, we're going to enter a little bit of metadata and persist all of that to Adobe Experience Manager, where it will continue living with the asset. And as I publish that again to Adobe Experience Manager, I'll be able to use it in platforms like Journey Optimizer, or even Adobe Experience Manager Sites, and be able to leverage things like content analytics. So here's my asset, published automatically into Adobe Experience Manager, again getting it ready for activation. Lastly, back in Express, if I'm looking to target a more instant channel, let's go ahead and publish to social media. Perhaps this would be a great post for a platform like Instagram or X. We're going to use a generative feature, again in Express, to generate a new caption and get that post ready for activation to social. And if we're satisfied with it, we can go ahead and publish it right away to the social media channels. And so we've taken you through ideation, creation, automation, and now activation, in under 60 minutes, by a few seconds. That was a bit of a session; we wanted to cover a lot of content here today. So let's do a quick recap and maybe see if we have any questions.
Thank you.
[Music]