Responsible Innovation in the Age of Generative AI

Image created with Adobe Firefly

Today, it’s hard to go a full 24 hours without reading or hearing something about generative AI. The pace with which this new technology has leapt into the public consciousness is astounding, and, quite frankly, justified. Type “3D render of a paper dragon, studio style photography” and you’re instantly offered multiple variations of a portrait of a ferocious origami creature; combine a few data points with a simple instruction and a chatbot can produce a compelling marketing email. It’s easy to see the power this technology can unlock for individual creators and businesses alike. Generative AI removes the barrier between imagination and the blank canvas, allowing creators to generate content the way they dreamed it up. It brings precision, power, speed and ease to existing workflows, freeing people to focus on the more strategic or creative parts of their work. And it can edit copy to match a desired tone of voice, create message variations, or summarize texts in a matter of seconds.

Generative AI opens the door to myriad new possibilities for how we work, play, create and communicate; but it also opens the door to new questions about ethics and responsibility in the digital age. As Adobe and others harness the power of this cutting-edge technology, we must come together across industries to develop, implement and respect a set of guardrails that will guide its responsible development and use.

Grounded in Ethics and Responsibility

Any company building generative AI tools should start with an AI Ethics framework. A set of concise, actionable AI Ethics principles and a formal review process built into the company’s engineering structure can help ensure that it develops AI technologies, including generative AI, in a way that respects its customers and aligns with its values. Core to this process are training, testing and, when necessary, human oversight.

Generative AI, as with any AI, is only as good as the data on which it’s trained. And even with good data, you can still end up with biased AI that unintentionally discriminates or disparages and causes people to feel less valued. Mitigating harmful outputs starts with building and training on safe and inclusive datasets. For example, Adobe’s generative AI model Firefly is trained on Adobe Stock images, openly licensed content and public domain content where copyright has expired. By training on diverse datasets that have been curated and filtered to remove violent, derogatory and other inappropriate content, models never learn from that type of content in the first place. But it’s not just about what goes into a model; it’s also about what comes out. We constantly test our models for safety and bias and provide feedback mechanisms for users to report any concerns so we can take steps to remediate them. In addition to training and testing, there are other actions companies can explore to ensure their models are not perpetuating harmful stereotypes or reinforcing existing biases, such as using block and deny lists to restrict certain words from being used in a prompt.
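
To make the deny-list idea concrete, such a safeguard can run as a simple pre-check before a prompt ever reaches the model. The Python sketch below is a minimal, hypothetical illustration; the term list and function name are placeholders, not Adobe’s implementation:

```python
import re

# Placeholder entries; a real deny list would be curated and maintained.
DENY_TERMS = {"denied_term_a", "denied_term_b"}

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt contains any denied term
    (whole-word, case-insensitive), so it can be blocked
    before reaching the model."""
    words = set(re.findall(r"[\w']+", prompt.lower()))
    return words.isdisjoint(DENY_TERMS)

print(is_prompt_allowed("3D render of a paper dragon"))  # True
print(is_prompt_allowed("a denied_term_a scene"))        # False
```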


Even when a company is building its generative AI through partnerships and integrations, it can still apply its values to the output of these models. And if the model output is public-facing, companies can always add a human in the loop to ensure the output meets their expectations.
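
One lightweight way to add that human in the loop is a review gate that holds generated output until a person approves it. The following sketch is purely illustrative, assuming a simple in-memory queue; it is not any particular product’s workflow:

```python
from dataclasses import dataclass

@dataclass
class Draft:
    content: str
    approved: bool = False  # flipped only by a human reviewer

review_queue: list[Draft] = []

def submit_for_review(generated_text: str) -> Draft:
    """Hold model output for human sign-off instead of publishing it directly."""
    draft = Draft(content=generated_text)
    review_queue.append(draft)
    return draft

def publish(draft: Draft) -> str:
    """Refuse to release anything a reviewer has not explicitly approved."""
    if not draft.approved:
        raise PermissionError("A human reviewer must approve this draft first.")
    return draft.content

draft = submit_for_review("Generated marketing copy...")
draft.approved = True  # set by a human reviewer in a real workflow
print(publish(draft))
```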

Beyond ethics, enterprises may also want reassurance that the AI model they are using minimizes their legal risk. For these companies, it’s important to confirm that their AI partner has put the right safeguards in place to moderate the content being created or, better yet, has ensured the model was trained safely in the first place. This can head off issues ranging from copyright and brand concerns to inappropriate images and text.

Transparency Builds Trust

We also need transparency about the content generative AI models produce. Think of our earlier example, but swap the dragon for a speech by a global leader. Generative AI raises concerns over its ability to conjure up convincing synthetic content in a digital world already flooded with misinformation. As the amount of AI-generated content grows, it will be increasingly important to give people a way to deliver a message and prove that it is authentic.

At Adobe, we’ve implemented this level of transparency in our products with Content Credentials. Content Credentials allow creators to attach information to a piece of content, such as their name, the date and the tools used to create it. Those credentials travel with the content wherever it goes, so that by the time someone sees it, they know exactly where it came from and what happened to it along the way. We’re not doing this alone; four years ago, we founded the Content Authenticity Initiative to build this solution in an open way so anyone can incorporate it into their own products and platforms. Today, more than 900 members from across technology, media and policy are joining together to bring this solution to the world.

And for generative AI specifically, we automatically attach Content Credentials to indicate when something was created or modified with generative AI. That way, people can see how a piece of content came to be and make more informed decisions about whether to trust it.
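
As a toy illustration, the sketch below builds the kind of provenance record a credential might carry. It is a simplified assumption only: real Content Credentials follow the open C2PA standard and are cryptographically signed, which this example does not attempt, and the field names here are hypothetical:

```python
import json
from datetime import datetime, timezone

def make_credential(creator: str, tool: str, ai_generated: bool) -> str:
    """Build a toy provenance record intended to travel with a piece of content."""
    record = {
        "creator": creator,
        "created": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "generative_ai": ai_generated,  # disclosed automatically for AI output
    }
    return json.dumps(record, indent=2)

print(make_credential("Jane Doe", "ExampleImageTool", ai_generated=True))
```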

Image created with Adobe Firefly

Respecting Creators’ Choice and Control

Creators want control over whether their work is used to train generative AI. Some want their content kept out of AI training altogether. Others are happy to see it used in training data to help this new technology grow, especially if they can retain attribution for their work. Using provenance technology, creators can attach “Do Not Train” credentials that travel with their content wherever it goes. With industry adoption, this will prevent web crawlers from using works carrying a “Do Not Train” credential as part of a dataset. Together, along with exploratory efforts on monetizing style and contributions, we can build generative AI that enhances not just creators’ experiences, but their profiles as well.
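
In practice, honoring that signal could look like a simple filter in a dataset-collection pipeline. The sketch below is an illustrative assumption; the metadata key and helper names are hypothetical, not a real crawler API:

```python
def allows_training(item: dict) -> bool:
    """Check the item's attached credentials for a do-not-train flag."""
    return not item.get("credentials", {}).get("do_not_train", False)

def collect_training_items(crawled_items: list[dict]) -> list[dict]:
    """Keep only items whose credentials permit use in a training set."""
    return [item for item in crawled_items if allows_training(item)]

corpus = collect_training_items([
    {"url": "example.com/art1", "credentials": {"do_not_train": True}},
    {"url": "example.com/art2", "credentials": {}},
])
print([item["url"] for item in corpus])  # only example.com/art2 remains
```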

An Ongoing Journey

We’re just scratching the surface of generative AI, and the technology is improving every day. As it continues to evolve, generative AI will bring new challenges, and it’s imperative that industry, government and community work together to solve them. By sharing best practices and adhering to standards for developing generative AI responsibly, we can unlock the unlimited possibilities it holds and build a more trustworthy digital space.

Image created with Adobe Firefly