RESPONSIBLE ENTERPRISE AI AT ADOBE

Setting the responsible AI standard for businesses.

AI development at Adobe is guided by the core principles of accountability, responsibility, and transparency. These values shape every stage of our development process — ensuring that our AI-first capabilities and tools are not only powerful, but trustworthy for even the largest enterprises.

AI ethics principles.

Adobe’s responsible AI development and deployment are based on three key principles. This responsible approach to AI empowers enterprises to unlock creativity, accelerate productivity, and deliver personalized customer experiences at scale — all within a secure, ethical, and unified AI platform.

Accountability.

We take ownership of the outcomes of our AI technology, with teams dedicated to responding to concerns. We also test for potential harms, take preemptive steps to mitigate them, and use systems to address unanticipated outcomes.

Responsibility.

We thoughtfully evaluate and consider the impact of AI’s deployment. We strive to design for inclusivity and assess the impact of potentially unfair, discriminatory, or inaccurate results that may perpetuate biases and stereotypes.

Transparency.

We’re open about our use of AI and give our customers a clear picture of our AI tools and their application. We want them to understand how we use AI, its value, and the controls available to them with Adobe.

How Adobe builds responsible AI.

The five pillars of Adobe responsible AI: Training, Testing, Impact assessments, Diverse human oversight, and Feedback.
Training
Enterprise AI is only as effective and precise as the data used to train it. What counts as an appropriate result depends on the individual use case, which is why we build datasets to meet specific requirements for customer experience orchestration at scale, ensuring results are appropriate for the way the AI will be used.
Testing
Adobe rigorously and continuously tests AI-powered features and products to reduce the risk of harmful biases or stereotypes in AI-driven results. This includes automated testing and human evaluation.
Impact assessments
Engineers developing AI-powered features must submit an AI Ethics Impact Assessment, which is designed to identify features and products that could perpetuate harmful biases and stereotypes. This allows our AI Ethics team to focus its review where the potential impact is highest, so we can maintain the speed of innovation and stay focused on enterprise-grade features and products while meeting the highest ethical standards.
Diverse human oversight
AI-powered features with the highest potential ethical impact are reviewed by a diverse, cross-functional AI Ethics Review Board. Diversity across demographics is critical for providing a variety of perspectives and identifying potential issues that might otherwise be overlooked.
Feedback
Through built-in feedback mechanisms, users can report potentially biased outputs so we can remediate them. Enterprise AI is an evolving technology, and we want to work with our community to ensure our tools and products are beneficial for everyone. If you have a question about AI ethics or want to report a possible issue, please contact us.

Learn more about responsible AI at Adobe.

Questions? We have answers.

How does Adobe help ensure AI accountability?

Adobe helps ensure AI accountability by:

  • Establishing governance processes to track training data and AI models, including labeling datasets and models.
  • Requiring an AI Impact Assessment (as part of our services development process) to help ensure an ethics review occurs before we deploy new AI technologies.
  • Creating an AI Ethics Review Board to oversee AI development and offer a sounding board for ethics concerns, while safeguarding whistleblowers.
  • Developing processes to ensure remediation of any negative AI impacts discovered after deployment.
  • Educating engineers and product managers on AI ethics issues.
  • Offering external and internal feedback mechanisms to report concerns about our AI practices and features.

What does the responsible development of AI mean for Adobe?

Adobe believes that responsible development of AI encompasses the following:

  • Designing AI systems thoughtfully.
  • Evaluating how they interact with end users.
  • Exercising due diligence to mitigate unwanted harmful bias.
  • Assessing the human impact of AI technology.

Responsible development of AI also requires anticipating potential harms, taking preemptive steps to mitigate them, measuring and documenting when harm occurs, and establishing systems to monitor and respond to unanticipated harmful outcomes.

How does Adobe remediate the output of AI systems for harmful bias, regardless of the input?

As part of developing and deploying our AI systems, Adobe seeks to mitigate bias related to human attributes (e.g., race, gender, color, ethnic or social origin, genetic features, religion or political belief, geography, income, disability, age, sexual orientation, and vocation). We design with inclusivity in mind. We prioritize fairness in situations with significant impacts on an individual’s life, such as access to employment, housing, credit, and health information. We also consider whether the advantages of using AI outweigh the risk of harm from using it.

This notion of fairness doesn’t mean all customers are treated uniformly. Some of the most typical AI use cases segment individuals in ordinary and acceptable ways, such as in demographic marketing or personalized product recommendations. Responsible development of AI means using AI in reasonable ways that accommodate the norms and values of our society.

How does Adobe seek responsibility in its digital media tools?

To address misinformation, Adobe is committed to advancing provenance tools and solutions that bring more transparency and trust to the digital ecosystem. We believe it is critical to give people the information they need to understand the source and lifecycle of a piece of digital content, including whether AI was used in its creation or editing.

How does Adobe ensure AI transparency?

Transparency is the reasonable public disclosure, in clear and simple language, of how we responsibly develop and deploy AI within our tools. Adobe values our trusted relationship with our customers — transparency is integral to that relationship.

This transparency includes sharing information about how or whether Adobe collects and uses customer assets and usage data to improve our products and services, as well as general disclosure of how data and AI are used in our tools and services.