Adobe’s responsible AI development and deployment is based on three key principles: accountability, responsibility, and transparency. This responsible approach to AI empowers enterprises to unlock creativity, accelerate productivity, and deliver personalized customer experiences at scale, all within a secure, ethical, and unified AI platform.

Accountability.
We take ownership of the outcomes of our AI technology, with teams dedicated to responding to concerns. We also test for potential harms, take preemptive steps to mitigate them, and use systems to address unanticipated outcomes.
Responsibility.
We thoughtfully evaluate and consider the impact of AI’s deployment. We strive to design for inclusivity and assess the impact of potentially unfair, discriminatory, or inaccurate results that may perpetuate biases and stereotypes.
Transparency.
We’re open about our use of AI and give our customers a clear picture of our AI tools and their application. We want them to understand how we use AI, its value, and the controls available to them with Adobe.
Adobe believes that responsible development of AI requires anticipating potential harms, taking preemptive steps to mitigate them, measuring and documenting when harm occurs, and establishing systems to monitor and respond to unanticipated harmful outcomes.
As part of developing and deploying our AI systems, Adobe seeks to mitigate bias related to human attributes (e.g., race, gender, color, ethnic or social origin, genetic or identity preservation features, religion or political belief, geography, income, disability, age, sexual orientation, and vocation). We design with inclusivity in mind, and we prioritize fairness in situations with significant impacts on an individual’s life, such as access to employment, housing, credit, and health information. We also consider whether the advantages of using AI outweigh the risk of harm.
This notion of fairness doesn’t mean all customers are treated uniformly. Some of the most typical AI use cases segment individuals in ordinary and acceptable ways, such as in demographic marketing or personalized product recommendations. Responsible development of AI means using AI in reasonable ways that accommodate the norms and values of our society.
Transparency is the reasonable public disclosure, in clear and simple language, of how we responsibly develop and deploy AI within our tools. Adobe values our trusted relationship with our customers — transparency is integral to that relationship.
This transparency includes sharing information about how or whether Adobe collects and uses customer assets and usage data to improve our products and services, as well as general disclosure of how data and AI are used in our tools and services.