Guide
The AI inflection point.
How to adopt AI responsibly in your organization.
The road to responsible AI innovation.
In this guide, learn about the right tools, strategy, and mindset to effectively assess, pilot, adopt, and monitor AI solutions for your organization.
The imperative for adopting AI responsibly today.
As AI becomes an industry-wide catalyst for transformation, executives face unprecedented pressure to innovate rapidly, address competitive threats, and drive operational efficiencies. However, the race to adopt AI within enterprises introduces new risks. Without careful oversight, this rapid AI implementation can lead to regulatory missteps, operational disruptions, and long-term reputational damage. Balancing the drive for speed with responsibility is no longer a trade-off — it is a strategic imperative.
Governing AI and managing its risks often feels like a herculean task. With new guidelines, frameworks, and policies — and evolving international, federal, and local legislation — these complexities can often obscure where organizations should begin and create challenges for stakeholders from the outset. At Adobe, our experience in responsible innovation, grounded in our AI ethics principles of accountability, responsibility, and transparency, has provided us with deep insights into navigating these challenges.
Our experience has shown that while the road to responsible AI innovation may appear daunting, success is within reach with the right tools, strategy, and mindset.
One of the core decisions organizations face as they develop their AI strategies is whether to build, buy, or customize an AI solution, or combine all three. The following approach is designed for organizations seeking to source AI solutions externally, and it builds on their existing values and business practices — meeting stakeholders where they are today. Grounded in independent research and informed by expert interviews on AI governance, the framework offers an actionable path forward, helping organizations assess their current standing and providing best practices for embedding responsible AI principles across the enterprise. It includes practical steps for establishing employee generative AI use guidelines, evaluating vendors through robust questionnaires, and updating governance processes for AI to keep pace with the evolving landscape.
No matter where you are on your AI journey — whether assessing your organization’s AI readiness or refining existing strategies — this framework provides a proven approach, blending human ingenuity and cutting-edge AI governance to scale responsibly. By following this roadmap, organizations can assess, pilot, adopt, and monitor AI solutions effectively, building a resilient foundation that fosters trust, mitigates risk, and drives sustained business value.
Framework for building a scalable and ethical AI future.
Successful generative AI implementation requires more than a checklist of actions — it requires a strategic, layered approach where each phase builds on the last, creating a foundation for sustainable innovation and ethical AI practices. This framework serves as a series of interlocking building blocks, designed to integrate responsible AI practices at every stage — from assessing organizational readiness to scaling effectively and continuously monitoring AI systems.
Rather than seeing AI adoption as a process-driven exercise, this framework focuses on building systems that evolve in harmony with your organizational needs. It emphasizes the critical balance between human oversight and advanced AI technology, ensuring that organizations can leverage AI’s potential while also aligning with ethical, regulatory, and operational goals.
The phases within this framework — readiness assessment, responsible piloting, scaling adoption, and ongoing monitoring — support long-term success as integrated pillars that reinforce one another at every step. By embedding responsible AI practices at each phase, companies can navigate the complexities of AI adoption while fostering trust, transparency, and accountability.
Grounded in expertise and informed by research.
Adobe engaged an independent research firm to survey generative AI adoption, gathering insights from over 200 IT, organizational, and compliance leaders across diverse industries. The research highlights current practices, challenges, and successful strategies in AI adoption. Additionally, Adobe conducted in-depth interviews with industry experts and reviewed global standards, including the European Union AI Act, NIST AI Risk Management Framework, Singapore’s AI Verify, IEEE Standard 7000, and ISO 42001. These efforts ensure that the framework is applicable across industries and organizational sizes, regardless of AI adoption progress.
1. Assess: Organizational readiness and selecting responsibly built AI technology.
Evaluate organizational readiness.
While many organizations have initiated AI adoption, only 21% of those surveyed have fully developed their responsible AI priorities, with 78% still in progress or in the planning stages — underscoring a clear need for structured readiness assessments. Leaders in IT, compliance, risk management, and strategy are essential to building a foundation in responsible AI. This begins with a comprehensive review of the organization’s governance frameworks and AI literacy to identify gaps that could impact AI adoption.
Organizations need to take a holistic approach to evaluating AI readiness, blending top-down leadership initiatives with bottom-up feedback from employees who engage with AI daily.
Actions for readiness:
- Conduct a comprehensive readiness audit: Evaluate the organization’s technical infrastructure, governance standards, AI-related policies, responsible innovation frameworks, and compliance practices to identify strengths and areas for improvement — ensuring alignment with both strategic goals and the demands of adopting AI responsibly.
- Identify and address key gaps collaboratively: Document additional AI policy needs in security, privacy, legal, compliance, and transparency standards while engaging cross-functional teams — including IT, legal, compliance, and business units — to prioritize actionable next steps.
- Establish and empower governance teams: Designate teams to oversee AI governance, ensuring compliance with both internal responsible AI standards and external regulatory frameworks — equipping these teams with the authority and resources to proactively manage risks and adapt to evolving requirements.
Select AI technology that is built responsibly.
Begin with a thorough review of your company’s existing governance standards. These standards likely already encompass key areas such as privacy, security, accessibility, and legal considerations. Global benchmarks like the General Data Protection Regulation (GDPR) and AI-specific frameworks are part of maintaining compliance and risk oversight in many organizations. Additionally, regional policies and industry-specific standards — such as AI audits and responsibility standards — should be incorporated into governance standards.
Once your organization has outlined the responsible AI expectations and governance frameworks, the next step is to establish selection criteria for responsibly built AI technologies. These criteria should integrate the existing standards and focus on unique elements tied to generative AI, such as transparency of origin, accuracy of outputs, training data licensing, bias mitigation, and cultural localization.
According to the research findings, the top criteria organizations use when assessing generative AI technology include:
1. Training data evaluation (72%)
2. AI use disclosures (63%)
3. Harm mitigation (60%)
4. Transparency of origin (55%)
5. Bias mitigation (50%)
Organizations should develop targeted selection criteria that align AI solutions with both strategic business objectives and responsible AI principles. These criteria emphasize:
Transparency
Ensuring that AI processes are explainable and traceable.
Accuracy
Maintaining high standards for data fidelity and predictive reliability.
Cultural localization
Adapting AI systems to respect diverse cultural and regional contexts.
Bias mitigation
Actively reducing biases to support fair and equitable AI outcomes.
Assess summary
Step 1: Evaluate organizational readiness.
- Define and communicate the company’s standards on the responsible use of technology, inclusive of AI.
- Ensure that the CIO and/or a cross-company committee reviews current systems and business processes to identify areas that will benefit most from responsible AI adoption.
- Aggregate input from internal business and functional leaders on additional use cases to consider for responsible AI adoption.
Step 2: Select AI technology that is built responsibly.
- Review existing governance standards across privacy, security, accessibility, and legal for AI considerations.
- Develop selection criteria that integrate the previously established standards, meeting responsible AI expectations by focusing on transparency, accuracy, bias mitigation, cultural localization, and compliance.
- Evaluate and select AI technologies that best meet the established criteria and business needs, documenting the decision-making process.
2. Pilot: Identifying and piloting high-impact use cases.
The pilot phase bridges AI experimentation with operational reality. This phase allows key stakeholders to evaluate the technology’s performance in terms of how it aligns with business objectives and responsible AI goals. It goes beyond testing for technological feasibility, focusing on enabling key leaders and stakeholders to engage directly with the technology in meaningful ways. It is about enabling people to work with AI, make informed decisions on its scalability, and ensure that it meets ethical, operational, and regulatory standards.
Piloting gives your organization the chance to stress-test AI systems in context, helping you understand where accountability assessments and transparency documentation may be needed, as well as the performance of new capabilities relative to expectations. By documenting insights and gathering actionable learnings, organizations establish a roadmap for scaling AI responsibly, creating a foundation that supports both immediate and long-term objectives.
Identify and prepare priority use cases.
Developing a compelling AI business case includes engaging key stakeholders and front-line employees to provide a holistic view of AI’s potential. By involving those who will directly interact with the technology early on, you can identify high-impact use cases where AI delivers tangible benefits, such as in marketing content creation, coding, workflow automation, and data management.
- Be specific. Focus on processes, not roles: Rather than framing use cases around specific roles (for example, ‘AI for developers’), focus on processes that AI can streamline and improve, such as ‘AI-assisted coding for automating routine code reviews and error detection’.
- Establish measurable metrics for usage and cost savings: While ROI is important, AI pilots should also emphasize broader returns such as productivity, speed to market, employee satisfaction, and enhanced customer experiences — metrics often referred to as ‘Return on Experience’.
- Elevate impact beyond immediate gains: Position the AI initiative as a driver of long-term transformation. Use cases should not only address immediate operational needs but also align with strategic goals like digital transformation or competitive differentiation.
Pilot against business and responsible AI criteria.
Evaluating pilots through a dual lens — business performance and responsible AI criteria — ensures that AI initiatives meet both operational goals and responsible AI benchmarks. Over half (54%) of the organizations surveyed have established an acceptable risk level for their priority use cases. Ensure your organization documents these evaluations systematically, capturing learnings to inform future AI projects. This structured approach builds a robust foundation for scalable AI implementations.
Actions when piloting:
- Set business and responsible AI benchmarks: Define both operational goals (for example, productivity and cost savings) and responsible AI metrics (for example, transparency and fairness), as illustrated in the sketch after this list.
- Establish risk thresholds: Set risk parameters and create a framework for ongoing assessments to manage and mitigate AI-related risks effectively.
- Capture and share learnings: Develop a standardized process for documenting pilot outcomes to support transparency and guide future scaling efforts.
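To make the first two actions concrete, the sketch below shows one way benchmark targets and risk thresholds could be codified and pilot outcomes evaluated against them. It is a minimal Python illustration; the metric names and threshold values are assumptions for demonstration, not recommendations from this guide.

```python
# A minimal sketch of codified pilot benchmarks and risk thresholds.
# Metric names and target values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Benchmark:
    name: str                      # metric being tracked
    target: float                  # value the pilot should reach
    higher_is_better: bool = True  # False for risk-style metrics

    def passed(self, observed: float) -> bool:
        """Check an observed pilot result against the target."""
        if self.higher_is_better:
            return observed >= self.target
        return observed <= self.target

# Business and responsible AI benchmarks are evaluated side by side.
benchmarks = {
    "hours_saved_per_week": Benchmark("hours_saved_per_week", target=10.0),
    "output_accuracy": Benchmark("output_accuracy", target=0.95),
    # Responsible AI thresholds: lower observed values are better here.
    "bias_incident_rate": Benchmark("bias_incident_rate", target=0.01,
                                    higher_is_better=False),
}

def evaluate_pilot(results: dict[str, float]) -> dict[str, bool]:
    """Return a pass/fail record per metric for the pilot report."""
    return {name: bm.passed(results[name])
            for name, bm in benchmarks.items() if name in results}

# Example pilot outcome, documented to inform future scaling decisions.
report = evaluate_pilot({
    "hours_saved_per_week": 12.5,
    "output_accuracy": 0.93,
    "bias_incident_rate": 0.005,
})
print(report)
# {'hours_saved_per_week': True, 'output_accuracy': False, 'bias_incident_rate': True}
```

Evaluating business and responsible AI metrics in the same pass keeps the dual-lens evaluation from splitting into two disconnected reports, and the resulting pass/fail record doubles as the documentation that informs future AI projects.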
Pilot summary
Step 1: Identify priority use cases.
- From existing business priority use cases, identify 2-3 pilots where AI ethics and responsibility are important.
- For these use cases, establish metrics and thresholds to track both business and responsible AI performance.
Step 2: Pilot against business and responsible AI criteria.
- Execute pilots with additional technical, business, and responsibility validation and testing as needed.
- Evaluate pilot outcomes against pre-defined metrics and thresholds for business and responsible AI expectations, documenting learnings into future assessment and testing approaches.
- Advance to procurement/adoption based on pilot outcomes and insights.
3. Adopt: Integrating AI responsibly across the organization.
The adopt phase marks the transition from pilot to organization-wide integration. This phase focuses on transitioning from experimental applications to fully operational AI systems — deploying AI responsibly while embedding the lessons learned from pilots into real-world practices.
In this stage, your employees take active ownership of AI’s role within their existing workflows. By building on their practical experience from the pilot phase, employees are equipped to drive AI adoption.
Train and enable the organization.
Scaling AI effectively requires a knowledgeable workforce that understands both the capabilities of AI and the ethical responsibilities that come with its use. Tailored training programs should help employees across roles and departments leverage AI tools. Many organizations (89%) recognize the importance of training, with nearly two-thirds including responsible AI guidelines. Training should integrate technical capabilities with principles of accountability, transparency, and regulatory compliance.
Actions when training and enabling:
- Align training with governance: Incorporate responsible AI guidelines into training materials to ensure that employees are aware of compliance, risk management, and transparency requirements.
- Customize training to roles: Develop tailored training modules addressing the needs of specific functions, including best practices for both business and responsible AI.
Deploy with responsibility in mind.
Adopting AI at scale requires establishing a governance framework that ensures responsible use. Ensure you align your AI initiatives with existing governance policies, while continuously refining them to meet evolving regulatory, operational, and responsible AI standards.
Actions for deploying responsibly:
- Foster a culture of accountability: Encourage teams to understand the broader impact of AI on both operational workflows and stakeholder trust, instilling a sense of responsibility at every level.
- Continuously evolve training: As AI governance frameworks evolve, update training programs to reflect new best practices and regulatory changes.
Adopt summary
Step 1: Train and enable the organization.
- Develop use case-specific guidelines for how and when to use AI with responsible AI principles.
- Deploy comprehensive training to relevant groups as solutions are rolled out, inclusive of best practices.
- Celebrate and share wins across the organization.
Step 2: Deploy with responsibility in mind.
- For each technology, codify core adoption requirements (for example, business impact, ease of integration, and risk mitigation).
- Work closely with business leaders to align on trade-offs, where needed.
- Embed key AI and responsibility considerations into existing governance frameworks (for example, access and control roles).
4. Monitor: Continuous oversight and improvement.
Monitor performance against business and responsible AI benchmarks.
Best-in-class monitoring of technology outcomes combines automated performance tracking with human expertise. While many organizations have adopted real-time monitoring tools to assess AI system performance, the effectiveness of these tools improves when human oversight is included. Human teams are better equipped to analyze data, spot risks, and make informed decisions about necessary adjustments.
While 69% of organizations use real-time monitoring tools, these are significantly more effective when combined with human judgment. Many organizations prioritize technical metrics, with 72% focusing on accuracy and 69% on ROI, yet responsible scaling also requires attention to ethical dimensions. Embedding human oversight ensures transparency and predictability, building trust internally and externally. Monitoring further enables proactive bias detection, with 49% of organizations tracking this metric, and 33% monitoring for harmful outputs. Without consistent, proactive monitoring, AI systems could compromise both integrity and trust. By refining performance monitoring to address both technical and ethical risks, you protect your brand, build user confidence, and lay a resilient foundation for scaling AI responsibly.
This collaboration between technology and people helps identify potential risks early, such as data inaccuracies, emerging biases, or compliance lapses.
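As a hedged illustration of this human-in-the-loop pattern, the sketch below combines automated threshold checks with escalation to a human review queue rather than automatic correction. The metrics, thresholds, and queue are hypothetical placeholders, assuming a simple Python-based monitoring job.

```python
# A minimal human-in-the-loop monitoring sketch. The metrics, thresholds,
# and escalation queue are hypothetical placeholders, not a real API.
from datetime import datetime, timezone

# Automated thresholds: breaches are escalated to people rather than
# auto-corrected, keeping human judgment in the loop.
THRESHOLDS = {
    "accuracy": ("min", 0.95),               # technical metric
    "bias_score": ("max", 0.05),             # ethical metric
    "harmful_output_rate": ("max", 0.001),   # ethical metric
}

review_queue: list[dict] = []  # stand-in for a ticketing or alerting system

def check_metrics(observed: dict[str, float]) -> None:
    """Compare observed metrics to thresholds; escalate breaches to humans."""
    for metric, (kind, limit) in THRESHOLDS.items():
        value = observed.get(metric)
        if value is None:
            continue
        breached = value < limit if kind == "min" else value > limit
        if breached:
            review_queue.append({
                "metric": metric,
                "observed": value,
                "limit": limit,
                "flagged_at": datetime.now(timezone.utc).isoformat(),
            })

check_metrics({"accuracy": 0.91, "bias_score": 0.02})
for item in review_queue:
    print(f"Human review needed: {item['metric']} = {item['observed']} "
          f"(limit {item['limit']})")
```

The design choice worth noting is that threshold breaches are routed to people rather than silently remediated, preserving the human judgment that the survey data shows makes automated monitoring most effective.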
Ensure ongoing risk management.
Risk management in AI is a continuous process that should evolve alongside AI systems. Establishing a structured, cross-functional approach to AI risk management enables your organization to proactively address both business and reputational risks. Scheduled reviews should involve stakeholders from across the organization, including data scientists, business leaders, and legal/compliance officers, ensuring a thorough evaluation of both technical performance and responsible AI goals.
In practice, 60% of organizations involve data and governance teams in risk management, and 49% include AI committees, compliance, and legal teams — highlighting the need for cross-functional collaboration. This proactive approach keeps the organization aligned with both internal values and external expectations, building resilience that adapts to regulatory shifts. With 68% of organizations emphasizing responsible AI in risk management, comprehensive documentation and ongoing risk assessments are essential.
Key actions in risk management:
- Establish cross-functional risk reviews: Create regular risk assessments involving data scientists, compliance officers, and legal experts to identify emerging risks based on real-time data.
- Track and report consistently: Gather continuous feedback from employees and end-users to detect usability issues, biases, or unexpected behaviors.
Monitor summary
Step 1: Monitor performance.
- Define and track longer-term AI performance metrics, inclusive of business and responsibility goals.
- Establish ongoing review and discuss findings to continuously improve business performance while safeguarding your organization’s responsibility principles.
Step 2: Ensure ongoing risk management.
- Designate and empower roles to track evolving AI regulations and standards (for example, Singapore’s AI Verify and US Congressional proposals) and ensure company standards are updated accordingly.
- Develop a process for continuous identification and mitigation of risk associated with the use of AI.
- Update documentation as to how your organization is adhering to company standards.
ADOBE CASE STUDY
Internal use of generative AI.
At Adobe, we see generative AI as a transformational technology with the power to enhance human creativity, not replace it. We encourage responsible exploration of generative AI technology internally, aligned with our own AI ethics principles of accountability, responsibility, and transparency.
In June 2023, Adobe established a cross-functional internal working group sponsored by the CIO and CHRO to help employees navigate the exploration and use of generative AI within Adobe in a safe, responsible, and agile way. This group, working with leaders and subject matter experts across the company, focuses on guiding a thoughtful approach to grassroots employee experimentation by understanding the landscape of hypotheses for generative AI use, establishing appropriate guidelines, and streamlining experimentation. The initiative has formed four persona-based working groups representing generative AI use cases across Adobe and has established an intake process, a generative AI risk tolerance framework, and a blueprint for use case review that takes into account evolving ethical, security, privacy, and other legal considerations. A list of approved generative AI tools and models for specific use cases is also available, along with guidelines for employee use of generative AI. The rollout of vendor generative AI guidelines in March 2024 included training sessions on using generative AI and the features in chosen applications.
Implementation of this initiative has helped streamline the process for faster experimentation, scaled application where possible, and enabled assessment of the company-wide generative AI landscape. Adobe continues to foster shared learnings and insights across the business to create a collaborative ecosystem for collective exploration. The program continues to evolve with the expansion of generative AI in our own applications, the proliferation of generative AI technology and models, and evolving legal and regulatory guidance. Experimentation is reviewed and monitored, and the team is developing post-approval experiment tracking, including for experiments that go into production at scale.
Best practices for responsible AI adoption.
To adopt, monitor, and optimize AI systems responsibly, your organization should focus on several operational areas: providing comprehensive employee guidance, rigorously evaluating vendors, and establishing robust AI governance levers. By doing so, you can ensure that your AI initiatives not only meet evolving regulatory standards but also build on existing governance and risk-management work while aligning with practices that foster trust, transparency, and accountability.
This section outlines practical steps to embed these best practices into your day-to-day operations.
Employee use guidance.
Tailoring AI usage guidelines to fit the specific needs and risks of your organization is essential for ensuring responsible deployment. These guidelines should help employees navigate regulatory standards and governance protocols, aligning AI technologies with commitments to data security, transparency, and accountability.
Data sensitivity:
Clearly specify when data processing should occur locally or under strict access controls to prevent unauthorized access. This also means refraining from using prompts that could generate or manipulate sensitive outputs. This guideline protects proprietary information and maintains compliance with data privacy regulations.
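As one hedged illustration of how this guideline might be operationalized, the sketch below screens a prompt for obviously sensitive patterns before it is sent to an external generative AI tool. The patterns are illustrative assumptions only and no substitute for your organization’s own data classification and data loss prevention tooling.

```python
# A minimal pre-prompt screening sketch. The patterns are illustrative
# assumptions; a production system would rely on the organization's own
# data classification and DLP tooling.
import re

SENSITIVE_PATTERNS = [
    re.compile(r"(?i)\b(confidential|internal only)\b"),  # classification labels
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                 # US SSN format
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),                # likely payment card number
]

def is_safe_to_send(prompt: str) -> bool:
    """Return False if the prompt appears to contain sensitive data."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

prompt = "Summarize this CONFIDENTIAL roadmap for the exec review."
if not is_safe_to_send(prompt):
    # Per the guideline above, keep sensitive processing local or access-controlled.
    print("Blocked: route this task to a locally hosted, access-controlled tool.")
```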
Transparency in AI usage:
Readily disclose the involvement of AI, such as when it is used to create internal documents, customer-facing interfaces, or external communications. This practice fosters accountability and maintains trust in the authenticity and reliability of AI-generated content, maintaining a company’s brand and reputation.
Account management policies:
Establish explicit policies for the use of generative AI tools that require account registration, including defining whether organizational email accounts can be used, specifying which tools are approved for business purposes, and discouraging the use of personal accounts for work content. This safeguards against unauthorized use and helps maintain alignment with the organization’s broader information security program.
By tailoring these guidelines to your organizational context, employees can navigate the use of generative AI tools confidently and responsibly, contributing to an environment where innovation and integrity go hand in hand.
Vendor evaluation: Sample questions.
Evaluating AI vendors requires asking informative questions, and knowing what answers to look for, to ensure that their systems adhere to responsible AI, legal, and regulatory standards. The following questions are designed to provide a baseline assessment, enabling organizations to make informed decisions about potential partnerships and mitigate risks associated with AI adoption.
AI governance levers.
Implementing robust AI governance ensures that AI systems are developed, deployed, and monitored in a way that aligns with organizational values and regulatory standards. There are many regulatory standards in place, including the European Union AI Act and frameworks such as Singapore’s AI Verify. In the U.S., companies should follow comprehensive state privacy laws and the NIST AI Risk Management Framework, as the latter is a probable foundation for future regulation.
The following governance levers can help organizations manage AI risks and enhance transparency, accountability, and security.
Responsible implementation builds responsible innovation.
Maximizing the potential of AI in a company requires sourcing responsibly built technology, establishing clear usage guidelines, developing dedicated training, and deploying strong governance. This approach drives business value and ensures that AI initiatives meet regulatory expectations and uphold responsible implementation standards, embedding a culture of responsible AI throughout the organization.
Consistent oversight and adaptation keep AI initiatives on track. By defining performance metrics, conducting regular evaluations, and proactively managing risks, your organization can stay ahead of regulatory shifts and maintain the integrity of your AI projects.
Moving into the future with AI, this framework equips your organization to take a leading role within the responsible AI landscape. With a focus on impact, integration, and integrity, this approach paves the way for sustainable innovation and enduring success.