Switching to AI gets you early wins. Designing for the adoption curve gets you ROI.

AI is already being used in your organization, but its integration remains uneven. Some teams are ahead and celebrating wins. Others tried an AI tool, didn’t love it, and went back to the old way. And some are still experimenting, waiting for clearer guidance before they commit.

Most leaders are now trying to manage this adoption gap while also being pushed to show results as AI budgets grow. The pressure is coming from the top. According to BCG, companies will double their AI investments this year, with 90% of CEOs believing AI agents will enable measurable ROI in 2026.

That urgency creates a problem: leaders push for speed and access without clarifying how, where, or when teams should use AI. Teams are encouraged to adopt AI but are left to figure out the details themselves. Adobe’s 2026 AI and Digital Trends survey found that 57% of organizations agree AI is changing roles and workflows faster than employees can adapt.

The result is predictable. People use AI in their own ways to keep moving, and workflows fragment. This is where both risk and ROI start to suffer. Without a shared way of working, teams can’t trust what’s being produced or scale what’s going well. So even AI wins stay isolated instead of compounding.

If you want to move from uneven adoption to repeatable, measurable impact, you need to organize marketing and creative teams around AI in a way that makes usage consistent and accountable. That starts with designing for the adoption curve.

Find your place on the AI adoption curve to move everyone forward.

Many leaders treat AI adoption like a switch they can flip to ‘on’. You pick the tools, roll out access, and assume the organization will catch up.

In reality, AI adoption follows a curve. Early wins come fast, then most teams hit a messy middle where they’re willing but unsure how to use AI safely and effectively in real workflows. What looks like resistance or reluctance is often a lack of confidence.

Knowing where teams are on this arc of change helps leaders design the right next steps instead of expecting everyone to move at the same pace. Moving the marketing and creative teams forward means creating the right conditions through three core shifts:

  1. Redesign work so decision speed keeps up with production speed.
  2. Prevent unofficial AI use by making the safe path easy to follow.
  3. Build AI fluency to turn a designed process into a lived process.

Let’s examine each shift in more detail.

Redesign work so decision speed keeps up with production speed.

AI doesn’t just change how fast work gets done. It changes how work is distributed. Instead of spending time resizing assets or adapting copy into multiple snippets, teams can offload more of that production work to AI. That means creatives can focus on the next campaign, while copywriters can spend more time shaping the next strategic piece of content.

Once the barrier to execution drops, the leadership challenge changes. The key question becomes whether the organization can absorb an increase in content as AI multiplies drafts and variants, without losing consistency in quality, brand standards, and decision-making.

If the workflow stays the same, reviews and approvals get overwhelmed. The same senior people who used to create a few polished pieces each week are now reviewing dozens of AI-assisted drafts from across the organization. As a result, work moves faster at the edges but slows down at the center.

So, you need to redesign that workflow, then organize your team around it. This means mapping the full flow and rebuilding it for AI-assisted work. You'll want to make where AI fits explicit: which steps it handles, which stay human, and where the handoffs happen. This works best when CMOs and CIOs figure it out together, so the workflow is both creative and scalable.

As this workflow evolves, roles naturally shift into new areas of focus:

  • Experts move upstream: Your most experienced people should move upstream to codify their expertise into the templates, guidelines, and prompts that the rest of the team uses. When those guardrails are clear, teams can move without waiting for approval every time.
  • Managers redesign how work happens: In an AI-driven model, a manager’s value is in maintaining the system design. They are responsible for ensuring the team is using the right sanctioned tools rather than improvising with personal accounts, clarifying where decision-making power sits, and removing friction where the technology and human touchpoints intersect.
  • New roles emerge to manage scale: As AI democratizes execution, organizations need people to orchestrate AI-assisted workflows, maintain content systems across channels, and ensure outputs stay aligned with brand standards.

Prevent unofficial AI use by making the safe path easy to follow.

The instinct for many leaders is to view safety and compliance as the brakes on their AI strategy, but they are what allow your team to hit the accelerator. Without clear boundaries, teams either hesitate because they’re afraid of crossing an invisible line, or they experiment freely with unapproved tools and workarounds. Either way, your AI implementation is less likely to scale.

When you provide a framework where employees can trust AI outputs, you are giving them the confidence to move at the speed of the technology.

To build this model, guardrails must be practical and embedded in daily workflows in three ways.

  1. Build on your context: As I discussed in a previous post, your AI is only as accurate as the data powering it. Building trust starts with grounding AI in your own institutional knowledge — your brand guidelines, content libraries, product information, and customer insights. When AI draws from your specific context, teams can verify where answers come from and trust the output is relevant.
  2. Set clear input boundaries: Teams need clear guidance on what’s safe to use in prompts, what must stay out, and how to handle sensitive or proprietary information. Give simple, role-relevant examples so people don’t have to guess under deadline. The goal is to make safe prompting the default, so people don’t have to make a judgment call every time.
  3. Use risk-tiered output reviews: When AI generates drafts at scale, the key question is how outputs get validated before they go live. You can't treat every output with the same scrutiny. Instead, define what can move with lightweight and automated checks (routine, template-based work) and what requires deeper review (high-visibility campaigns, regulated claims, sensitive audiences).
Diagram: a risk-tiered review for AI content. Low-risk content gets automated checks plus spot reviews; medium-risk work gets expert checks and stakeholder review; high-risk work needs a deep multi-stakeholder review.
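The risk-tiered review above can be sketched as a simple routing rule. This is an illustrative example only — the tier names, the conditions (`regulated_claims`, `sensitive_audience`, `high_visibility`), and the review steps are hypothetical placeholders; a real implementation would use whatever risk signals and review stages your organization defines.

```python
# Illustrative sketch of risk-tiered output review (all names are assumptions).

RISK_RULES = [
    # (condition, tier) — checked in order; first match wins
    (lambda item: item.get("regulated_claims") or item.get("sensitive_audience"), "high"),
    (lambda item: item.get("high_visibility"), "medium"),
    (lambda item: True, "low"),  # routine, template-based work
]

REVIEW_PATHS = {
    "low": ["automated_checks", "spot_review"],
    "medium": ["expert_check", "stakeholder_review"],
    "high": ["deep_multi_stakeholder_review"],
}

def route_for_review(item: dict) -> list[str]:
    """Return the review steps an AI-generated item must pass before going live."""
    for condition, tier in RISK_RULES:
        if condition(item):
            return REVIEW_PATHS[tier]
    return REVIEW_PATHS["high"]  # fail safe: anything unclassified gets the deepest review

# A routine social post takes the lightweight path;
# a campaign with regulated claims is escalated.
print(route_for_review({"high_visibility": False}))
print(route_for_review({"regulated_claims": True}))
```

The point of the sketch is the design choice: the tiers are defined once, centrally, so individual contributors never have to guess how much scrutiny a given output needs.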

Build AI fluency to turn a designed process into a lived process.

Even with the right tools and guardrails in place, AI initiatives stall when teams lack the skills to operate inside that new system. Many organizations focus on deployment and assume capability will develop naturally as people experiment. When they do invest in enablement, it often shows up as one-off training sessions and generic workshops, which feel like progress but rarely change day-to-day behavior.

AI fluency requires developing practical judgment about when to use AI and when not to, how to guide it toward usable outputs, and how to spot problems before they create rework. As AI handles more execution work, the premium shifts to human judgment, and these capabilities determine whether AI amplifies what teams can do or just adds volume.

Start with role-specific and use-specific enablement. Generic programs fail because different roles need different capabilities. A content strategist building campaign frameworks needs different fluency than a designer creating mood boards. The right approach is to structure enablement around how each role works, which includes:

  • Training for workflows, not features: Teach the specific moments where AI fits into someone's process. Show a designer how AI accelerates concept exploration. Show a marketer how it helps draft campaign briefs.
  • Showing what good looks like: Provide clear examples, before-and-after outputs, and starting points like approved prompt patterns. People learn faster when they can see what quality looks like and compare their work against it.
  • Building judgment into the skill: Fluency includes knowing when to escalate, when to verify, and what risks to watch for. Teach people to recognize when AI output needs refinement, when it's introducing bias or inaccuracy, and when the task requires domain expertise AI doesn't have.
  • Creating a peer champion network: Beyond formal training, make peer learning part of the system. When teams see colleagues using AI on actual tasks, sharing tips, and working through problems, adoption accelerates. Give early adopters a clear role in helping others build confidence and apply what they learn.

Beyond role-based training, treat enablement as continuous. Run live sessions where teams work through real scenarios, and support them with a resource hub of approved prompts, templates, and examples people can use under time pressure. Build communities where teams share what works and review outputs together, and keep a feedback loop so training evolves based on where people get stuck. Over time, fluency becomes a shared capability across the organization, not something limited to a few power users.

Move from AI rollout to sustained results.

If there’s one takeaway from this post, it’s that scaling AI is an operating decision. It also helps to be clear-eyed about what happens in the real world. AI doesn’t automatically free people up for more strategic work. In many organizations, it increases expectations and adds new coordination and review work. A recent Harvard Business Review study shows that AI adoption intensifies work, leading to workload creep.

That makes the system you build even more important, because the goal is not more activity. It’s results you can repeat and scale.

To manage that, you need to measure adoption holistically and look beyond licenses and logins. Track usage and outcomes at both team and enterprise levels, and pair quantitative signals with qualitative ones like workflow fit, usability, and perceived value.

Adobe’s 2026 AI and Digital Trends survey found that only 44% of organizations have a measurement framework for generative AI, and even fewer (31%) for agentic AI. Without this, it's difficult to know where teams are stuck or demonstrate ROI. For more insights on AI readiness, read the full survey report.

When teams have the right system, they can move faster and scale usage. That success introduces a new operating challenge: how do you maintain quality, consistency, and compliance when the volume of work increases across the organization? In my next post, I'll show how marketing and creative teams can build risk management into their AI operations without slowing the progress they've made.

Emily McReynolds is Head of Global AI Strategy at Adobe, where she focuses on enterprise AI adoption. She has over 15 years of experience in data governance, machine learning, and AI across technical research and industry, including at Microsoft and Meta.

Having deployed AI at multiple companies, she understands the challenges an organization encounters in rolling out AI and provides guidance on AI implementation. Emily started coding in HTML and taught people to use computers back when we used floppy disks.
