AI raises the ceiling on growth. Data determines how high you go.

Emily McReynolds

03-30-2026

We usually notice AI problems at the point of use. But the outcome is determined much earlier — at the data layer that feeds AI-powered decisions.

It’s tempting to focus on the intelligence of the system, but AI acts more like an amplifier: it learns from your enterprise data and scales whatever that data contains. When the foundation is fragmented or low quality, the output is too.

You see it the moment AI moves into live workflows. Personalisation nudges customers to “upgrade” right after they’ve purchased. Segments may look right in reporting, but they misfire in activation because consent and identity aren’t consistent across platforms. Content automation reuses assets beyond their approved market and you notice only after launch.

The costs show up fast. Gartner predicts that through 2026, organisations will abandon 60% of AI projects for this exact reason. Teams that invested months in pilots find they can't scale, and the distance between effective AI adopters and everyone else widens.

Closing this gap requires a shift in discipline: moving from treating data as a static asset to treating it as the primary governor of AI success.

Three actions organisations must get right for AI-ready data.

The gap between recognising the data problem and fixing it can often be bridged by knowing where to focus. The three actions below provide that clarity.

  1. Map your data landscape before AI tries to navigate it.
  2. Label data with context AI needs to act intentionally.
  3. Enforce hygiene through automated gates in AI workflows.

Each of these actions addresses a specific condition AI requires to operate reliably — and each can be approached incrementally, starting with your highest-priority use cases. These aren't new capabilities for most organisations. Marketing and IT teams already work on visibility, labelling and governance in some form. What changes with AI is the standard those capabilities must meet.

Let’s look at each in a bit more detail.

Map your data landscape before AI tries to navigate it.

When leaders say they haven't got the right data for AI, they're often misdiagnosing the problem. In most organisations, the issue isn’t that the data doesn’t exist. It’s sprawl.

Data is scattered across data lakes, asset managers, marketing platforms, shared drives and partner systems. Some data lives in formal tools, while some sits in processes that grew organically and now support day-to-day work. What’s missing is clear visibility into where that data is, how it’s connected and what AI can see and use.

The most effective way to gain that visibility is often to work backward from a specific AI-driven outcome, rather than attempting a comprehensive data inventory.

A global consumer brand applied this thinking as it scaled AI-powered personalisation across markets. By focusing first on the customer insights needed to deliver relevant experiences, the organisation was able to identify which data sources mattered most, where information was fragmented across regions and what needed to be connected before AI could operate reliably.

This outcome-driven approach means being able to answer three questions: where does the relevant data live, how is it connected across systems, and what can AI actually see and use?

This exercise helps build a shared understanding across teams that may be using the same terms differently. For instance, marketing, IT, analytics and engineering might all refer to a “customer profile,” but mean different things. One team thinks in terms of customer relationship management (CRM) records and segments. Another thinks event streams and behaviours. A third focuses on identifiers and permissions. AI doesn’t resolve those differences on its own.

Without alignment on what the data represents, attempts to connect systems for AI break down before technology even gets involved.

The goal isn’t perfect visibility. It's reducing blind spots enough that teams understand what data AI can access and trust. That means documenting at a high level where data lives today and how it supports work.

Label data with the context AI needs to act intentionally.

Visibility tells you where your data is. Labelling tells AI what it means.

Most organisations already label their data, but these systems were designed to help people find things, not to help AI make decisions.

A content asset tagged “Summer Campaign 2025” is easy for a human to retrieve, but an AI tool doesn't know whether it's aspirational or practical, formal or conversational, meant for luxury travellers or families. Without that context, AI can act, but it can't act intentionally.

This “context gap” is a primary reason Adobe’s 2026 AI and Digital Trends research* found only 43% of organisations feel their data quality and accessibility are adequate for AI adoption. To close it, leaders must recognise that the depth of context required changes based on the desired AI behaviour.

  1. AI-assisted search needs labels that help it to find the right asset: author, date, topic and brand category.
  2. AI-powered execution needs labels that help it tailor experiences to the moment: audience intent, tone, emotional context and language.
  3. AI-guided action needs labels that help AI act safely without constant review: usage rights, regional restrictions, expiration and approval status.
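To make the three tiers concrete, here is a minimal sketch of what a tiered label set for a single asset might look like. The field names and values are purely illustrative assumptions, not a prescribed schema from any particular platform:

```python
# Illustrative only: a tiered label set for one content asset.
# All field names and values are hypothetical, not a real schema.
asset_labels = {
    # Tier 1 - AI-assisted search: helps AI find the right asset
    "search": {
        "author": "emea-creative-team",
        "created": "2025-04-12",
        "topic": "summer-travel",
        "brand_category": "leisure",
    },
    # Tier 2 - AI-powered execution: helps AI tailor the experience
    "execution": {
        "audience_intent": "inspiration",
        "tone": "conversational",
        "emotional_context": "aspirational",
        "language": "en-GB",
    },
    # Tier 3 - AI-guided action: helps AI act safely without review
    "action": {
        "usage_rights": "adaptation-allowed",
        "regional_restrictions": ["US", "CA"],
        "expires": "2025-09-30",
        "approval_status": "approved",
    },
}
```

The point of the tiering is that each new AI behaviour only requires the next layer of context, so labelling effort can grow with the use case rather than all at once.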

But relabelling your entire legacy library at once may not be realistic. You can use AI to assist with labelling, or follow the same approach we discussed for visibility: focus on a few high-impact outcomes to guide your efforts.

For example, if you are pursuing AI-assisted content creation, you might prioritise adding labels for approval status and allowable use cases. Some content might be marked as “reference-only,” while other assets are cleared for adaptation or automated reuse.

This allows AI to meet brand and compliance expectations without constant human review. To help AI perform reliably, anchor your labelling efforts in the three tiers outlined above.

This approach ensures that the data feeding your highest-priority AI outcomes has the context those systems need to perform reliably. Then as use cases evolve, improvements stay targeted and investment stays tied to value.

Enforce hygiene through automated gates in AI workflows.

Data visibility and labelling can give you clarity, but data hygiene determines whether that clarity holds as you scale.

Many organisations have guidelines in place. They point to governance policies, training programmes and documented best practices, but the problem is consistency. What’s written down often isn’t what happens when teams are moving fast and data gets reused in new contexts. And without enforcement, governance stays aspirational.


The policy-practice gap.

  • 55% say they have strong AI governance policies.
  • 43% say those policies are routinely followed.
  • 12% consider enforcement mechanisms essential to maintaining trust.

Source: Adobe’s 2026 AI and Digital Trends research.


This divergence shows up in predictable ways: user consent isn’t captured consistently, permissions don’t always travel with the data, and data access expands without oversight.

A global consulting organisation faced this issue as its consent data lived across regional systems, forcing teams to manually chase down permissions before each client outreach. Centralising consent and enforcing usage rules through automated gates made activation more consistent and improved engagement.
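As a simplified illustration of the kind of automated gate described above, a centralised consent check might run before every activation instead of relying on teams to chase permissions manually. The record fields, channel names and rules here are invented for the sketch, not a real consent API:

```python
# Hypothetical sketch of a centralised consent gate run before activation.
# Record shapes and channel names are illustrative, not a real API.
from datetime import date

consent_records = {
    "cust-001": {"email": True, "sms": False, "expires": date(2026, 6, 30)},
    "cust-002": {"email": True, "sms": True, "expires": date(2025, 1, 15)},
}

def can_contact(customer_id: str, channel: str, today: date) -> bool:
    """True only if consent exists, covers the channel, and is current."""
    record = consent_records.get(customer_id)
    if record is None:
        return False  # no consent on file: default to no contact
    if today > record["expires"]:
        return False  # consent has lapsed
    return record.get(channel, False)

# The activation step filters its audience through the gate automatically,
# so expired or missing permissions never reach outreach.
audience = [c for c in consent_records if can_contact(c, "email", date(2026, 3, 1))]
```

Because the check sits in the workflow itself, the rule is applied the same way for every campaign, which is the consistency the manual process couldn't guarantee.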

Regulated industries often have an advantage when it comes to implementing automated decisioning. Over time, they have been forced to build the tighter access controls and clearer consent practices that AI governance now requires across all sectors.

The takeaway is simple: if hygiene depends on people remembering the right thing to do every time, it won’t scale with AI. Effective hygiene requires moving beyond guidance into enforcement through gates built directly into workflows. A few practical guardrails illustrate what this looks like in practice:

  1. Validation at creation: Systems won’t accept incomplete or improperly labelled data.
  2. Provenance tracking: Clear lineage of where data came from and how it’s been modified.
  3. Testing protocols: Proactive assessment for quality issues, bias or harmful outputs before deployment.
  4. Feedback mechanisms: Structured channels to capture quality issues when they surface in practice.
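The first two guardrails above can be sketched as a simple ingestion gate. This is a rough sketch under stated assumptions, where the required labels, record shape and provenance field are all invented for illustration:

```python
# Hypothetical sketch of "validation at creation": the system refuses
# records that arrive without the labels and lineage AI workflows need.
REQUIRED_LABELS = {"usage_rights", "approval_status", "expires"}

def validate_at_creation(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    missing = REQUIRED_LABELS - record.get("labels", {}).keys()
    if missing:
        problems.append(f"missing labels: {sorted(missing)}")
    # Provenance tracking: every record must say where it came from.
    if not record.get("provenance"):
        problems.append("no provenance recorded")
    return problems

def ingest(record: dict, store: list) -> bool:
    """Accept the record only if it passes validation; otherwise reject it."""
    if validate_at_creation(record):
        return False  # rejected at the gate, not discovered downstream
    store.append(record)
    return True
```

Rejecting incomplete data at the door is the design choice that makes the right behaviour the easiest behaviour: nobody has to remember the policy, because the workflow won't proceed without it.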

These gates reduce reliance on memory and manual enforcement. They limit regression over time, reduce downstream fixes and support more consistent AI behaviour. Quality becomes automatic when the right behaviour is also the easiest behaviour.

Making data work for AI at scale.

An AI advantage doesn’t come from picking the perfect model or rolling out a single standout pilot. It comes from the conditions you create so that AI can work reliably at scale.

That’s what visibility, labelling and hygiene really give you. Don’t think of these as technical boxes to tick. When you get these three things right, AI can act on data that's unified across systems, interpret context with every asset and operate within governance that's enforced as a natural part of daily workflows.

Moving from these strategic principles to a live, operational reality is where the right architecture becomes a necessity. For a broader view of how organisations are navigating this entire shift, I recommend exploring The AI Inflection Point.

In my next post, I’ll tackle the operational side of this transition: how to navigate the adoption curve and organise your teams.

*Note: Statistics referenced in this post are drawn from Adobe’s 2026 AI and Digital Trends research. Select findings are published in the Adobe 2026 AI and Digital Trends Report.

Emily McReynolds is Head of Global AI Strategy at Adobe, where she focuses on enterprise AI adoption. She has over 15 years of experience in data governance, machine learning and AI across technical research and industry, including at Microsoft and Meta.

Having deployed AI at multiple companies, she understands the challenges an organisation encounters in rolling out AI and provides guidance on AI implementation. Emily started coding in HTML and taught people to use computers back when we used floppy discs.
