How agentic AI is fueling smarter testing and growth.

The most advanced brands today aren’t just running more tests — they’re orchestrating AI agents to guide faster decisions and demonstrate clear ROI.

Experience optimization is a core differentiator for brands, and expectations are rising across product, growth, and experimentation goals and KPIs. Teams are being asked to move faster, prove impact sooner, and clearly connect testing outcomes to revenue and business growth.

As data complexity increases and AI reshapes how decisions are made, many organizations are realizing that traditional approaches simply aren’t enough — this represents a major opportunity to infuse AI-led intelligence, automation, and productivity into how experimentation is planned, executed, and scaled.

It’s become clear that experimentation now requires more than better tooling — it demands embedded intelligence that can understand, adapt, and guide teams. Agentic and generative AI are powering this new layer of reasoning. They enable autonomous systems that surface contextual insights on what is and isn’t working. They can also generate new test ideas, interpret results, and activate adaptive experiments with human-in-the-loop workflows. This represents a major shift from isolated A/B tests toward an evolving experimentation ecosystem that spans the business, continuously learning from each customer interaction and orchestrating smarter optimizations.

This guide explores five critical challenges organizations face when scaling experimentation in the next era of growth — from how teams prioritize and align tests to how they operationalize adaptive experimentation at scale. Each section offers practical insights about how to use agentic and generative AI to make experimentation smarter, faster, and more connected to the outcomes that matter most.

CHALLENGE 1

Identifying how to prioritize team testing strategies effectively.

Many organizations invest heavily in experimentation with frequent testing. Yet meaningful outcomes can be challenging to uncover, buried in disparate systems and spreadsheets or hidden within the knowledge of siloed teams. As multiple business units ramp up their experimentation efforts across separate teams, the sheer volume of data grows fast — but actionable insights and proven wins are slow to surface. Teams running tests on content, offers, or engagement metrics end up spending too much time searching for what was tested and where, who owns it, and what the key learnings were from past experiments.

When experimentation is time-intensive and fractured, the momentum and impact of testing stall. Without a clear view of historical experiment details and learnings, teams end up relearning past outcomes, duplicating effort, and struggling to prioritize testing strategies or decide what action to take next. Data is abundant, yet insights and direction on strategy and ROI are scarce.

Strategy map

Created by Speero and inspired by Elena Verna’s growth levers and motions.

Growth levers (rows) mapped against growth motions (columns):

| Growth lever | Product-led | Marketing-led | Sales-led |
| --- | --- | --- | --- |
| Interest | % of progression or abandonment rate (e.g., viewed pricing or abandoned cart) | Landing page engagement | % of sessions that inquired |
| Acquisition | % of new visitor conversions or sign-ups | Cost per conversion (segmented) | % of sales or marketing qualified leads |
| Monetization | Revenue per visitor with AOV or units per transaction (free to paid tiers ratio) | Customer acquisition cost (segmented) | % of leads to customer |
| Retention | % of active users (ratio of new vs. returning customers) | Conversion of existing customers (segmented) | Customer renewal or churn rate |

By mapping experiments to measurable growth motions across product, marketing, and sales, brands can begin to align priorities for testing strategies across several growth levers to drive business impact.

Source: Speero and Elena Verna

With data and experimentation now pervasive, product and growth teams are emphasizing the importance of using AI to analyze test data at scale, reasoning across an extensive knowledge base of historical experiments and outcomes to surface what is and isn’t working and improve performance.

Here are the four steps to build a strong foundation for AI-led experimentation:

1. Centralize experiment data.

Bring all tests — past, live, and planned — into a shared experiment inventory and workspace. It is important for teams to organize and tag their experiments by owner, audience, KPI, and product surface. This indexing ensures every experiment across the business can be easily discovered and shared — including what’s already been tested, which tests were a success, and what was learned. This helps prioritize overall experimentation strategies effectively.
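
To make this indexing concrete, the sketch below shows what a minimal shared experiment record and lookup might look like in Python. The schema and field names are assumptions for illustration, not an Adobe data model.

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentRecord:
    """One entry in a shared experiment inventory (illustrative schema)."""
    experiment_id: str
    name: str
    owner: str                     # accountable team or individual
    status: str                    # "planned" | "live" | "completed"
    audience: str                  # targeted segment, e.g. "returning-visitors"
    primary_kpi: str               # e.g. "checkout_conversion_rate"
    product_surface: str           # e.g. "web-pdp", "mobile-onboarding"
    hypothesis: str
    tags: list[str] = field(default_factory=list)
    learnings: str = ""            # filled in after the test concludes

def find_prior_tests(inventory: list[ExperimentRecord], **filters) -> list[ExperimentRecord]:
    """Return past experiments matching every given attribute, so teams can
    check what was already tested before adding a duplicate to the backlog."""
    return [
        record for record in inventory
        if all(getattr(record, key, None) == value for key, value in filters.items())
    ]

# Example: what have we already learned about checkout tests on the web PDP?
# matches = find_prior_tests(inventory, product_surface="web-pdp",
#                            primary_kpi="checkout_conversion_rate")
```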

2. Use AI to uncover experiment insights.

Advancements in generative and agentic AI are shortening time to value by surfacing the data behind experiments that drive higher conversions or revenue lift. Historically, teams had to manually sift through data or engage a data scientist or statistician to run deeper analysis. That was the only way growth, product, and marketing teams could understand why an individual treatment or variant of an experiment worked best — based on audience, content, and the underlying test behaviors.

[Figure: Adobe Journey Optimizer Experimentation Accelerator dashboard showing experiment results and AI-generated optimization recommendations.]
Adobe Journey Optimizer Experimentation Accelerator uses statistical and generative AI models for AI experiment insights that show what works within experiments — highlighting impact on core metrics once an experiment has reached statistical significance.
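
Under the hood, declaring a winner typically comes down to comparing conversion rates between control and variant once enough traffic has accumulated. The source doesn’t detail the statistical models Experimentation Accelerator uses, so the following is a generic two-proportion z-test sketch in Python:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Compare conversion rates of control (a) and variant (b).
    Returns relative lift and a two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    lift = (p_b - p_a) / p_a
    return lift, p_value

# Toy numbers: 4.8% vs. 5.6% conversion on 10,000 visitors per arm.
lift, p = two_proportion_z_test(conv_a=480, n_a=10_000, conv_b=560, n_b=10_000)
print(f"relative lift: {lift:.1%}, p-value: {p:.3f}")  # p ≈ 0.011, significant at 0.05
```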

3. Connect test data and journey analysis.

Experiment analysis and feedback loops require a solid understanding of omnichannel behaviors across branded content, product pages, mobile app engagement, and each customer touchpoint. Brands are looking to attribute conversions, drop-offs, and behavioral changes directly to experiments running within different channels and steps of the overall customer journey. For example, experimentation, growth, and product teams can use a solution like Adobe Customer Journey Analytics to set up success metrics for test measurement within Adobe Journey Optimizer Experimentation Accelerator to perform deeper analysis.
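
At its core, this attribution is a join between experiment exposure logs and downstream journey events, the kind of analysis a solution like Customer Journey Analytics performs at scale. A minimal pandas sketch, with illustrative table and column names:

```python
import pandas as pd

# Illustrative exports: which users saw which variant, and what they did later.
exposures = pd.DataFrame({
    "user_id": [1, 2, 3, 4],
    "experiment_id": ["email-cta-01"] * 4,   # hypothetical experiment ID
    "variant": ["control", "treatment", "control", "treatment"],
})
journey_events = pd.DataFrame({
    "user_id": [1, 2, 2, 4],
    "event": ["web_visit", "web_visit", "purchase", "purchase"],
})

# Tie downstream conversions back to the variant each user was exposed to.
purchasers = journey_events.loc[journey_events["event"] == "purchase", "user_id"].unique()
exposures["converted"] = exposures["user_id"].isin(purchasers)
print(exposures.groupby("variant")["converted"].mean())
# On this toy data: control 0.0, treatment 1.0, i.e., downstream conversion by variant.
```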

4. Build a routine to improve experiment hygiene.

Greater efficiency and accuracy in AI-led experimentation outputs start with treating every test as a learning system, not a one-off activity. It’s important to clearly define, tag, and track hypotheses across experiments to support AI reasoning and insight validation. This creates a continuous growth loop where each test result informs the next hypothesis — accelerating the quality and pace of future test ideas.

With the rise of agentic and generative AI, experimentation is no longer a manual hunt for isolated wins. It can be a connected system with AI agents that continuously surface insights across all past and active experiments. This can reveal valuable causal patterns and cross-journey influences that are often missed or undiscovered. This new era of experimentation is being driven by insights from technology like Adobe solutions that fuel the next action — accelerating decision-making and transforming fragmented test data into a self-learning growth engine.

CHALLENGE 2

Ensuring hypotheses align closely with strategic growth goals.

Growth and experimentation teams are under constant pressure to deliver business impact — but too often, teams struggle to identify new, business-aligned hypotheses that truly grow the business. Many programs rely on individual ideas disconnected from broader business objectives like revenue growth, retention, or cross-channel engagement.

While experiments may run frequently, they often lack the contextual intelligence to answer why a particular variant worked — or how that learning should inform the next set of tests. Without visibility across journeys, campaigns, and audiences, teams can miss the deeper insights that connect experimentation to business strategy.

Adobe’s product and growth teams have shown that organizations using AI-led experimentation and analytics workflows can increase the average annual recurring revenue (ARR) impact by over 200% per experiment. Using agentic and generative AI to inform new test ideas, brands can quickly discover the highest ROI from experimentation to strategically impact conversion and revenue growth.

Traditional experimentation process:
End-to-end experimentation can be resource-intensive, inconsistently successful, and slow — taking weeks to months.
[Figure: Traditional experimentation process showing four stages taking days to months, with pain points faced during each phase.]
AI-led experimentation process:
Get inspired and scale faster with AI-led experimentation.
[Figure: AI-led experimentation process showing four stages with benefits — creating a continuous growth loop from ideation to action.]
Here’s how to begin using AI-led experimentation to help define testing strategies:

1. Align experiments to business outcomes first.

Start by framing experiments or upcoming test backlogs around defined, agreed-upon business objectives — whether that’s average order value, conversion lift, or reduced churn. Teams should anchor every test to a measurable KPI and outcome, ensuring each AI experiment insight or generated test idea contributes directly to revenue, retention, or engagement metrics. Adobe Journey Optimizer Experimentation Accelerator is designed to let experimentation teams set up and run tests in this manner.

2. Accelerate innovation from existing test data.

As brands look to AI for more agile decision-making, uncovering actionable growth opportunities hidden within test datasets is an area of high ROI. AI agents can analyze both historical and live experiment data to identify patterns in test behaviors and highlight untested content or audience segments with high learning potential, along with estimated conversion or revenue lift.

3. Map causal impact with unified analysis.

As organizations establish and mature their experimentation practices, their focus should shift from isolated channel tests to connected experiences that drive overall business impact. Teams are increasingly adopting causal impact analysis to understand how changes ripple across channels and journeys. This includes bringing any data source — online or offline — into a unified source of analysis. For example, teams may need to identify how an experience improvement in one area, like email or web, influences downstream conversion or retention.

4. Automate learning cycles for continuous innovation.

Every completed experiment should feed into a shared knowledge base — enabling AI to refine hypotheses, detect new opportunities, and recommend the next phase of experimentation. This continuous feedback loop transforms experimentation into a living, adaptive strategy that evolves with your business priorities and optimization goals.

The future of experimentation is AI-driven, transforming scattered insights into business acceleration. By unifying learnings, mapping causal impact, and automating discovery and analysis, organizations can turn every test into a source of smarter decision-making.

CHALLENGE 3

Expanding beyond traditional testing methodologies using the latest AI advancements.

Many organizations still rely on traditional testing methodologies, which treat optimization as a single-step event rather than a continuous cycle across the end-to-end customer journey. Experimentation tools in the market today are often designed to optimize landing pages, form completions, add-to-cart clicks, or email subject lines. The reality is that brands struggle to optimize other important actions across the entire customer lifecycle — from acquisition to engagement and retention.

Generative AI can support advanced content supply chain workflows that create on-brand content — but experimentation teams are still challenged to increase the number of tests and the velocity of their programs. Even if content bottlenecks are resolved, other constraints emerge, like the sample sizes and traffic volumes that standard test validation methods require to reach statistical confidence. Without an increase in visitor traffic across channels, teams are unable to activate the vast amounts of content generated for new experiment treatments or variants.

To truly accelerate testing for greater business impact, experimentation is evolving from single channel tests to AI-led experimentation methods with human-in-the-loop workflows across the full customer lifecycle.

Here’s how leading teams are using AI to evolve their experimentation programs:

Expanding from channel-specific tests to journey-wide optimization.

Across the marketing technology industry, experimentation is evolving from siloed channel testing to full-journey optimization. Brands that lead in experimentation are moving beyond single-touchpoint or single-channel metrics. They are uniting product, growth, and experimentation teams, using unified customer profiles and real-time behavioral signals to run omnichannel tests. This enables cumulative impact across journey-wide optimization and brings together the different business units running experiments.

It is critical to continuously optimize experiences across the end-to-end customer journey. According to Adobe, 78% of consumers expect a seamless experience across digital and physical channels, yet only 45% of brands meet this expectation.

Engaging in conversational experiences for experimentation workflows.

AI assistants and conversational interfaces make experimentation more intuitive and accessible. Investment is growing in AI tools that use natural language processing (NLP) for real-time interactions — surfacing experiment insights and suggesting next actions for test ideas. Conversational interfaces lower the barrier to experimentation by turning large amounts of data and analysis into dialogue, and they improve learning cycles by bringing AI directly into creative and strategic decision-making flows.

Activating adaptive experiments for rapid test iteration.

Generative AI has significantly advanced content velocity for new test ideas, yet experimentation teams are still limited by traffic volume and statistical rigor. A new experimentation method of applying adaptive iterations to active tests is unlocking faster and smarter AI-led test variant generation — all while reducing sample size needs and maintaining statistical confidence.
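
Adaptive iteration is commonly implemented with multi-armed bandit methods such as Thompson sampling, which shifts traffic toward stronger variants as evidence accumulates and spends fewer samples on clear losers. The source doesn’t specify the algorithm behind this capability, so the following Python sketch is a generic illustration of the idea:

```python
import random

class ThompsonSampler:
    """Beta-Bernoulli Thompson sampling over test variants.
    Traffic shifts toward winners automatically, reducing the samples
    spent on clearly losing treatments."""
    def __init__(self, variants):
        # Beta(1, 1) prior = uniform belief about each conversion rate.
        self.stats = {v: {"wins": 1, "losses": 1} for v in variants}

    def choose(self) -> str:
        # Sample a plausible conversion rate per variant; serve the best draw.
        draws = {
            v: random.betavariate(s["wins"], s["losses"])
            for v, s in self.stats.items()
        }
        return max(draws, key=draws.get)

    def record(self, variant: str, converted: bool) -> None:
        key = "wins" if converted else "losses"
        self.stats[variant][key] += 1

# Simulate: variant B truly converts better, so it earns more traffic over time.
sampler = ThompsonSampler(["A", "B"])
true_rate = {"A": 0.04, "B": 0.06}
served = {"A": 0, "B": 0}
for _ in range(5_000):
    v = sampler.choose()
    served[v] += 1
    sampler.record(v, random.random() < true_rate[v])
print(served)  # on a typical run, most traffic ends up on "B"
```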

Bringing together growth marketers, product managers, and analysts.

Business trends like product-led growth (PLG) freemium activations and product-led sales upsell motions have elevated experimentation into a core initiative for rapidly testing and optimizing growth goals. Large-scale experimentation programs need to centralize workflows across each stage of ideation, content creation, test execution, and measurement. A key differentiator for cross-team experimentation analysis is unified metric contribution across channels and tests, measuring downstream impacts on revenue, customer lifetime value (CLTV), or customer acquisition cost (CAC).

Accelerating confidence with simulated lift forecasting.

AI has proven valuable in analyzing experiments to reveal why an experiment did or didn’t work. Another area of focus is how generative and agentic AI can help predict the performance of generated test ideas. AI models can forecast potential lift, confidence intervals, and optimal test opportunities before rollout. By simulating estimated experiment outcomes, teams can use AI to flag test ideas constrained by low traffic or limited statistical power and focus resources on the highest-value opportunities.
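
Forecasting whether a test idea can realistically reach significance starts with a standard power calculation. A minimal Python sketch for a two-proportion test, assuming the conventional 95% confidence and 80% power defaults:

```python
from statistics import NormalDist

def required_sample_size(base_rate: float, expected_lift: float,
                         alpha: float = 0.05, power: float = 0.80) -> int:
    """Visitors needed per variant to detect `expected_lift` (relative)
    over `base_rate` at the given significance level and power."""
    p1 = base_rate
    p2 = base_rate * (1 + expected_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = variance * (z_alpha + z_beta) ** 2 / (p2 - p1) ** 2
    return int(n) + 1

# A 5% relative lift on a 4% conversion rate needs far more traffic than a
# 20% lift: useful for flagging ideas that low-traffic surfaces can't support.
print(required_sample_size(0.04, 0.05))   # roughly 150,000 visitors per variant
print(required_sample_size(0.04, 0.20))   # roughly 10,000 visitors per variant
```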

Experimentation is entering a new era — one defined by continuous optimization, contextual insights, and human-in-the-loop collaboration with experts across multiple business units. As teams embrace AI-led experimentation, they can move beyond the manual analysis and slow development cycles of the past to a more powerful and adaptive system that learns, evolves, and drives measurable impact across the entire customer lifecycle.

CHALLENGE 4

Scaling and operationalizing experimentation lifecycles across teams.

As experimentation programs grow, so do the dependencies — from data collection and analysis to design, deployment, and insight sharing. Without shared visibility or coordinated workflows, organizations face mounting friction between teams — resulting in slower test cycles, inconsistent learnings, and duplicated effort. Scaling experimentation requires more than new tools and technology — it demands organizational buy-in and collaboration that can streamline how ideas move from hypothesis to insight to action.

True scale comes when experimentation is owned and run by more than one team and becomes embedded across the organization as a core differentiator for the business. That means aligning stakeholders around common goals, establishing transparent workflows, and empowering practitioners to act on shared insights. When teams can see the full lifecycle of testing — from idea creation to measurable outcome — they collaborate more effectively and move faster from learnings to value.

Here are some examples of how organizations today are breaking down silos to accelerate learning and scale experimentation programs:

Building shared visibility and structured planning.

As experimentation programs expand, project management becomes just as critical as centralizing test data. Planning systems like Adobe Workfront can integrate with overall operations to manage team requests, organize idea backlogs, and assign ownership across marketing, product, and analytics functions. Centralizing this workflow ensures every test moves through a clear lifecycle — from intake and approval to execution and reporting — while giving experimentation leadership visibility across the enterprise.

Automating experiment analysis and leadership reports.

AI can instantly summarize experiment results, uncover causal impact drivers, and produce stakeholder-ready reports. Instead of manually compiling slides or spreadsheets, teams can receive clear narratives explaining why an outcome occurred, who it impacted, and how it connects to broader business goals and KPIs — turning analysis into an active growth lever.

Generative and agentic AI are redefining how teams collaborate. The value of investing in AI isn’t just speculative — organizations are already seeing significant, measurable returns. According to Adobe research:

53%

of senior executives using generative AI report major gains in team efficiency

50%

point to faster ideation and content production

The productivity benefits in the figure below extend directly into experimentation programs, where AI-led coordination accelerates testing cycle times and empowers teams to spend less time managing processes — and more time driving impact.
[Figure: Senior executives’ assessment of benefits experienced from generative AI over the past year.]

Developing a cadence for collaboration and knowledge sharing.

Mature experimentation programs tend to set up forums for leadership to coordinate across the business, such as biweekly or monthly cross-functional meetings. Teams use these sessions to review active experiments, share key wins and learnings, and agree on upcoming priorities. Regular dialogue builds institutional knowledge, fosters accountability, and ensures experimentation remains connected to wider business goals.

Aligning experimentation efforts to a customer journey-centric mindset.

When experimentation is focused only on broad acquisition or single-channel tactics, teams can miss the most decisive moments of customer interaction, the ones that signal greater long-term impact on loyalty and retention. Teams can also establish global holdouts — dedicated, randomly selected user segments that are excluded from all new experiments — within the test setup process. This creates a consistent control group that measures the true incremental impact of all campaigns and journey experiments over time. By combining precise testing with a journey-centric framework, organizations gain a clearer view of what genuinely drives growth: not just more conversions or traffic, but lasting customer value.
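
Global holdouts are typically implemented with deterministic hashing so that a user’s assignment is stable across every experiment and campaign. A minimal sketch, assuming string user IDs and an illustrative 5% holdout:

```python
import hashlib

HOLDOUT_PCT = 5  # reserve 5% of users as a never-experimented control group

def in_global_holdout(user_id: str, salt: str = "global-holdout") -> bool:
    """Deterministically map a user to a bucket in [0, 100); the same user
    always lands in the same bucket, so the holdout stays consistent
    across all experiments."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < HOLDOUT_PCT

def assign_variant(user_id: str, variants: list[str]) -> str:
    if in_global_holdout(user_id):
        return "global-holdout"  # excluded from every new experiment
    index = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % len(variants)
    return variants[index]

# Comparing the holdout's long-run metrics against everyone else measures the
# cumulative incremental impact of the whole experimentation program.
print(assign_variant("user-12345", ["control", "treatment"]))
```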

Scaling experimentation isn’t solely about running more tests — it’s about creating the infrastructure, alignment, and culture that makes learning continuous and operational. When teams share visibility, automate analysis, and collaborate across the entire customer journey, experimentation becomes a unified growth engine that connects insight to action across the organization.

CHALLENGE 5

Building an agentic AI experimentation infrastructure with multi-agent interoperability.

Many organizations are working hard to transform their experimentation from isolated tests to continuous, AI-guided optimization spanning many variations of an entire customer journey. Most quickly discover that their underlying systems simply were not designed for the agentic era of agent-to-agent workflows, adaptive experimentation, and multi-step orchestration. Experimentation is quickly shifting beyond static rules and A/B tests on single channels. The future includes context sharing, unified data access, cross-channel coordination, and standardized ways for agents to communicate and act.

This shift becomes even more pronounced as brands explore foundational layers where AI agents can be configured, governed, and seamlessly connected across a unified platform acting as a single control plane. Product, growth, and marketing teams that invest in AI-led experimentation need to customize the skills of their AI agents to include things like brand voice, data and security policies, orchestration rules, and test scenarios. These agentic skills require consistent data access, interoperability standards, and clear agent lifecycle management.

Without the right infrastructure, AI-led experimentation becomes fragmented, unreliable, or misaligned with broader marketing and growth goals for customer experience orchestration. As brands continue to adopt agentic and generative AI for use in experimentation, they must ensure that the underlying technology that powers the building and coordination of AI agents can support the next generation of AI-led experimentation workflows.

Here are five steps to consider when building the infrastructure and operating model required to unlock agentic AI for experimentation:

1. Establish unified agent configuration and governance standards.

Organizations need to ensure every AI agent acts consistently, can explain its decisions, and works within brand and regulatory policies. To get there, they need a shared governance model with a single interface for defining AI agent behaviors, including tone of voice, goals, policies, skills, and data permissions. Establishing these standards for customizing and extending AI agents is an important foundation before adopting agentic AI in any experimentation ecosystem.
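
As a thought experiment, the sketch below shows the kinds of fields such a shared agent definition might capture. The structure and field names are hypothetical, not Adobe’s actual agent configuration schema:

```python
from dataclasses import dataclass

@dataclass
class AgentPolicy:
    """Hypothetical governance record for one AI agent (illustrative fields)."""
    agent_name: str
    goals: list[str]
    tone_of_voice: str                   # e.g. "concise, on-brand, no superlatives"
    skills: list[str]                    # capabilities the agent may invoke
    data_permissions: list[str]          # datasets the agent may read
    requires_human_approval: list[str]   # actions gated behind human review
    audit_log: bool = True               # every decision must be explainable

experiment_agent = AgentPolicy(
    agent_name="experimentation-agent",
    goals=["surface insights", "propose test ideas"],
    tone_of_voice="neutral, evidence-led",
    skills=["analyze_results", "generate_hypotheses"],
    data_permissions=["experiment_inventory", "journey_metrics"],
    requires_human_approval=["launch_experiment", "pause_experiment"],
)
```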

2. Build a real-time data foundation that every agent can access for context.

Brands are evolving how they run experiments — from one-step optimizations within a single surface or channel to more AI-led experiments that can access unified customer profiles and extend across multi-step customer journeys. Product, growth, and marketing teams are building agentic AI workflows for experimentation that can access, reason, and decide based on contextual data across audiences, content, and test behaviors.

Adobe’s vision for Adobe Experience Platform Agent Orchestrator provides a deep understanding of customer behavior across web, mobile, journeys, service, content, and analytics — enabling AI agents to reason with much greater context.

Agentic architecture for customer experience orchestration
[Figure: Architecture showing agents and AI-first applications layers and the six components of Adobe Experience Platform Agent Orchestrator.]
Source: Adobe

3. Adopt interoperable protocols for AI agents to collaborate effectively.

Open standards like Model Context Protocol (MCP) and Agent-to-Agent (A2A) communication enable AI agents to take autonomous actions by accessing data and external tools while exchanging information seamlessly with one another. This is essential for an experimentation agent to work with an audience agent, a journey agent, or service bots. Standardized agent-to-agent communication ensures secure, consistent performance when triggering actions, sharing insights, and coordinating adaptive experiment changes across surfaces, channels, or systems.
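
The real MCP and A2A wire formats are defined by their published specifications; purely to illustrate the idea of a standardized envelope, a hypothetical message an experimentation agent might send to a journey agent could look like the following:

```python
import json
from datetime import datetime, timezone

# Hypothetical agent-to-agent message envelope (illustrative only; real MCP
# and A2A payloads follow their published specifications, not this shape).
message = {
    "from_agent": "experimentation-agent",
    "to_agent": "journey-agent",
    "sent_at": datetime.now(timezone.utc).isoformat(),
    "intent": "apply_winning_variant",
    "context": {
        "experiment_id": "email-cta-01",       # illustrative identifier
        "winning_variant": "treatment-b",
        "confidence": 0.97,
        "affected_journey_step": "post-signup-email",
    },
    "requires_human_approval": True,           # human-in-the-loop gate
}
print(json.dumps(message, indent=2))
```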

4. Create an operating model for AI agent deployment, testing, and optimization.

Experimentation teams are making the mindset shift from one-off testing workflows to ongoing supervision and governance of AI agents that assist in continuous reasoning, refinement, and decision-making to improve experimentation ROI. Adobe’s Agent Composer announcement sets the path for customers and partners to fine-tune AI agent actions through a single interface for business customization, plus developer tooling to build, extend, and orchestrate agentic applications for expanded use cases — reinforcing an agentic operating model in which experimentation teams train, validate, and optimize agent performance.

5. Embed experimentation into orchestration layers for multi-step journey and business impact.

Technology advancements in generative and agentic AI, combined with customer journey orchestration and the power of experimentation, are proving to be a core differentiator for brands. AI agents can now pass contextual learnings across acquisition, engagement, and retention touchpoints, recommend changes to messaging or engagement strategies, and coordinate actions across the end-to-end customer journey. For experimentation programs, this means tests no longer live in channel isolation; they become shared assets across lines of business, extending beyond core experimentation and optimization teams.

By establishing an interconnected infrastructure that includes data, governance, and orchestration, teams can ensure their experimentation programs are fully equipped for agent-driven workflows. This foundation enables AI agents to collaborate, adapt experiments, and deliver reliable optimizations across every customer touchpoint while scaling experimentation ROI and business impact.

A smarter era of experimentation is underway.

Agentic AI is transforming experimentation from a manual, siloed optimization step in the funnel into a customer experience intelligence layer that learns, adapts, and orchestrates actions across the entire customer journey. Brands that invest early in the underlying infrastructure that brings together unified data, interoperability, and governance will be able to safely and confidently scale AI agents and enterprise systems, transforming experimentation into a continuous, autonomous engine for customer experience and growth.

The challenges outlined in this guide point to a shared truth: traditional experimentation practices cannot keep pace with the speed, complexity, and coordination required by new generative and agentic AI innovations. As teams evolve how they prioritize ideas, connect experiments to business strategy, operationalize cross-functional collaboration, and modernize their AI infrastructure, experimentation becomes a core differentiator, helping brands understand what’s working with customers faster for smarter and more efficient business growth.

Turn experiments into AI-powered insights that drive growth.

Adobe Journey Optimizer Experimentation Accelerator brings AI-led insight discovery, automated test idea generation, and adaptive experiment workflows into a single unified hub for product, growth, and marketing teams. Experimentation Accelerator is powered by the Adobe Experience Platform Experimentation Agent to deliver faster, clearer insights from tests so teams can quickly understand what works and how to inform smarter optimization decisions.

Discover how Experimentation Accelerator can take your experimentation program to the next level.

Let’s talk about what Adobe can do for your business.

Get started