Feedback by design: Better AI requires better human collaboration.

Research from the Adobe Experience Platform AI team reveals four critical feedback barriers in conversational AI and shows how thoughtful design can transform users from frustrated instructors into effective collaborators.

Generative AI is no longer a sandbox experiment. Inside Adobe Experience Platform, it's building audiences, launching campaigns, and shaping real customer experiences. And when it gets something slightly wrong, the cost is no longer theoretical — it's missed revenue, off-brand messaging, and eroded trust.

Yet here's the paradox: the most critical input these systems need — human feedback — is the hardest to get. An analysis of over one million ChatGPT conversations found that only 3.89% contained any user feedback at all. And much of it was too vague to act on: “wrong,” “try again,” “not quite right”.

However, we humans are brilliant at giving feedback in other contexts — coaching athletes, mentoring colleagues, and editing drafts. So why does feedback collapse the moment we interact with AI?

A core insight from one of our latest research projects is that AI feedback requires thoughtful design. The feedback interface is not a wrapper around intelligence, but an integral part of it.

The feedback breakdown.

Across interviews and prototyping sessions, a research team from Adobe and Johns Hopkins University repeatedly saw the same failure patterns emerge. Published in the proceedings of CHI 2026 — the premier international conference on human-computer interaction — their work identifies four systematic feedback barriers that prevent users from providing the clear, specific input AI systems need.

The four feedback barriers.

  • Common ground: In human conversation, we build shared context naturally. But conversational AI often loses track of goals across turns, forcing users into exhausting cycles of re-explanation. When the AI forgets constraints mentioned only a few messages ago, users either give up or waste energy constantly re-establishing context.
  • Verifiability: More often than not, users can't easily verify whether their feedback was understood and incorporated. This opacity creates a dilemma: either blindly accept the AI's output or manually fact-check every detail. Neither supports productive collaboration.
  • Communication: Articulating precisely what's wrong — and what would fix it — is cognitively demanding. When users struggle to express nuanced corrections, they resort to vague signals like “make it better” that give the AI no actionable direction.
  • Informativeness: Even when users know what to say, the friction of typing detailed corrections outweighs the perceived benefit. Small repetition costs accumulate, pushing users toward low-effort strategies — abandonment or acceptance — rather than iterative refinement.

Together, these barriers create a vicious cycle: small misunderstandings escalate into abandoned workflows. When users can't verify whether feedback was incorporated, they're less motivated to articulate detailed corrections, leading to shallow feedback that fails to establish shared understanding.

From barriers to bridges: Three design principles.

The research team didn't just identify problems — they derived three design principles and a total of seven scaffold components for overcoming these barriers, then built and tested a system called FeedbackGPT that brings them to life. Let’s explore the design principles in detail.

Preserve shared context.

Instead of relying solely on an ephemeral chat stream, effective systems need to externalize goals and constraints into a shared, editable workspace.

Scaffold components:

  • Inline comments and highlights: Users anchor feedback directly to specific text spans, eliminating vague references like “the second paragraph”. Annotations persist visibly in a sidebar and are passed back to the model as structured constraints, preventing context drift.
  • Undo and redo: Conversation snapshots enable experimentation without losing hard-won context — crucial for enterprise tasks where starting over means significant lost time.
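To make these two scaffolds concrete, here is a minimal sketch of a shared workspace that anchors annotations to exact text spans and keeps snapshots for undo. The class and field names are illustrative assumptions — the paper does not publish FeedbackGPT's implementation.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Annotation:
    start: int    # character offset where the highlighted span begins
    end: int      # character offset where it ends
    comment: str  # the user's feedback, anchored to that span

@dataclass
class Workspace:
    draft: str
    annotations: list = field(default_factory=list)
    _history: list = field(default_factory=list, repr=False)

    def annotate(self, start: int, end: int, comment: str) -> None:
        """Anchor feedback to an exact span instead of 'the second paragraph'."""
        self._snapshot()
        self.annotations.append(Annotation(start, end, comment))

    def apply_revision(self, new_draft: str) -> None:
        """Record a model revision; the old state stays recoverable."""
        self._snapshot()
        self.draft = new_draft

    def undo(self) -> None:
        """Restore the previous snapshot without losing hard-won context."""
        if self._history:
            self.draft, self.annotations = self._history.pop()

    def _snapshot(self) -> None:
        self._history.append((self.draft, list(self.annotations)))

    def as_constraints(self) -> list:
        """Serialize annotations as structured constraints for the next prompt."""
        return [f'"{self.draft[a.start:a.end]}": {a.comment}'
                for a in self.annotations]
```

Because annotations carry offsets rather than prose descriptions, they can be replayed into every follow-up prompt, which is what keeps context from drifting across turns.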

Lower the cost of feedback.

Rather than placing the entire burden of articulation on users, systems should actively participate in maintaining clarity.

Scaffold components:

  • Feedback huddle: When users struggle to articulate feedback, a focused space opens where the AI asks targeted clarifying questions, transforming vague reactions like “this doesn't sound right” into concrete, actionable instructions.
  • Quick actions: Common feedback patterns become one-tap operations — “regenerate using only highlighted changes,” “make more concise,” “add more technical detail” — lowering the effort required for rich guidance.
  • Feedback evaluation: Real-time guidance helps users refine input before sending, teaching them to provide clearer feedback with minimal extra effort.
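Quick actions and feedback evaluation are both cheap to prototype. The sketch below shows one-tap actions as prompt templates plus a naive vagueness check; the template wording and the heuristic are illustrative assumptions, not the actual FeedbackGPT prompts.

```python
# Quick actions: common feedback patterns as one-tap prompt templates
# (illustrative wording -- not the actual FeedbackGPT prompts).
QUICK_ACTIONS = {
    "concise":   "Rewrite the response to be more concise; preserve all facts.",
    "technical": "Add more technical detail to the highlighted passages.",
    "regen":     "Regenerate, applying only the highlighted changes.",
}

# Feedback evaluation: a simple heuristic that flags vague feedback
# before it is sent, nudging the user toward actionable phrasing.
VAGUE_PHRASES = {"wrong", "try again", "not quite right", "make it better"}

def evaluate_feedback(text: str):
    """Return a coaching hint if the feedback looks too vague to act on."""
    normalized = text.lower().strip(" .!")
    if normalized in VAGUE_PHRASES or len(normalized.split()) < 3:
        return ("Tip: say what is wrong and what a fix would look like, "
                "e.g. 'The intro is too formal; use a conversational tone.'")
    return None  # looks specific enough to send as-is
```

A production system would likely use a model-based judge rather than a phrase list, but the interaction pattern is the same: coach the user before the feedback is sent, not after it fails.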

Make changes verifiable.

Making the feedback loop visible and auditable sustains engagement over time.

Scaffold components:

  • Explanation: A one-click control prompts the AI to detail which context it used, which instructions it prioritized, and how it interpreted feedback, enabling users to diagnose whether poor outcomes stem from their input, the AI's reasoning, or both.
  • Split-view comparison: Side-by-side evaluation of response versions generated from different feedback makes comparative assessment fast and visual, preventing users from blindly accepting the first plausible output.

Screenshot of the FeedbackGPT interface showing tools for giving and reviewing feedback, including comments and highlights, undo and redo controls, a feedback huddle panel, quick actions, feedback evaluation, explanations, and a split-view comparison of changes.
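The core of a split-view comparison can be approximated with the Python standard library: a line-level diff of two candidate responses makes it obvious what changed between versions. This is a sketch — FeedbackGPT's actual comparison UI is richer than a text diff.

```python
import difflib

def compare_versions(old: str, new: str) -> list:
    """Line-level diff of two response versions: '-' removed, '+' added."""
    return [line for line in difflib.ndiff(old.splitlines(), new.splitlines())
            if line.startswith(("-", "+"))]

changes = compare_versions(
    "Our tool builds audiences.\nIt is great.",
    "Audience Builder builds audiences.\nIt saves marketers hours.",
)
```

Rendering the removed and added lines in two adjacent panes gives users the fast, visual comparison the study describes, instead of forcing them to re-read both versions in full.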

Proven results: From barriers to better feedback.

In a controlled study with 20 participants completing goal-oriented tasks, FeedbackGPT demonstrated significant improvements across three dimensions of feedback quality:

  • Goal-referenced feedback increased by 26 percentage points (from 32% to 58%) with no participant abandoning and restarting conversations — a common failure mode with baseline interfaces.
  • Actionable feedback increased by 25 percentage points (from 66% to 91%) as users provided precise corrections without lengthy explanations.
  • Progressive feedback more than doubled (585 vs. 242 characters per turn on average) with users supplying significantly richer information to guide the AI.

Overall feedback engagement increased significantly. One participant noted: “I feel I own the FeedbackGPT output more because I have better control.”

The boost in engagement did involve a trade-off: participants reported higher cognitive load. They found this “productive friction” acceptable because the interaction felt like genuine collaboration. As another participant told us: “It feels more like interacting with a human because you can point somewhere and they know what you're talking about.”

Chart comparing FeedbackGPT and ChatGPT across four feedback quality measures, showing higher scores for FeedbackGPT in goal-referenced, actionable, and articulate feedback, as well as greater feedback length per turn.

What this means for Adobe conversational AI.

These findings have direct implications for enterprise AI products like AI Assistant in Adobe Experience Platform and Adobe Brand Concierge, both powered by Adobe Experience Platform Agent Orchestrator.

In complex enterprise workflows — analyzing customer journeys, generating brand-aligned marketing copy, orchestrating multi-step data transformations — the quality of human-AI collaboration directly impacts business outcomes. When marketers can't guide AI to understand nuanced brand voice, or analysts can't communicate segmentation requirements, the result isn't frustration alone — it's missed opportunities, wasted time, and diminished trust.

The scaffold-based approach demonstrated in this research represents a blueprint for future iterations of Adobe's conversational AI:

  • For AI Assistant in Adobe Experience Platform: Imagine querying customer data with inline comments to mark which insights are valuable and which miss the mark, a feedback huddle to refine complex query logic through back-and-forth clarification, and a split-view comparison to evaluate different analytical approaches side by side.
  • For Adobe Brand Concierge: Picture highlighting sections of AI-generated copy that nail your brand voice while annotating areas that need refinement, implementing quick actions to apply brand guidelines to specific passages, and reviewing explanations that reveal how the AI interpreted your brand assets and constraints.

Adobe Experience Platform Agent Orchestrator, the underlying platform, can leverage structured, high-quality feedback data — span-level highlights, huddle transcripts, prompt evaluations — to continuously improve not just individual interactions but the models themselves through more nuanced reinforcement learning from human feedback (RLHF) and preference optimization.
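Structured feedback of this kind maps naturally onto preference data. Below is a sketch of what such a record might look like once flattened for training; the field names are illustrative assumptions, not an actual Agent Orchestrator schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class FeedbackRecord:
    """One scaffolded interaction, flattened into an RLHF-style example."""
    prompt: str               # original user request plus shared context
    rejected: str             # the response the user annotated or revised
    chosen: str               # the revised response the user accepted
    span_comments: list       # inline highlight comments (why it was wrong)
    quick_actions: list       # one-tap actions the user applied

def to_preference_pair(rec: FeedbackRecord) -> dict:
    """Reduce a record to the (prompt, chosen, rejected) triple used by
    preference-optimization methods such as DPO."""
    d = asdict(rec)
    return {k: d[k] for k in ("prompt", "chosen", "rejected")}
```

The span comments and quick actions that don't fit the triple are exactly the extra signal a richer pipeline could exploit — they explain *why* one response was preferred, not just *that* it was.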

Pairing interactive design with AI capabilities.

While this research demonstrates the power of thoughtful interaction design, overcoming feedback barriers also requires parallel advances in core AI capabilities:

  • Robust context management: Longer context windows aren't enough on their own; models also need robust attention and memory that prevent the “lost-in-the-middle” effect from burdening users with constant re-contextualization.
  • Verifiability by design: Advance calibration (reliable confidence scores), principled abstention (knowing when to ask for help), and inline citation (grounding claims in evidence) to shift the burden of fact-checking away from users.
  • Collaborative dialogue training: Move beyond single-turn instruction following to multi-turn interactions where models proactively seek clarification, offer suggestions, and engage in genuine exchanges.

Meet the humans behind the AI.

We sat down with the co-authors, Zheng Zhang and Nikhil Sharma, to learn more about what drove this work.

Q: What motivated this research?

A: We kept seeing this pattern where users would struggle to guide our AI systems, not because the models weren't capable, but because the interaction paradigm didn't support effective feedback. People are naturally good at giving feedback in other contexts — coaching, teaching, and collaborating with colleagues. We wanted to understand what was different about AI interaction and what we could do about it.

Q: What surprised you the most?

A: Two things stood out. First, the improvement in user feedback with the right scaffolds: a 26 percentage point increase is huge. Second, the “productive friction” trade-off. Users were working harder but felt more in control and took greater ownership of outputs. That tension between effort and empowerment suggests we need to think carefully about where cognitive load should sit in human-AI collaboration.

Q: How does this research inform Adobe's product roadmap?

A: This work directly influences how we're thinking about the next generation of Adobe Experience Platform Agent Orchestrator capabilities. The scaffolds we tested — inline comments, feedback huddles, comparison views — aren't just research prototypes. They represent interaction patterns essential for enterprise AI products where the stakes are high and users need genuine control.

Q: What's next?

A: The feedback collected through scaffolded interfaces is incredibly rich — much more nuanced than thumbs up or down signals. We're excited about how this structured feedback can improve not just individual conversations but also feed back into model training through more sophisticated RLHF pipelines. We're also exploring how these principles apply across modalities — voice, multimodal interfaces — and domains where human-AI collaboration is critical.

Learn more

Details of our work can be found in the full paper here. If building generative AI at enterprise scale excites you, explore the latest highlights and career opportunities at the Adobe Experience Platform AI site.

Paper authors: Nikhil Sharma, Zheng Zhang, Daniel Lee, Namita Krishnan, Guang-Jie Ren, Ziang Xiao, Yunyao Li

Guang-Jie Ren, Namita Krishnan, Huong Vu, Yunyao Li also contributed to this post.

Let’s talk about what Adobe can do for your business.

Get started