Multi-armed bandit testing

Automatically allocate traffic to the most successful experience sooner, for increased conversions and revenue.


Testing brawn meets experience brain.

There can be a cost to A/B testing. While you’re establishing a winner, you’re also feeding traffic to a less effective option until the winner is found. And these days, speed is everything. Maybe you’re running a promotion, trying to act on the latest pop culture craze, or you simply need results faster. You don’t want to waste time sending people the wrong experience.

With multi-armed bandit testing, Adobe Target helps you solve this problem. Its auto-allocate feature identifies with confidence which of the variations you’re testing are more effective and then automatically funnels traffic to the winning experience. By sending more traffic to what works and pulling back on what doesn’t, you can curb risk and drive stronger ROI, all while continuing to learn.


See what makes it work

Faster, smarter testing
Automatically find and deliver the best experience faster and with greater certainty, driving increased conversions and revenue.

Guaranteed stopping criteria
When an experience is the clear winner, our stopping criteria let you know straightaway so you can stop the test and move forward with the right experience. With our stopping criteria, we guarantee a 95 per cent confidence level.
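One common way a stopping check like this can work (Adobe Target’s exact statistics aren’t described here, so this is only an illustrative sketch) is a two-proportion z-test: stop once the difference in conversion rates between two experiences is significant at the 95 per cent level.

```python
import math

def confident_winner(a_conv, a_n, b_conv, b_n, z_crit=1.96):
    """Return True when two conversion rates differ at 95% confidence,
    using a two-sided two-proportion z-test (illustrative only)."""
    p_a, p_b = a_conv / a_n, b_conv / b_n
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (a_conv + b_conv) / (a_n + b_n)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / a_n + 1 / b_n))
    if se == 0:
        return False
    z = abs(p_a - p_b) / se
    return z >= z_crit  # 1.96 is the two-sided 95% critical value

# Example: 5% vs 8% conversion over 1,000 visitors each is significant.
print(confident_winner(50, 1000, 80, 1000))  # True
```

A real stopping rule would also correct for repeatedly peeking at the data as the test runs, which a one-off z-test does not.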

Intelligent logic
Unlike standard A/B testing, this approach allocates 80 per cent of visitors using intelligent logic, with 20 per cent randomly assigned to adapt for changing visitor behaviours.
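The 80/20 split described above resembles an epsilon-greedy bandit with epsilon = 0.2: most visitors are sent to the experience with the best observed conversion rate, while a fixed share stays random so shifts in visitor behaviour can still be detected. A minimal sketch (Target’s actual allocation logic is more sophisticated; the data structure here is illustrative):

```python
import random

def choose_experience(stats, epsilon=0.2):
    """Pick an experience: exploit the current best 80% of the time,
    explore a random experience the remaining 20%."""
    if random.random() < epsilon:
        # Exploration: random assignment keeps adapting to behaviour shifts.
        return random.choice(list(stats))
    # Exploitation: route traffic to the best observed conversion rate.
    return max(stats, key=lambda e: stats[e]["conversions"] / max(stats[e]["visitors"], 1))

# Example: observed traffic so far for two experiences.
stats = {
    "A": {"visitors": 1000, "conversions": 50},  # 5.0% conversion
    "B": {"visitors": 1000, "conversions": 80},  # 8.0% conversion
}
picks = [choose_experience(stats) for _ in range(10_000)]
print(picks.count("B") / len(picks))  # roughly 0.9
```

With two experiences, the leader receives about 90 per cent of traffic: the full 80 per cent exploitation share plus half of the random 20 per cent.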

Learn more about multi-armed bandit testing in Adobe Target.

Automate winner selection.

With auto-allocate, there’s no “control.” Learn in our documentation how every experience is implicitly compared against all the others, for a true winner every time.

Read more

Dig into how automation and testing work together.

In this white paper, discover how to reduce the costs and overhead that typically come with finding the best experiences, while learning and optimising in tandem.

Read more