[Music] [Ryan Conrad Tincknell] Hello. Thank you all for coming. We really appreciate you sticking it out to the end. I know it's tough to stay this late, so we're very honored that you stuck around for our session. It means a lot to us. We hope that maybe we've saved one of the best sessions for last for you. We're also happy to answer questions afterwards. So I'm very honored to be here today presenting about how Home Depot has streamlined and automated their test creation process with my colleague, Apurva Sandbhor. So, Apurva, why don't you go ahead and give them a little bit of background about yourself? [Apurva Sandbhor] Yeah, definitely. Hey, everybody. Thank you for being here today. So I lead the Experimentation and Personalization team at Home Depot. I started as a software engineer in the experimentation space, gradually learning every facet of this amazing field. With over a decade of experience, I specialize in bridging engineering, business strategy, and data science, accelerating decision-making through experimentation and turning data into action and insights into growth. I'm really passionate about building high-impact systems, one of which we'll be talking about today in detail. When I think about what we have achieved at Home Depot for experimentation, I like to group it into two parts: program optimization and technology evolution.
I lead a very dynamic, product-driven team that enables experimentation and personalization across 45-plus cross-functional squads. These squads are essentially like balanced teams. They have their own product manager, engineers, data scientists, UX partner, everybody working together on solving maybe a part of the customer journey, maybe a particular feature, something interesting. But you can imagine the amount of collaboration and effective communication we have to ensure across all of those 45 squads while sitting as a centralized, unbiased party.
It's hard. - No pressure. - No pressure, definitely.
We also have a center of excellence, a CoE team, inside of experimentation. Our aim here is not just to think about the culture of experimentation, but also to advance our program maturity.
In the last year alone, we developed 11 frameworks which enabled us to unlock capabilities we didn't have before, to do experiments we hadn't even thought about before. And that has been a great win. When we talk about technology evolution in experimentation for Home Depot, one of the key things that is very striking for us is the scalable frameworks we built for content, feature, and algorithm testing. This helped us build and deploy tests faster. One quick example I can give you is, before the frameworks, if we wanted to do a content test, you had to go to the product manager, to the UX partner: "Can you give me this image? Upload it to a server and give me the image URL." Then we had developers writing the code for it and making the image swap. Now it's not like that. Now the CMS, The Home Depot CMS, talks directly to Adobe Target, letting it know, "Okay, this is the location, this is the image, let's make the swap." This definitely helped us unlock a lot more capabilities related to personalization and scale the number of tests we do. The second thing we did that helped a lot with scalability and delivering faster experiments was how we embedded automation into the experimentation lifecycle. This improved our measurement accuracy. This helped with improving the speed to insights as well. All in all, this has contributed to the scale, culture, and innovation of experimentation within Home Depot.
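To make that CMS-to-Target handoff concrete, here is a minimal sketch of the kind of call a CMS could make to register an image swap. The endpoint, headers, and payload shape are illustrative placeholders, not the actual Adobe Target Admin API contract or Home Depot's integration.

```python
# Hypothetical sketch of a CMS-to-testing-tool handoff: the CMS says
# "this is the location, this is the image, make the swap."
# Endpoint and payload shape are placeholders, not the real Target Admin API.
import requests

OFFERS_ENDPOINT = "https://example.invalid/target/offers"  # placeholder URL
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

def push_content_swap(page_location: str, image_url: str, offer_name: str):
    """Create a content offer that swaps in a CMS-hosted image at a page location."""
    payload = {
        "name": offer_name,
        "location": page_location,                     # where on the page the swap happens
        "content": f'<img src="{image_url}" alt="">',  # the creative coming from the CMS
    }
    resp = requests.post(OFFERS_ENDPOINT, json=payload, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json().get("id")
```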
How many years have you been at it? - Huh? - How many years have you been at that? It's not like it happened overnight. No, it did not. And we're still evolving. We started two to three years ago with all of these efforts, and we are still adding to what we have done. We are not done yet. Yes. So I think that wraps up my introduction. So, Ryan? [Ryan Conrad Tincknell] Ryan Tincknell, I'm an Enterprise Architect on the consulting team with Adobe. So I partner with companies like Apurva's, like Home Depot, to help them with a roadmap for how they want to mature in their technology use. It's not just the technical configuration, it's the operational adoption, the learning, the standardization of how it's used. And I'm sure many of you are aware, if you put someone who doesn't understand how to test into Adobe Target to build a test, you could blow up your site. So that's one of the challenges with creating program levels for experimentation: that risk, which we're going to dig into a little bit also. But it's a pain point that I've personally felt. My whole career has been based on experimentation. I started in marketing research back before website personalization and testing was a thing, leading new product development for craft, surveys, mall intercepts across a number of industries. So when I saw an opportunity to get into website optimization through paid media, in pay-per-click, in 2006, I started splitting my own ad copies to different landing pages to optimize them manually, and then moved up gradually as I learned about Adobe Target and the Adobe platforms after they bought out Omniture, to then help lead broader-scale experimentation programs. But I always felt the pain of trying to push up the volume of tests. We showed that we had created value, but we always hit a ceiling we couldn't expand past in terms of volume and scalability.
So what are we going to talk about today? Before we go ahead, Ryan, can you share a fun fact? Okay. He always has these very interesting facts. Listen to it, guys, please. I'm going to get one from you too. - Yes. - My daughter is 15. She's got her learner's permit, but she's hopelessly addicted to Starbucks. So every day to school, she wants to drive, because then she can force me or my wife to go to Starbucks because she's driving. I'm also a descendant of two presidents, John Adams and John Quincy Adams, but you can tell which one came to mind first. So if there's anybody from Starbucks in the room, I want to talk to you afterwards. It's your fault. Yep. You want one from me too? - Yes. - Okay. So I love reading a lot, but I like physical books. I don't like Kindle. My husband has tried to gift me one, but I've never opened it. I love taking handwritten notes. I have this big collection of fountain pens and inks that I use. So if you're on a video call with me, you'll always see me taking notes while talking to you. So I do like that. And on a more-- And that's what her one swag request was. Yes. I'll speak for you. Adobe, the books. I'll speak for Adobe, but I want that stationery, definitely. On a more personal note, I have a one-year-old baby boy whose recent hobby has been watching cars go by. He doesn't know how to say "car," so he'll just say, "Vroom, vroom." And when he says vroom, vroom, we have to go and show him a car. That's it.
Ryan, while you do that, let me talk about what this session is all about. Here we go. Yeah. We're going straight into the agenda. Yes. In this session, we'll be talking about how Home Depot evolved experimentation as a product, why we did that, what challenges we faced, and all the key learnings that we have. Along the way, we realized that making experimentation truly accessible, truly democratized, is very crucial for us. We will also share what it took to get there and how we iteratively improved it. I don't want to give away too much upfront, but the results were extraordinary. And where we are headed next is truly exciting.
So where were we two years ago, right? Two years ago, the program was more ad hoc. It was more of a service, a support group. Like, you have a test, you walk up to somebody, and they'll be like, "Okay, let's use Adobe Target, let's launch this." Now it's more of a product, experimentation as a product.
Transforming it this way helped us make the process very efficient, helped us scale the number of tests we do, helped us not just improve the quantity of tests, but also the quality of the tests, because we now had more measurement accuracy and more trust in the data we were sharing with our stakeholders. One of the key points there was how we established a very streamlined process.
We adopted Adobe Workfront as our program management tool at this stage.
Whenever a stakeholder has a request for us, they have to fill out a very standardized intake form where we understand from them why they are doing this test, what they want to improve, what the hypothesis is, what the key success metric they are trying to move is, etcetera. All of this happens in Workfront. After that comes more of a product lifecycle, or a software development lifecycle, where we assess the feasibility of the request.
Is it technically feasible to do this? Is it analytically feasible too? Do we have all the right analytical tracking we need to collect the data for this test? And also, whatever we are trying to achieve, is it possible in the timeline that is set for us? But think about this concept: experimentation as a business product. Many companies view A/B testing as something very tactical, sometimes as an afterthought, but it really drives the organization to be data-driven. And the elements Apurva is talking about are all important, if you're in charge of an experimentation program, to help grow your footprint and frame it the right way based on how it can drive that decision-making. True.
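As a rough illustration, a standardized intake like the one described above might capture fields along these lines; the names are hypothetical, not the actual Workfront form schema.

```python
# Illustrative shape of a standardized experiment intake record.
# Field names are assumptions, not Home Depot's actual Workfront fields.
from dataclasses import dataclass, field

@dataclass
class ExperimentIntake:
    requester: str
    hypothesis: str              # why we are running this test and what we expect
    primary_success_metric: str  # the one metric the test is judged on
    target_page: str
    guardrail_metrics: list[str] = field(default_factory=list)  # e.g. revenue metrics we must not harm
    desired_launch_date: str = ""

intake = ExperimentIntake(
    requester="PLP squad",
    hypothesis="A larger add-to-cart button increases add-to-cart rate",
    primary_success_metric="add_to_cart_rate",
    target_page="product-listing",
    guardrail_metrics=["revenue_per_visitor"],
    desired_launch_date="2024-06-03",
)
```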
Now we are part of every product manager's roadmap. If they have a product that they need to deliver, right under that line item, you're going to see an A/B test for this product, something like that. That helped us make sure that we are scaling and expanding the impact of experimentation too.
Of course, we go through the software development. You are building the test. You are QAing it. You are making sure the SREs know that you're launching something, you're changing something on the site. You launch the test. Now what happens is the test is live, the test ends, and this is where the automated pipeline kicks in.
Our experimentation analysts don't need to go in and pull the data from our data sources anymore. The automation pipeline takes care of that. All of the data is pulled in. The statistical analysis is done. Outliers are removed.
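As a rough illustration of the kind of step such a pipeline automates, here is a minimal sketch that caps outliers on a per-visitor metric and runs a two-proportion z-test on conversion. It is an assumption-laden example, not Home Depot's actual pipeline or statistical methodology.

```python
# Minimal sketch of one automated analysis step: trim outliers, then a
# two-proportion z-test on conversion. Illustrative only.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

def trim_outliers(values: np.ndarray, pct: float = 99.5) -> np.ndarray:
    """Cap extreme per-visitor values (e.g. bulk orders) at a high percentile."""
    cap = np.percentile(values, pct)
    return np.minimum(values, cap)

def conversion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test comparing conversion rates of control (A) and variant (B)."""
    return proportions_ztest([conv_a, conv_b], [n_a, n_b])

# Example: cap revenue-per-visitor outliers, then test conversion on 5,000 visitors per arm.
rng = np.random.default_rng(7)
revenue_per_visitor = trim_outliers(rng.lognormal(mean=3.0, sigma=1.0, size=5000))

stat, p = conversion_ztest(600, 5000, 660, 5000)
print(f"z={stat:.2f}, p={p:.4f}")  # a flag for the analyst to interpret, not an auto-ship decision
```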
What our experimentation analysts, our experimentation subject matter experts, need to do now is come in, help interpret the data, and make sure there are insights for the stakeholders. Make sure they are actionable. If the aim was to improve engagement with the feature, yes, we saw the engagement improve, but people did not convert as much. People did not buy as much. What happened there? Maybe they added to cart, and something at the bottom of the funnel did not work for them. Help work with the stakeholders to interpret where in the customer journey the feature failed for them, or where in the customer journey the customer preferences changed, something like that. Are they only looking at a purchase? Are they just focused on a purchase metric to optimize? No. We have over 450 metrics in our data dictionary right now. We do make sure, though, with any of the features or any of the A/B tests or experiments we launch, that we are not harming the revenue metrics statistically. That's the thing we do check. But you're right, it's not always the revenue metrics. It could be as simple as engagement. It could be as simple as did somebody even view the feature. Something like that.
Okay, so we have built this report now. We are sharing it with the stakeholders, reviewing it with them. And when they are all okay, when we have understood that all of this is good, we close the loop of experimentation. There is a closeout form now in Workfront itself that makes sure we are recording the learnings from our experiment. Is it a win or a loss? What else did we learn from it? Were there any other improvements we could make along with this? What were our recommendations, etcetera? This helps us close the loop. This helps us record all of the data that needs to be recorded. If my leaders now come in and ask me, "Apurva, how many tests did we do last quarter?" I don't have to go and pull an Excel sheet and try to figure it out. I can go into Workfront, within a matter of two minutes, pull that data, and inform my leaders. In fact, I can even give leadership a pretty view of the data that they can look at whenever they want, like a live dashboard, without having to build the dashboard. So that definitely helped us. This sounds like it takes a lot of people, though, to manage and maintain. What does your team structure look like? Right now, yes, you're right, it does take people to maintain this, but it's not a separate skillset or a separate set of people we are looking at. We make sure that our experimentation analysts themselves, who are driving the end-to-end lifecycle of an experiment, keep track of all of this. But it's easier now to do that because of the Adobe Workfront tool. Basically, all the communication, anything we want to understand about the experiment or inform stakeholders about, is done through Adobe Workfront.
From a process standpoint, it sounds like there are handoffs and checks. Is that a complicated process? Are there manual steps involved? There are. There are manual steps involved, especially when we are interacting with stakeholders. You want a data asset. You want to understand a metric definition, maybe. You want an API to develop that experiment. You still need to go ahead and talk to that particular stakeholder.
Try and get a meeting on an IT engineer's calendar to try and understand how that API works. There's still a lot of emails. There's still a lot of meetings. One of the number one complaints my team has is the back-to-back meetings, and they hate that. Yeah. I've experienced this with other companies too, and I feel like it makes me mentally dizzy sometimes, especially when you get up to the volume of tests that Home Depot's producing. Yep. And that must have resulted in some challenges that you've experienced across this process. Yes. We have. As The Home Depot's experimentation program grew, we realized that we had reached the limits of a centrally managed approach. We were not able to scale the number of tests that we do. As we try to scale experimentation...
Across the enterprise, across the organization, we end up introducing more operational challenges that affect our efficiency, adoption, and overall program effectiveness.
Let's break these down, right? Let's talk about development effort. Right now at Home Depot, it takes a lot of time for us to design and develop a test. There are two ways you could develop a test. One could be as easy as, okay, let's work with the experimentation team and develop the test. But you still have to go and talk to maybe the engineers or the data scientists. Let's consider an example. You want to change the model for a recommendation container. You're going to the data scientists, you're going to your product managers or engineers to get the API for it, understand the structure, actually go ahead and write the code for it, make the change, and then deploy the test. The other way could be working very closely with IT engineering partners. But that still involves a lot of back and forth. There is still a lot of communication to make things happen, even to work in that hybrid way.
This just slows down the whole process.
The second part is program management. Yes, we had this great tool. We had Adobe Workfront to help us with that.
But you saw the process. There were so many approvals. There were so many steps involved that it started becoming cumbersome. With hundreds of tests in the pipeline, you can imagine how the experimentation team became a bottleneck again, slowing down the whole process. And how long would it take, when you need these approvals, for them to turn that around? Everyone will have their own SLA, so I can give you the overall numbers. Before all of this, the end-to-end lifecycle of an experiment, from concept to the end of the experiment, was 60 to 70 days. Now we have been able to reduce it down to 21 days. So you can imagine how many handoffs were involved, how slow the process was.
The third is the learning curve. Many of our stakeholders did not understand the nuances of experimentation. Yes, we had thorough documentation. We had thorough training. My experimentation analysts did make sure that our stakeholders understood these concepts. But still, the learning curve held us back, mainly because we were expanding experimentation across the organization. So you would always have some new stakeholders coming in. You would always have people leave the organization, leave the company, move here and there, which again caused-- - Just upheaval. - Exactly. Which again caused retraining and everything.
This also led to the fourth challenge, adoption. Because of the cumbersome, slow process, many people were not ready to work with us, or were hesitant to work with experimentation. Takes too much effort. - Exactly. - What am I going to get from it? Yes.
So has anybody else experienced-- Is this resonating with you? Have you seen this challenge in your own organization? I've seen it personally myself across a number of different companies, which is why I was so excited about the opportunity, I think, that was presented to Apurva. And when she came to me, I thought it was a tremendous opportunity, and it was worth sharing with you. Yep. Definitely. The challenges we faced made us think, "Yes, we have to evolve our approach," and that was the birth of AutoTest. We used the tools we had to scale experimentation in a more automated, more decentralized workflow. In this case, we used Adobe Workfront and Adobe Target. But what was the catalyst, right? Ryan, as you said, we had faced these challenges before too. But there was a trigger for us. One of the teams at Home Depot used to use Google Optimize for their experimentation needs. They used to independently run tests. But as you know, Google announced the sunset of Optimize. Now they had no other tool to work with. They were trying to figure out how to work with us, but they were not ready to, or their business needs did not suit the centralized approach that we had. They wanted more control. They wanted more control and a hands-on approach to the way their tests were designed and launched.
If we had stuck with the traditional way of doing things with them, one of two things would have happened, right? They would have said, "No, we don't want to work with you," and they would have gone and found another tool they could work with, and that's just, again, a messed-up way of doing things for a big org like-- Not even to mention the risk that incurs by having multiple optimization tools that can't talk to each other, potentially undermining each other. True. And it just leads to competition within orgs, which we don't want. That's not the way to grow. The other thing that could have happened is they would have just given in and accepted, "Okay, let's do it in the centralized way," but now we need more resources to do it, because it's a completely new set of business and a completely new set of stakeholders we would be handling. We didn't want that. We wanted to evolve further. We wanted an approach that gave better autonomy, better flexibility. Our centralized approach was not giving us that. This is where decentralization came into focus.
So how did you approach this once-in-a-lifetime opportunity to finally win over this team that had been resisting you for years? Yes. It was mainly about recognizing their requirements, but also recognizing the risks involved. With decentralization, we knew there was a risk of fragmentation. There was a risk of losing the quality and consistency we had strived to achieve over the last two, three years.
Based on all of this, we started defining our requirements for the product. The requirements fell into three buckets. One was automation. Of course, we had to scale this. We wanted to scale it without increasing the number of resources. Of course, automation was the key.
Second was governance.
We wanted to make sure the right set of people are accessing the right set of functions and tools in this new integration. That's where governance comes in. The third was standards.
We had worked over the last two, three years to instill best practices as we expanded across the org. We had to make sure there were guardrails in place, standards being followed, as people outside of our team started to launch tests. And how many people have seen a stakeholder or a leader say, "Just go prove that I'm right with this test"? And then when the results come back different, they're like, "We'll just change the results." I experienced this back in my marketing research days when they didn't like the survey results. It's still true today.
And the more the leader knows about data, the more of a blessing and a curse it becomes.
So we used all of these requirements to outline a streamlined workflow for us. Show of hands, how many of you have used the Adobe Workfront tool? Quite a few of you. Quite a few of you. For those of you who haven't, you can work with Adobe. But for now, please take our word for it: it's a very easy-to-use tool. You want to build a report? Just click on it, enter a few details, and the report is built. We wanted to maintain the same accessibility, the same user-friendliness, with this tool too. If you want to do this, you just go ahead and open an Adobe Target activity request. If you want to add an audience to it, just go ahead and add the audience. If you want to add offers to it, just go ahead and add offers and submit the request. It's so easy for any non-technical person to do. But I want to make sure you realize what this is. This isn't just a normal workflow. This activates the test in Adobe Target automatically, by API. It syncs bidirectionally, and we're going to get to that in a minute. And these guardrails, I think, are the real gold of this, because they enforce those best practices for people who don't have the time for that learning curve to understand how to do it themselves properly. We can conditionally stop them from launching a test if they're doing it wrong. Yeah. You're right. In this case, if somebody submitted a request which is all good, which follows your standards, of course, you can go ahead and approve it. But let's say they didn't follow the rules. Let's say Ryan comes in, and he's trying to launch a test on the home page and push it to 100% on day one. That's a no-no. That's not going to happen. That's a guardrail right there. That's where we make sure that, "Okay, Apurva, Ryan is doing this. What do we want to do?" We get an automated email, an automated trigger notification, that lets me know, lets my team know, and we can take the next steps as our guidelines tell us. Unfortunately, it also got triggered to my boss, and then I got assigned a remediation track of videos that I had to watch so I could learn better about testing if I wanted to do it again. Yes. If Ryan does that again and again, I can definitely escalate it very easily.
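A minimal sketch of how a launch guardrail like the one just described might be expressed, assuming hypothetical rule names and thresholds rather than Home Depot's actual configuration:

```python
# Illustrative guardrail check: block (or escalate) a request that pushes a
# home-page test to 100% traffic on day one. Rules and thresholds are examples.

HIGH_RISK_PAGES = {"home", "checkout"}
MAX_DAY_ONE_TRAFFIC = 50  # percent

def review_request(page: str, traffic_pct: int, requester: str) -> tuple[bool, str]:
    """Return (approved, message). A failed check triggers a notification instead of a launch."""
    if page in HIGH_RISK_PAGES and traffic_pct > MAX_DAY_ONE_TRAFFIC:
        return False, (
            f"Blocked: {requester} tried to launch on '{page}' at {traffic_pct}% on day one. "
            "Notifying the experimentation team for review."
        )
    return True, "Approved: request meets launch guardrails."

print(review_request("home", 100, "Ryan")[1])   # blocked, notification sent
print(review_request("plp", 10, "Apurva")[1])   # approved
```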
But does this have all of the capabilities that you've built in? Is there something I couldn't do? No. You can do everything you can do in Adobe Target, and more. So you can, of course, create activities, your offers, audiences. But the guardrails, the approval steps, that's what's new over here. Let's say there is already a stakeholder who has launched a test on the home page, which is running this week, Monday to Friday. But I come in and I say, no, no, no, I want to launch another home page test today, which would essentially end up overlapping that test. I get an automatic trigger that lets me know that. Now it's on you, right? It's on your organization and its best practices to decide. If it's overlapping and, "Okay, fine, this works for us," let's go ahead. If it doesn't work for you, you can immediately let the stakeholder know: wait, the end date of this test is Friday. You can launch your test on Monday, Saturday, whenever you want to. But aren't my tests all going to get mixed together with those of all of the other 45 divisions? How am I going to find them? We can easily handle that, right? This is where Workfront and Target help you. That's where program management comes into focus, and it is, again, automatically done. - Partitioning. - Exactly. Partitioning. Got it. How about the rest of the pieces, though? Doesn't it need other things to build the test? Yeah. We, of course, have audiences as we do in Target. We have offers too. If you have any audiences and offers already in Adobe Target, you can easily sync them up in Workfront as well. So you don't need to go ahead and recreate them. But if you want to recreate them, you have an option for that too.
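Here is a small sketch of the overlap check described a moment ago, using illustrative records and dates; the real implementation lives in the Workfront and Target integration rather than standalone code like this.

```python
# Sketch of the overlap check: two activities on the same page with
# intersecting date ranges raise a flag for the requester and the
# experimentation team. Records and dates are illustrative.
from datetime import date

def overlaps(start_a: date, end_a: date, start_b: date, end_b: date) -> bool:
    """True when two date ranges intersect (inclusive)."""
    return start_a <= end_b and start_b <= end_a

running = {"page": "home", "start": date(2024, 6, 3), "end": date(2024, 6, 7)}    # Mon-Fri
incoming = {"page": "home", "start": date(2024, 6, 5), "end": date(2024, 6, 12)}

if incoming["page"] == running["page"] and overlaps(
    running["start"], running["end"], incoming["start"], incoming["end"]
):
    print("Overlap detected on 'home': existing test ends Friday; "
          "either accept the overlap or move the new launch to Monday.")
```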
The same complex way of creating audiences, like adding the logic operators of and/or, include, exclude, all of that still exists. We can still make sure we create audiences and offers as we need them. But what if you wanted just one division not to be able to create their own audiences? - Could you exclude them from that? - Yes. That's where our access groups, our governance, come into focus. We can easily partition. We can easily say, "No, I don't want Ryan's team messing with my test. Ryan, you cannot access my project." I know, I already doomed myself earlier by pushing it to 100% and losing $100 million for Home Depot in three seconds. Yep.
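To illustrate the include/exclude logic and the access-group partitioning just described, here is a hypothetical sketch; the structures are assumptions, not the actual Workfront or Target data model.

```python
# Illustrative audience rule (and/or, include/exclude) plus access-group
# partitioning. Hypothetical structures, not the real data model.
audience = {
    "name": "Returning DIY visitors, mobile excluded",
    "all_of": [                                   # AND of include conditions
        {"include": {"visitor_type": "returning"}},
        {"include": {"segment": "diy"}},
    ],
    "none_of": [                                  # exclude conditions
        {"device": "mobile"},
    ],
}

access_groups = {
    "online-merchandising": {"can_create_audiences": True},
    "store-services":       {"can_create_audiences": False},  # must use shared audiences
}

def can_create_audience(team: str) -> bool:
    return access_groups.get(team, {}).get("can_create_audiences", False)

print(can_create_audience("store-services"))  # False -> must request from the central team
```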
Thank you for not firing me. But this couldn't have just been flipped on overnight, right? It was not. It was a very methodical, phased approach, a crawl-walk-run progression to make this happen.
Don't worry about the flow and the diagram over here. I know it looks intense, but it's all done now. If you want to adopt this, you don't need to go through all of this. The tool is ready. The tool is repeatable and configurable as you and your organization need it. This was just the process. Thinking about establishing experimentation as a business product, we had to go through this process to prove out to leadership that what we were building could work, and that it was going to help drive value. And so we had to go through multiple phases, early pilot programs. Why don't you tell us a bit more about how that rolled out? Yes. There were three phases for us. Phase one was more a proof of concept. Yes, we have this amazing idea, but is it even technically feasible to do it with the tools in hand? That's what we proved in this stage. We had some early adopters internally look at it, play with it, and help us figure out: is it what we wanted it to be? We used this demonstration to talk to our senior leadership, get the buy-in we needed, and help prove how this could drive value. Then came phase two. We started adding features and capabilities that Home Depot needed. In this case, it was audiences and offers. We also made sure we ran an end-to-end experiment in production, which helped us understand not just the technical feasibility of the tool, but also helped us map the process that needs to be in place to make sure this happens seamlessly. Then came phase three.
This is where, as Ryan likes to say, all of the magic began. The guardrails came into focus.
The guardrails about how to launch a test. The guardrails, a checklist that everyone needs to go through before launching or even designing any test. It could be as simple as a naming convention to follow for a test name, or as complicated as making sure tests don't overlap and sending us automated triggers if they do. And these are all configurable as you need them to be, as your organization needs them to be. And having experienced everyone trying everything to change a test or to subvert the system over the past 10, 15 years, we had a lot of ideas already. We didn't need as much input from others. Yes. If anything, we thought of things that they didn't. Yes. We knew all the other tricks our stakeholders could pull to break the system. We tried to break it on our own, and iteratively kept improving it and adding more guardrails. So this was a long journey. It spanned the course of a year: three phases of work, multiple rounds, early pilot programs. But the great news is that the Target API and Fusion API are the same for every company. They don't change for Home Depot. They don't change for Starbucks, which means it's all very replicable for those Fusion scenarios. So what we're doing now is what I think is really exciting. It's creating basically an extension of your content supply chain to activate out to experience endpoints, in this case with Adobe Target on-site. And you can benefit from all the hard work and pain that Home Depot's gone through in a very short cycle, for much less total effort. So just an example of how other companies are starting to take advantage of this to roll it out: we're actively involved with multiple major enterprise companies across industries, and we're taking into account their individual feature and capability needs. We talked about how some groups shouldn't be able to build audiences. They should just be taking those audiences in from the Experience Cloud connections, passed over from RTCDP. We don't want them to build those audiences. And the way we've designed the Fusion scenarios makes it very componentizable. So if you want to replicate this, you replicate the parts that matter to you. But other companies are also looking to add on features. So we have some other major companies looking at the Recommendations API, other capabilities, even other channels where this whole Fusion blueprint can now be applied: potentially email, direct paid media, other areas where we could also automate. So think about this as you're establishing your business product for experimentation. It shouldn't be tactical, just on-site. This is where you can grow your footprint. Experimentation and automation can happen everywhere, centrally, through workflow, as an extension of your content supply chain. Yep.
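As an illustration of a configurable pre-launch checklist of the kind described, here is a minimal sketch with an example naming convention and a hook for additional rules; both are assumptions, not Home Depot's actual guardrails.

```python
# Minimal sketch of a configurable pre-launch checklist: a naming-convention
# check plus a slot for heavier rules (overlap detection, end-date checks, etc.).
# The convention and rule list are examples only.
import re

NAME_PATTERN = re.compile(r"^[A-Z]{2,5}-\d{4}-[a-z0-9-]+$")  # e.g. "PLP-0042-hero-image-swap"

def check_name(request: dict) -> list[str]:
    if not NAME_PATTERN.match(request.get("name", "")):
        return ["Test name does not follow the <TEAM>-<ID>-<slug> convention."]
    return []

def check_end_date(request: dict) -> list[str]:
    return [] if request.get("end_date") else ["Test has no end date set."]

CHECKLIST = [check_name, check_end_date]  # organizations plug in their own rules here

def pre_launch_review(request: dict) -> list[str]:
    """Run every configured guardrail; any returned message triggers a notification."""
    issues = []
    for rule in CHECKLIST:
        issues.extend(rule(request))
    return issues

print(pre_launch_review({"name": "homepage test", "end_date": ""}))  # two issues flagged
```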
So, where does the rubber meet the road? What did this do for Home Depot, Apurva? So this is my favorite slide. I know Ryan is taking his time to reveal the numbers.
Okay. So the key benefits of this. We saw a dramatic increase in the test volume. Many more teams are willing to do testing with us, increasing the impact of experimentation across the org now.
There is a streamlined process, a streamlined workflow now. My team doesn't need to toggle between Adobe Target and Adobe Workfront. There is one centralized tool that does both. There's process execution, and there is test setup as well, making the experience seamless and reducing the back-and-forth communications. There is more ownership and accountability on the test requester, on the stakeholder side, which helped us a lot too, mainly with faster test launches. We reduced the time we spent on the design and development of experiments by 86%, accelerating experimentation cycles. The third one, this is my favorite: cutting down resource requirements by 83%. Again, this does not mean I started laying off people at Home Depot. All I mean is we started redirecting these resources away from repetitive tasks, away from these back-and-forth communications and meetings, into more meaningful work.
Each of my experimentation analysts has a side project, or projects like innovation sprints, which they can focus on along with working on experiments. It could be improving our sample size calculator. It could be contributing to our data automations, making them faster, making them efficient. It could be figuring out which other experiment types we could focus on. That definitely helped us a lot. One thing I would like everybody to think about, though, is navigating this change with automation and AI.
One of the perceptions the team had was, "Apurva and the team are automating a lot of things. Does this mean it costs me my job now?" It was not so. We were not laying off people. The perception of the job was changing. We were elevating. We were upskilling, not eliminating. That was the key idea over there. I know, when I've been in that spot, I thought that when someone started taking my job away from me, the next step was me getting cut. - Yeah. - So how did you help turn that around? I think it was more about awareness. The team knew why we were doing this, what we were doing, and what their contribution to it was. They were aware of what next steps were coming their way, what exciting work was coming their way.
Like, I remember team members saying, "We are excited that we have something cool to work on. We are excited that we have a nice bullet point to add to our resumes now, which is not just facilitation anymore." They were excited about all of this. I know there is no way to quantify this, Ryan, but boosting team morale is one of the key benefits I like to focus on over here.
It also helps that we took all the tasks they didn't really like and automated those too. Yeah.
So where do we go from here? Just because we've rolled this out and we've seen great benefits doesn't mean we stop here. And you've probably heard about it. In fact, I think they just changed the name of the project. It's AJO Experiment Optimizer, where it helps to build hypotheses. So while this builds a test, and we think about a content supply chain, we've got the offers being designed, we need to help them craft a good hypothesis. So just like there's a learning curve to building a proper test, the actual hypothesis, the KPIs that go with it, also take some help. So we've been very excited to partner with the Adobe product team that's building out this component, which will live in both Adobe Target and also AJO, that can help with uncovering opportunities for tests in terms of performance numbers and then craft that hypothesis to guide these teams. And it can be partitioned, from what we've seen, down to individual divisions. So 45 different divisions keep their own specific view of test performance. So we're excited to prove this out, and perhaps we'll be back here next year to speak about how we're able to get more people involved with building quality tests, and not pet-project tests.
I know I've had at least a few where someone says I want to do a test on 20 people.
So that's what excites me about this, because then all of the hundreds of KPIs Apurva mentioned that she runs on tests-- I don't think you illustrated this. It's not that she has an array that goes to different tests. She runs 200 KPI stats for every test. We can put those all in and power that hypothesis generator too, is what we're going to look to do. Yep. So I'm very excited-- One more thing, Ryan, which you remember I requested: it's not just about the idea generation. Of course, a great experiment requires a great idea. It's also about validating that hypothesis. I don't know how many of you would relate to this. We have stakeholders who come in with a hypothesis and, somewhere down the line, change the primary success metric or change the hypothesis altogether while the test is going on. It's just crazy. Validating that hypothesis beforehand, and not mid-flight, would be very helpful to us, and in an automated fashion too. Totally. I just have no words for that one. I lost some words. So, my shameless plug: ACS was honored to be a part of this program, to help support the design, the rollout phases, the testing process, the configuration of all of the Fusion scenarios, making sure that it would end up the right way in Adobe Target, but also that all of the elements inside Adobe Target were coming back into Workfront in real time, with the full catalog, or a partial catalog if we only wanted them to see certain things. So here you'll see the video running that actually shows it in action, how easy it is to build tests with clicks and dropdowns. But we went through all of the testing components, the different phases of work, and we'd be happy to support you as well. So if there are any questions, we've got two microphones here. We'd be happy to talk, just openly here or even after the session with you. But thank you.
[Music]