[Music] [Matthew Coers] Welcome, everybody, to Import Data and Export Results: How to Add Value to Your Analytics Data. My name is Matt Coers. I'm an Expert Solution Consultant here at Adobe.

And today, we're going to talk about how to add value to the data that you're collecting within your analytics systems.

What I'd like to do before we get started is take an informal poll. How many of you have already migrated to Customer Journey Analytics, already deployed? Quite a few. Okay. How many are still on Adobe Analytics? Looks like the rest, or actually some overlap. That's good. And in terms of data collection, how many are already moving over to Web SDK? Okay. Looks like about a third or so. Very good.

I have a few more trickling in. Welcome.

Okay. So here's your warning. This will be a little bit of a technical presentation. I've been told I need to give that warning.

We do have, I think, a lot of great stuff we're going to cover here today, but only people who want to maximize their ROI and value should stay. Some of the side effects that might occur by attending this will be faster implementation times for those of you looking to move into Web SDK, lower level of effort, improved ROI, and faster time-to-value. Now I can't necessarily guarantee promotions and higher pay and respect from your peers, that's probably something you're going to have to handle on your own, but we're going to do our best here.

Okay. So let's talk about what we're going to learn today. There are really four major parts. The first is going over what we generally see as the best maturity path, both in terms of what you do with your analytics and how you add value to your data, as well as implementation: how do you approach an implementation to get to better value in a step-by-step fashion? How do we get there fast, and then take those next logical steps? Then we're going to cover some of the more popular data ingress options, again with an eye toward getting to value quickly and improving value. We're not going to cover all of the ingress options; we're really focusing on what you can do first, and then those next logical steps. In terms of data egress, we're going to talk about how to democratize access to your reports, and how you can connect to the data within Customer Journey Analytics, again with an eye toward improving value. And then at the end, we're going to review very quickly some of the steps required to set up API connections. This is useful for things like setting up machine learning models or connecting with external applications that you want to integrate with Customer Journey Analytics.

Okay, so let's start off by talking about the analytics maturity path. How can you get to value very, very quickly? Generally speaking, what we advocate at Adobe is approaching this from what we call a digital-first approach. And basically, what this means is we're going to use the data that we already have.

Generally speaking, a lot of you are probably coming from Adobe Analytics. I know we have some folks here who've used Google Analytics in the past and maybe are currently using it. And what we can do with Experience Platform and with Customer Journey Analytics is bring these in, either Adobe Analytics or Google Analytics, as a data source. By doing this, we're not having to go back and re-instrument our websites or create something new. We're really just taking what we have already and bringing it into Customer Journey Analytics so that we can start to knock down those digital-first use cases, which we'll go over here in just a moment.

The second stage, once we've brought our digital properties together within Customer Journey Analytics, is to start to bring in profile and lookup datasets. Now what this is good for is to essentially add additional value to the behavioral data that we already have in our analytics systems. So profile datasets, we'll go over this in a moment, but this is going to give us the ability to understand and bring in, let's say, CRM data, things like that that add to the profiles about the users. And then lookup data allows us to extend the dimensionality of data we might have collected. And we'll go over some of those use cases. Once we have our behavioral data, our digital data, we have our profile datasets, we have our lookups where we've extended some of the dimensionality of the data, then we have the basis for creating much more meaningful machine learning algorithms that we can run against our data.

And I know we have some data scientists and BI folks in the crowd here, so you guys aren't going to be intimidated. Generally speaking, this is something that allows us to use the processes and organization we already have in place, but use them more effectively.

From there: when we talk about Customer Journey Analytics, we talk about bringing data from your website, from your call centers, from all of these different sources. But sometimes we might have complications with that, what I kind of euphemistically call organizational friction, aka office politics, right? We might have silos where one group is in charge of the CRM data, another group is in charge of web data, and sometimes not everybody's playing well together. But the idea is that even when we start digital-first, we want to kick off these data projects early, because we know they sometimes take time. So I've put third-party integrations as the fourth step, but understand that this is something you would have started previously; it's probably going to come to fruition once we get a little bit more mature.

And then ideally, we get to our fifth step, which is organizational alignment. This is where, you know, we've not just knocked down those data silos, but because we've now brought in these other data sources, the data, kind of, brings the people along with it. At this point, we have this, sort of, single source of truth, and we're able to actually, you know, action on our data. Everybody's operating off of the same sheet of music.

So we'll go into each of these individually and talk about the use cases we can achieve as we bring each of these on. I mentioned the digital-first strategy earlier, and this is really the core.

Generally speaking, if you're using either Adobe Analytics or Google Analytics, one of the huge benefits of going ahead and bringing this stuff in is not just flowing the new data in on a time-forward basis, but actually being able to do that backfill. Bring your historical data in, so that as we move up the maturity path and start adding additional data features and lookups and profile data, that sort of thing, we're able to apply those retroactively. So we're taking the behavioral data we've already collected, the stuff we might have collected a year or even two years ago, and now we're able to add additional value to it by increasing the dimensionality and understanding more about our users.

A lot of organizations have also historically had a goal of report suite consolidation. I know a lot of the companies in here; I've actually worked with several of you in the past, and maybe you've had a whole bunch of different report suites. I know some organizations have over a thousand; oftentimes, it's hundreds, or certainly dozens.

And bringing that data together into a central location has been prohibitive in the sense that your solution design requirements documents, your SDRs, your variable assignments are oftentimes different from one report suite to the next. And bringing those into alignment for a global report suite within Adobe Analytics is pretty onerous. A lot of times, the development groups that created this stuff to begin with are not around anymore, or they're dispersed. And so that can be a difficult thing to do. When we bring this stuff into Customer Journey Analytics, though, we're able to align those report suites using the XDM mappings. And so we can, kind of, get around some of that difficulty.

And, of course, bringing data into Customer Journey Analytics allows us to enable data curation. We can use features like derived fields, if you guys have seen this. These are ways to add lookup tables and do find-and-replace if you have certain types of data scarring from the past, where maybe marketing campaign codes were collected incorrectly and therefore weren't being classified into the correct marketing channels. These types of things can be fixed now in Customer Journey Analytics, whereas in the past with Adobe Analytics, you might have been a little bit more limited.
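To make the derived-fields idea concrete, here's a rough plain-JavaScript sketch of the kind of find-and-replace plus lookup logic a derived field performs. This is not Adobe's implementation, and every campaign code, prefix, and channel name below is an invented example:

```javascript
// Sketch of what a CJA "derived field" find-and-replace rule plus a
// channel lookup does, expressed as plain JavaScript. All campaign
// codes and channel names here are hypothetical placeholders.

// Find-and-replace rules: malformed legacy prefixes -> corrected ones.
const prefixFixes = {
  "eml-": "em:",  // old email prefix that was collected incorrectly
  "socl-": "so:", // old social prefix
};

// Lookup table: corrected prefix -> marketing channel.
const channelByPrefix = {
  "em:": "Email",
  "so:": "Social",
  "pd:": "Paid Search",
};

function classifyCampaign(code) {
  let fixed = code.toLowerCase();
  for (const [bad, good] of Object.entries(prefixFixes)) {
    if (fixed.startsWith(bad)) {
      fixed = good + fixed.slice(bad.length);
      break;
    }
  }
  const prefix = fixed.slice(0, 3);
  return {
    code: fixed,
    channel: channelByPrefix[prefix] ?? "Unclassified",
  };
}
```

In CJA itself you'd build this as a derived field rule in the data view, no code required; the sketch is just to show the transformation the rule applies at report time.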

And so again, these are ways to add value to the data that you've already collected just by hooking your digital analytics up to Customer Journey Analytics.

Profile and lookups. So I mentioned earlier about bringing in profile data, bringing in lookup data.

Really, this is just another way, as I mentioned, to bring additional dimensionality to data you've already collected. When I talked about classifying campaign codes, in the past we've probably been somewhat limited to thinking in terms of campaign channels. We can certainly do that with lookup tables, but we can do much, much more. We can look at each of our campaigns and know not just what channel it was, but what its purpose was. Was this a branding campaign? Was this a campaign focused on a certain type of offer? And by being able to classify our campaigns, now we have the ability to understand things like, looking at a profile dataset: maybe I know that my customer has a certain title and they're responding to a branding campaign, or they have a certain credit rating and they're responding to certain special types of offers that I'm sending. So this gives us the ability to segment our data in ways that we probably didn't before.

And of course, by doing this, what we're doing is we're increasing our knowledge, not just about the devices that we're connecting and how those devices were moving across our various apps. Now we're actually understanding a lot more about the people that we're measuring. And so this is really moving beyond devices and moving into people.

Another thing that I think is really important, when we start to think about analytics about people rather than devices, is addressing the death of cookies, right? We're much more concerned at this point with understanding who is visiting the site and making sure we come up with those compelling reasons to authenticate, which is always a big challenge. But it's important, so that we're able to understand the people, not just their behavior.

And so basically, I mentioned earlier some of the organizational friction that can happen when we try to bring other data sources in, and we want to solve that. We'll talk more about bringing data in and what some of the options are. But a lot of times, we're going to start off manually bringing this type of data in where we can get it, and then automate those processes later as we're able to bring other parts of the organization along.

Okay, so level three, AI and machine learning models. So this is my hat tip to my BI and data science folks that are in the room.

This is where we can really start to dive deeper into our data. And I like this because, again, we're taking data that we've already collected. We're taking the profile information we already have about our customers, the classifications we've already created for our campaigns or the types of companies we're working with, and now we're starting to develop more meaningful machine learning models on our data. Adobe does have intelligent services that let us run certain types of models, like propensity models or lead scoring, or you can use custom models. As I said, we'll go over how to hook up to our APIs here in a moment. There are actually some other sessions during Summit that go into this a little deeper, but we'll cover it at a high level just to understand what some of those options are. Of course, being able to understand profile data gives us the ability to develop more meaningful user clusters. A lot of times we feel like we intuitively understand how we might want to subdivide our users, but doing this in a machine learning way allows us to remove some of the personal bias we might bring to these kinds of decisions. Market basket analysis gives us the ability to look at past purchases, or the types of media, videos, content, or PDFs people are downloading, and understand what the next logical product is that they should be buying, or the media they should be consuming.
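As one concrete illustration, the simplest form of market basket analysis is just counting which items appear in the same basket. Here's a hedged JavaScript sketch; real models would compute support, confidence, and lift rather than raw counts, and the products and baskets below are entirely made up:

```javascript
// Minimal market-basket sketch: count how often each pair of items is
// purchased together, then recommend the strongest co-occurring item.
// Real implementations use support/confidence/lift; this just counts.

function coOccurrences(baskets) {
  const counts = new Map(); // key "a|b" (sorted) -> co-purchase count
  for (const basket of baskets) {
    const items = [...new Set(basket)].sort();
    for (let i = 0; i < items.length; i++) {
      for (let j = i + 1; j < items.length; j++) {
        const key = items[i] + "|" + items[j];
        counts.set(key, (counts.get(key) ?? 0) + 1);
      }
    }
  }
  return counts;
}

// "Next logical product": the item most often bought alongside `item`.
function nextProduct(baskets, item) {
  let best = null, bestCount = 0;
  for (const [key, count] of coOccurrences(baskets)) {
    const [a, b] = key.split("|");
    const other = a === item ? b : b === item ? a : null;
    if (other !== null && count > bestCount) {
      best = other;
      bestCount = count;
    }
  }
  return best;
}
```

The output of a model like this is exactly the kind of new data feature you'd re-upload into Experience Platform as another dataset for segmentation.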

And, of course, you know, the choice of your model is going to depend on your objectives, but we mentioned propensity models. So Adobe does have propensity modeling in Experience Platform. Most organizations that I talk to also develop these types of models internally.

And I think the choice of which one you use depends on who you are and what you're trying to achieve. Oftentimes, organizations won't have all of the data they want to use in the model within Adobe. In those cases, of course, you'll be using your own custom model.

But also, this allows people who maybe don't have a background in data science to run models on data and get value out of that as well. It can be done in more of a draggy droppy way.

Regression analysis, of course, is going to tell us about correlations in the data. Principal component analysis is where we start to understand, when we see an anomaly or certain types of events, what's behind it. Say we're in a B2B context.

When companies are moving through sales stages, from sales stage two to sales stage three, what are the most important factors we should be looking for in the attributes of those organizations or people when they make those movements? So again, the AI/ML level three is really where we're creating new data features we haven't had before, and then we re-upload those as another data source into Experience Platform, so that we can use the outputs of these models for additional segmentation. Okay. So again, adding value to data we've already collected. Maybe this is data you collected into Adobe Analytics two years ago. Now you're on CJA, and you can give everything a whole new gear.

Okay, level four. So this is the promised third-party integrations. This is what we always talk about, right, when we think about Customer Journey Analytics. We're thinking about the ability to bring in our CRM data. We're bringing in our point of sale data. We're bringing in our call center data. And now we're actually able to track and understand all of the touchpoints that our users have, you know, with our organization, regardless of whether it's online or offline.

And, of course, all of the things we just talked about, we just, kind of, keep cycling through this, right? So now that we have these additional data sources, we're able to develop even more meaningful machine learning models. We're able to classify the data that we bring in from these third parties in ways that we might not have been able to before. And making it so that it's all available at the person level for analysis for any of the users within the organization.

Which brings us to level five, organizational alignment. So as I said earlier, bringing the data along also brings the people along, ideally. Because now we have this single source of truth, we can hopefully break down not just the data silos, but some of the organizational silos as well. We're democratizing access to all of this behavioral data, the profile data, and the ability to activate the data across the organization. And this hopefully frees up some of our data science and BI folks, who would frankly probably rather not be doing all of this data engineering for marketing or other groups, to work on the higher-level data science work they'd rather be doing anyway.

Okay, so that's, kind of, the five levels of maturity as I, kind of, see it in terms of the organizations that I work with and what they're most successful with.

I oftentimes get a lot of questions about implementations. This could obviously be its own whole session. I'm going to cover a few key points here, and then we're going to deal with some of the specifics about Web SDK and so on in future slides.

But, you know, when I talk about the digital-first approach, one of the, kind of, key points I was making is we want to start with where we are, right? If you're currently using AppMeasurement, you haven't switched to Web SDK, keep using it. You know, just get rolling. Again, digital-first start with where you are.

Now ultimately, you are going to want to move to Web SDK, and there are several reasons. Right now, the most compelling reason, if you're just starting out, is going to be the first-party ID service. If you're dealing with cookie deprecation, first-party ID is going to be how you get more accurate unique visitor counts. You're going to start to bring down some of the ECID pollution that makes it look like there are too many unique users because of cookie deprecation. So Web SDK: I'd say where necessary at first, but ultimately, this is something you're going to want to migrate to. And anytime you're doing any kind of new feature development on your website or your apps, that's the opportunity to go ahead and implement Web SDK and bring in the additional data you need.

And using this process over time, you're going to phase out those legacy implementations. I'd say another really compelling thing you'll need to address, and we'll cover this here in a moment, is if you have a very, very legacy implementation that's setting eVars and props manually. You'll want to clean that stuff up as well.

Okay. Data ingress.

There are a ton of data sources that we offer here, and there's no way that we could go through every single one of them in detail. We're not even going to try. But there are, of course, a few that are going to be the ones that most people are using, especially at the beginning to get the value quickly.

That's obviously going to be the Adobe applications.

If you're using Adobe Analytics, if you're using CDP, AJO, that kind of thing. Yes.

You like it? Yeah. That actually went over pretty well.

I'd say we see a lot of folks using Cloud sources. This is situations where you might be already dropping data into Azure Blobs or S3 buckets. This is where you can just go ahead and pick that stuff up. We'll talk about some of those capabilities here in a moment. Obviously, connecting to databases and our streaming SDKs, Web SDK, mobile SDK. We also have a server-to-server SDK. Yes.

You know, if you guys could just kind of pick up the applause, it helps. It helps me. You know? I think that most of the use cases we see are covered by the out-of-the-box connectors that we have.

But in situations where maybe you don't have an out-of-the-box connector, or you have an internal data warehouse you need to connect to, we do have generic API connectors and, of course, the cloud storage. So if you have internal services you can connect to, or that you could create services for, those are additional ways. There's always a way to do it. It's just a question of whether it's a single hop, or maybe you have to store the data someplace temporarily.

Okay. So let's run through a couple of these. We talked about continuing with what you have. If you're on AppMeasurement now, or even using Web SDK, you're bringing data into Adobe Analytics right now.

We may want to bring that data over to Customer Journey Analytics directly from Adobe Analytics. And that's where we're going to be using the analytics data connector. Now the thing I really like about this is it's a way to get going very, very quickly. Each report suite that you bring in, it's really just three primary steps that you go through. The first is you select the report suite.

Step two is you map the data. You might have multiple report suites. Maybe in one report suite, the page name is eVar 1. In another report suite, page name is eVar 7. It's not a big deal, because as you bring these over, for each report suite you just map eVar 1 to page name, go to the next report suite, eVar 7, and so on. Then you name that connector and go on to the next one. So while it can seem like a lot of work to go through these, you might have several dozen eVars and props and events per report suite, it really is mostly a repetitive process. When I work with customers, generally speaking, what I do before I even touch any of this is create a spreadsheet. I go through and say, "Okay, I'm going to make my XDM fields." Most of the XDM fields you'll need are already there, but you can customize them. In my far-left column, I'll have the path to the XDM variable that I want to map to. And then I'll have report suite 1, report suite 2, report suite 3. So let's say, for the page name variable, I've got eVar 1 for report suite 1, eVar 7 for report suite 2, eVar 14 for report suite 3. I just do this in a spreadsheet, so that by the time I come into the analytics data connectors and connect this up, it's really just a very easy mapping exercise. And we do the same thing, which I'll show you later on, when we do a Web SDK migration as well.
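That mapping spreadsheet can be sketched as a simple data structure: one row per target XDM path, one column per report suite. The report suite names, eVar assignments, and XDM paths below are hypothetical placeholders for your own SDR, not values the session prescribes:

```javascript
// The mapping spreadsheet described above, as a JavaScript object:
// rows are XDM paths, columns are report suites. All names and eVar
// numbers here are hypothetical examples.
const xdmMapping = {
  "web.webPageDetails.name": {
    reportSuite1: "eVar1",
    reportSuite2: "eVar7",
    reportSuite3: "eVar14",
  },
  "marketing.trackingCode": {
    reportSuite1: "eVar2",
    reportSuite2: "eVar2",
    reportSuite3: "eVar9",
  },
};

// For a given report suite, which Analytics variable feeds each XDM path?
function mappingFor(suite) {
  const rows = {};
  for (const [xdmPath, columns] of Object.entries(xdmMapping)) {
    rows[xdmPath] = columns[suite] ?? null; // null = not collected there
  }
  return rows;
}
```

Rows that come back `null` for a suite are exactly the "doesn't collect that data" cases to flag in your highlight colors.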

Now when you do this, what's going to happen is within a few hours, your new data is going to start streaming in. And then over the course of the next few days, it will do its backfill. Now by default, when you set up a Customer Journey Analytics account, the backfill will be 13 months from Adobe Analytics. But that is configurable. So if you want more back data, more historical data, that is an option.

Another question I oftentimes get is, "Okay, well, that's great, except not only are my eVars and props different between my report suites, but I'm just collecting different data into my different report suites." Maybe one site actually has form data coming in, and another site doesn't even have forms. And that's fine. You start off where you have points of agreement. If you have sites that just don't have those features, obviously, you're not going to collect that data. So you have to make a judgment call, right? And that's really the point of the spreadsheet. A lot of times, I'll highlight the rows where there's agreement, highlight the rows where there's not going to be agreement and I don't care, and then highlight with another color the rows where maybe we need to go back and fix something or make some adjustments. The idea is to make as few adjustments as we need to.

Okay. I don't want any mass hysteria. I know nobody's going to cheer for this. But the concept of manual uploads, this is never a very popular subject. I think it's a really good way to do a couple of things, though. In the first place, get started. It may take some time to automate the process of bringing data from a data warehouse, or maybe you have to get agreement from another group within your company to start automating data out of your CRM system. But you might be able to export some of this data into a flat file to get some of these new data features into Customer Journey Analytics. We want to do this for a couple of reasons, right? Number one, we want to get the value as quickly as possible. Number two, we want to see whether the data has value. If we bring it in here, are people going to use it? What's the point of doing this big data project if nobody's going to use the data to begin with? So it's a good way to try it out, and a good way to get started while you're going through your more formal integration projects.

Data landing zone. Okay, so we talked about all the various cloud locations and so on. For some organizations, it might be difficult to get cloud storage set up for these purposes. In those situations, all AEP customers have access to the data landing zone. This actually straddles both data ingress and egress: any data you put in here can be used for either importing or exporting, and it'll have a time to live of seven days. Now you can manage this. This is just Azure Blob Storage on the back end, and you can manage it however you normally would manage your Azure Blob Storage, whether that's the web interface, Azure Storage Explorer, the command line, or APIs, that kind of thing.

Okay. SDK implementations for streaming data.

This is where I'd like to bring down some of the concerns some organizations have. I know I've talked to several of you, as people were filtering in, about where you are in terms of Web SDK implementations. There are actually three different SDKs, as I mentioned earlier: Web SDK, Mobile SDK, and Server-to-Server. The use cases stretch out accordingly. Mobile SDK is probably the most obvious, and Web SDK is very obvious. Server-to-Server can be used in situations where you have regulatory concerns, you need to stream data in, or you're having trouble getting client-side access to data. So there are reasons why you would use server-to-server. For right now, we'll focus mostly on Web SDK and how you can get to implementations very, very quickly.

Now I'll say, for most implementations, we want to focus on implementation via configuration rather than going back and reinstrumenting your applications.

You probably are going to have to do application changes, but to the extent that you're doing them, we want to make them as minimal as we can. Most of you guys probably already have a data layer. So again, we don't want to do application changes that require us to make new data layers if you already have one. And generally speaking, you're not going to need to touch it.

So basically, what we're going to be doing is mapping the data layer you already have, right? You're already collecting data, probably into Adobe Analytics or Google Analytics. And if you're using the Google data layer, we can actually slurp that up and use it as well. Either way, as long as you have a data layer, we can probably use it.

There are some situations where you might need to make adjustments to a data layer. This first bullet point here on the bottom section is an example of that. If you have single page applications, and this is true whether you're using Adobe Analytics or Customer Journey Analytics, you're going to want to use an event-driven data layer for those in order to more accurately track screen views and things like that. So that's a situation where just technically you'll need to make some changes.
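For reference, an event-driven data layer in the style of the Adobe Client Data Layer looks roughly like this: instead of a static object read once at page load, the SPA pushes an event on every virtual screen change. The event and property names here are illustrative, not a required schema:

```javascript
// Event-driven data layer pattern for a single-page app, in the style
// of the Adobe Client Data Layer. In a browser this array lives on
// `window`; `globalThis` is used so the sketch also runs under Node.
const root = globalThis;
root.adobeDataLayer = root.adobeDataLayer || [];

// The SPA router calls this on every route/screen change, so tag rules
// can fire per virtual screen view instead of per full page load.
function trackScreenView(screenName) {
  root.adobeDataLayer.push({
    event: "screenView",          // illustrative event name
    page: { name: screenName },   // illustrative payload shape
  });
}

trackScreenView("home");
trackScreenView("product-detail");
```

In tags, a rule listening for the pushed event would then read the payload and send the hit, which is what makes SPA screen views trackable without a reload.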

As I mentioned briefly earlier, if you're hard-coding eVars and props in your page code, which you shouldn't be doing, but if you are, this is your opportunity to fix that and create a data layer.

If you're integrating with other Adobe applications, AJO is an example, or Target, this is where you're going to want to take advantage of edge data collection, getting those classifications into segments that might be activated, things like that. In those situations, again, this isn't really an analytics-specific thing, but if we're integrating with other Adobe products, Web SDK is going to be the way you want to go.

And first-party ID service. So cookie deprecation is a big deal. And we have-- I know I've talked to several of the people in this room about first-party ID, creating a server-side cookie and using that cookie as a UUID to seed Adobe's ECIDs. If you don't know what I'm talking about, that's okay. We can talk about that off stage or whatever. But first-party ID is going to be a reason why you're going to want to implement Web SDK to deal with cookie deprecation.
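As a sketch of what this looks like in practice, the Web SDK `configure` command is where you point collection at a first-party (CNAME'd) domain so the ID cookie is set in a first-party context. The datastream ID, org ID, and domain below are placeholders, not real values:

```javascript
// Minimal Web SDK configuration sketch for first-party collection.
// alloy("configure", ...) is the standalone Web SDK command; all
// values below are placeholders you'd replace with your own.
alloy("configure", {
  datastreamId: "YOUR-DATASTREAM-ID",
  orgId: "YOUR-ORG-ID@AdobeOrg",
  // A CNAME on your own domain, so the ECID cookie is first-party:
  edgeDomain: "data.example.com",
});
```

The CNAME itself is set up in DNS and in the datastream configuration; this fragment only tells the SDK which first-party domain to send events through.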

So just to, kind of, hammer this point home, you know, application changes are expensive, whereas configuration changes within tags are going to be relatively cheap. We probably are never going to get away from doing any application changes, but we want to keep those changes minimal.

Okay. So hopefully, you guys can see this. I'm going to go through this very quickly, hopefully to dispel some of the concern about doing Web SDK implementations.

Because in this case, what I have is a website that a colleague of mine, Gentry Lynn, created to demonstrate Web SDK implementations. And there is going to be a little bit of code that we're going to go through here. But I think it'll be understandable even for non-technical folks.

So you'll see here that we have a product page. And this product page is selling these gorgeous white T-shirts.

Hopefully, the applause will go off again.

And what I'm going to do is show you how we can change the mappings of something without having to re-instrument the site, so that we can start to collect the data through Web SDK rather than the legacy AppMeasurement method.

So you'll see that what I've highlighted here with the red box is the data layer. In the console, in the JavaScript data layer, you can see we have the digitalData object, then page, and then productID is the variable that's holding the SKU, okay? And I've written here at the bottom, the JS path is digitalData.page.productID. So just hold that in your brain for a second. That's the thing we're going to be mapping.
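In other words, the data layer in the screenshot is just a plain JavaScript object the site already maintains, roughly like this (the SKU value here is invented for illustration):

```javascript
// Roughly what the data layer on the demo page contains: a plain
// JavaScript object the site already sets. The SKU is made up.
const digitalData = {
  page: {
    productID: "WT-1001", // the SKU for the white T-shirt
  },
};

// The "JS path" referenced on the slide:
const sku = digitalData.page.productID;
```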

The way we would do this with AppMeasurement is we would create a data element and name it. In this case, this is named Product ID. Name it whatever you want. And then we just map it to that JavaScript path I just showed you, digitalData.page.productID. This is how we would create data elements. And then, using the AppMeasurement module, we would map this into whatever eVar or prop, or into an event if we needed to.

And so in this case, I think, I had it going to eVar 6. So we're mapping eVar 6 to product ID data element. Hopefully, that's clear enough. And so this is the way we've done it with AppMeasurement.

Okay, here's the big scary Web SDK thing we're going to do. Normally, we would take that, load the variables, and send it. The big scary thing we're going to do is take that same data element that's accessing that same object, right? We're not changing any of that. All we're doing is mapping it into the XDM schema.

That's the big change. All right. So it's really just reusing things we already have. Again, implementation via configuration as much as possible, and then you just go through this. If you have that spreadsheet I mentioned, this is the next use of it. We know the variables we want to collect, we know where they go in the schema, and then we just go through each of our properties, remap it, and then instead of using AppMeasurement to send the data, we're using the Web SDK extension in tags to send the data.
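Conceptually, the remap means the same data-layer value that used to feed an eVar now fills an XDM field instead. Here's a hedged sketch: `productListItems` with `SKU` is a standard XDM commerce structure, but the event type, helper function, and data-layer shape are illustrative, not the session's exact configuration:

```javascript
// Conceptual sketch of the remap: the same data-layer value that used
// to feed eVar 6 now fills an XDM field. `productListItems`/`SKU` is
// a standard XDM commerce structure; the rest is illustrative.
function buildXdm(digitalData) {
  return {
    eventType: "commerce.productViews",
    productListItems: [
      { SKU: digitalData.page.productID },
    ],
  };
}

// With Web SDK, the payload would then be sent via something like:
//   alloy("sendEvent", { xdm: buildXdm(digitalData) });
```

In practice the tags UI does this mapping declaratively; the function above just shows where the value lands in the schema.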

Okay.

Datasets, connections, and views. One of my colleagues, Nils Engel, is doing a session on this later today. If you are not in that session, I highly recommend it. Just watch the recording later. He's brilliant. He covers this in much greater detail. I'm going to go through this very quickly right now.

As we've been talking about, we're bringing in all of these different datasets. You know, we have our web data. Maybe it's Web SDK, maybe it's AppMeasurement, but regardless, it ends up in a dataset within Experience Platform.

We talked about adding our lookups and our profile datasets. So, you know, we have our customer lookup here. We've done our integrations at this point. We've gotten to level four. We have integrations with our CRM system or our call centers. We've done our machine learning modeling, and so now we have our propensity scoring or our clusters or whatever we've created from that. So we have all of these datasets.

Now what happens when we bring data into Platform and into Customer Journey Analytics is that we create a connection that joins that data at the person level using some sort of a person ID. There are several different methods of doing that. Most organizations use field-based stitching to have, kind of, a person ID, as well as the ECID that gets created from their first-party ID service. From that connection, we create multiple views. And I've actually talked to, I think, several people here about views in the past. This is where we can customize the presentation layer of the data, the reporting.
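The person-level join a connection performs can be pictured with a small pandas sketch. The field names here are made up; the idea is just that a shared person ID stitches the web events to a CRM lookup:

```python
import pandas as pd

# Web event data, keyed by a person ID (e.g. via field-based stitching)
events = pd.DataFrame({
    "personID": ["p1", "p1", "p2"],
    "page": ["home", "product", "home"],
})

# CRM lookup dataset, keyed by the same person ID
crm = pd.DataFrame({
    "personID": ["p1", "p2"],
    "loyaltyTier": ["gold", "silver"],
})

# The connection joins everything at the person level
joined = events.merge(crm, on="personID", how="left")
```

Every event row picks up the person's CRM attributes, which is what lets you report on web behavior by loyalty tier, propensity score, and so on.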

Views, you can create as many as you want. These do not change the underlying data, and we don't charge extra money for creating new views. We do charge based on the connection, though. So the reason I show one connection here is we want to have one connection that has our data, and then create the views off of that, in order to be more efficient with budget.

But we're going to breeze past this. Again, go see or watch Nils' presentation for a lot more on data views and connections.

Okay. Data egress. Well, just like with data ingress, there's a ton of options. If you go through the Sources tab within Experience Platform, you'll see the full library of data destinations, as well as sources, that you can work with.

Again, a lot of times, these are going to be going to Cloud destinations because you're going to be picking it up from another system or accessing directly.

One of the most obvious ways of exporting data is just to share your reports. I think we were talking about this earlier, that having the ability to share with other people within your organization is pretty critical. There's also the ability to share with people outside your organization. Now this is something that can be governed. It's not that you're going to give everybody the ability to share everything initially, but it does give you the option. Let's say you have partners that need to see certain data. You might create views that restrict down to just what they need to see, and then share with them without giving them access to your analytics systems.

And, of course, you can export this data just like with Adobe Analytics. You can export it as a PDF or CSV, and you can schedule your exports on whatever cadence you need.

Another one that is, I think, a really awesome flex, if you're a Real-Time Customer Data Platform (RT-CDP) customer: most of the visualizations within Customer Journey Analytics allow you to export audiences. So this is really excellent for doing things like audience discovery. I'm going through, I have some fallout analysis that I've performed here. I notice there's a sharp drop between step two and step three. I want to isolate those users that did not complete that step, and I want to nurture them back, right? And so now I can create audiences and export them to the CDP. Again, we're dropping a lot of that, sort of, organizational friction. One thing I want to, kind of, flex on here, though, is the ability to do this with more than just your digital data. There might be situations where you have users that are going from online to offline. Maybe they're scheduling appointments. We've had several use cases like this, where they might schedule an appointment online, but then fail to show up in the store, or wherever the appointment was. You know, what do we do in terms of nurturing that? Well, because we have visibility into both datasets, we can see that you signed up here, but you never made it into the dealership to take the car for a test drive. Now we can know that that occurred, and we can nurture you to come back into the dealership, that kind of thing.

Full-table export. So this is a new feature in Customer Journey Analytics. If, in the past, you've used Data Warehouse, a lot of people have been asking, what are you guys going to do? When are you going to have Data Warehouse extracts available for CJA? The good news is we're not going to do that.

We've got full-table export, which is way better.

So if you think about the way that Data Warehouse extracts worked, or still work, you can export the data that you collected, and it'll be augmented with some of the processing that Adobe Analytics does, but you're not able to export some of the additional data features you might have created. As an example, you won't be able to include your calculated metrics and things like that. Well, with Customer Journey Analytics, full-table export gets rid of those limitations. You're not having to go to a separate screen, a separate system, create this separate export, and go back and forth trying to remember what you need to put in there, because you literally just do this directly from the interface. You make the table that you want, and you do your full-table export. You can actually add up to, I think, five dimensions. So just like with Data Warehouse extracts, you can have multiple dimensions.

And this can be used for a number of different things. Obviously, you can export the table itself, and you can also bring your derived fields. For those who have seen or used derived fields, you know that they add a lot of value to the data you've already collected. You can have lookups. And as I was explaining before, if you have situations where data was collected incorrectly or inconsistently, I've seen use cases where maybe job titles were collected, and some people type in their title as senior VP while others use SVP. So how do you normalize that, right? Being able to use things like derived dimensions to curate your data and make it more consistent means that when you do this export, you're not just exporting the raw garbage, you're actually exporting data that's already been fixed to a certain degree.
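The job-title example can be sketched in Python. This is only an illustration of the kind of normalization a derived field's rules encode in CJA; the lookup table and titles are invented:

```python
import re

# Hypothetical normalization table, the sort of rules you'd express
# as a derived field's case-when logic
TITLE_MAP = {
    "senior vp": "SVP",
    "senior vice president": "SVP",
    "svp": "SVP",
}

def normalize_title(raw: str) -> str:
    """Collapse punctuation and whitespace, lowercase, then map to a canonical form."""
    key = re.sub(r"[.\s]+", " ", raw.strip().lower()).strip()
    # Fall back to the cleaned original if no rule matches
    return TITLE_MAP.get(key, raw.strip())
```

Doing this curation inside the tool means the full-table export already carries the cleaned values instead of the raw variants.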

And, of course, just like the other forms of export here, full-table export is going to respect the privacy labels that you've put on your data. So, you know, especially in larger organizations, like most of yours are, not everybody needs to know what the rules are, right? If you're trying to export something that shouldn't be exported because of privacy policies you have on the data, it won't let you make those types of mistakes.

Okay. Data feeds. Data feeds has always been a feature of Adobe Analytics that was probably used as much as, if not more than, Data Warehouse.

This is the way that you can export the raw data that's collected into Adobe Analytics. Well, Customer Journey Analytics answers that with dataset export. What this allows you to do is not just export the event datasets, but also export your profile datasets. And so if you have your CDP, you have data coming in, you're updating your profiles, you can do a full export of all of the data, and then incrementally update the changes in your export over a period of time. Of course, scheduling, and all of the data usage labeling and enforcement, so you're not violating any of your policies, that sort of thing. All of that is supported.

Okay. So setting up an API connection. Now I warned you at the beginning that this is a technical presentation. You are going to see some code. Don't get concerned. I have at the end of this presentation, I'm going to have a link to my GitHub where you can download all the code that I'm going to show you. It's not going to be that much code anyway, but you can download it.

I have some examples of a couple different types of machine learning models that you can run against data within Customer Journey Analytics. Those are there, and I have how-to videos going step by step through everything I'm going to show you here. So you don't have to remember this. I'm going to cover it, and then you can go to the GitHub, or pass my GitHub link to colleagues who might be interested, and you'll be able to get to it separately. So there are essentially five steps for setting up an API connection.

The first is we're going to set up an API endpoint.

Then by setting up the API endpoint, it's going to give us our OAuth server-to-server credentials that we're going to use within our applications.

We're going to use Customer Journey Analytics to actually create our query.

And then we're going to use the reporting API to execute the query and bring the data back into our application. The case I'm going to show you is in Python, so we're going to dump that into a pandas DataFrame.
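Those steps might look something like the following Python sketch. It assumes the requests and pandas libraries, the IMS token endpoint used by OAuth server-to-server credentials, and a rows/value/data response shape from the reporting API; treat the URL and field names as assumptions to verify against the Adobe developer documentation and your own debugger output.

```python
import requests
import pandas as pd

# Assumed IMS endpoint for OAuth server-to-server token exchange
IMS_TOKEN_URL = "https://ims-na1.adobelogin.com/ims/token/v3"

def get_access_token(client_id: str, client_secret: str, scopes: str) -> str:
    """Exchange OAuth server-to-server credentials for a bearer token."""
    resp = requests.post(IMS_TOKEN_URL, data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": scopes,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def report_to_dataframe(report_json: dict) -> pd.DataFrame:
    """Flatten the reporting API's row structure into a pandas DataFrame."""
    rows = report_json.get("rows", [])
    return pd.DataFrame(
        [{"value": r["value"], **dict(enumerate(r["data"]))} for r in rows]
    )
```

The token goes into an Authorization header on the reporting call, along with your API key (client ID), and the response drops straight into the DataFrame helper.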

And then we're going to be done. So very quickly, just to, kind of, run through this process, creating an API project, you're just going to go to developer.adobe.com/console and there's going to be this button here, Create new project.

From there, you're going to add two APIs to your project. You're going to add the Experience Platform API, and, of course, Customer Journey Analytics API.

And then you're going to specify your authentication method. Now in the past, if you have set up API connections with Adobe, you probably would have used the JWT authentication method. That is now deprecated, so use OAuth. If you try to choose JWT, it should give you a prompt like this telling you not to.

And from there, it will show you your credentials. And so you're going to have several credentials here. The client ID is going to be your API key. It'll give you the various fields that you need to fill in the code.

The trick, and kind of the cheat sheet I'm going to give you, a little trick that you can know from now on, is that if you enable debugging within Customer Journey Analytics, and you get to that by going under the Help menu and enabling debugging, you will see in the upper right-hand corner, I should have highlighted it, but there's a little bug icon at the top of every visualization within Customer Journey Analytics. If you click on that, it's going to show you the JSON query that was used to create that visualization. All of the visualizations within CJA are created, kind of, API first, and the idea here is you don't want to have to try to write that JSON query yourself, right? I wouldn't know how to do it either. But by doing this, you get this, sort of, cheat sheet that says, "Well, okay, this is the visualization I have. I want to recreate this programmatically. Here's the JSON query to do it." And then, if you want to pass in fields through your application to change out dates or whatever, you can do that programmatically. But that gets you 95% of the way to the query that you need to execute.

And then this is just a quick code example. On the left is a very simple application that executes against the reporting API. On the right is that JSON query I mentioned. What I do in my application is put the JSON query in its own file and import it into my application. That way, I can swap it out programmatically if I want to have the same app run different queries against CJA.
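That swap-out can be as simple as loading the JSON file and overriding a field before sending it. The query below is a pared-down, hypothetical stand-in for what the debugger gives you (real queries carry many more fields), and the globalFilters/dateRange shape is an assumption to check against your own debugger output:

```python
import json

# Pared-down stand-in for a JSON query copied from the CJA debugger
query_text = """
{
  "dataId": "my-dataview-id",
  "globalFilters": [
    {"type": "dateRange",
     "dateRange": "2024-01-01T00:00:00.0/2024-01-31T00:00:00.0"}
  ]
}
"""

def set_date_range(query: dict, start: str, end: str) -> dict:
    """Swap the date range before handing the query to the reporting API."""
    for f in query.get("globalFilters", []):
        if f.get("type") == "dateRange":
            f["dateRange"] = f"{start}/{end}"
    return query

query = set_date_range(json.loads(query_text),
                       "2024-02-01T00:00:00.0", "2024-03-01T00:00:00.0")
```

In practice the query would live in its own .json file loaded with json.load, so the same application can run different queries without code changes.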

So hopefully, that's clear enough. But again, you don't need to remember any of that. I'm going to give you my link to my GitHub, and you can just download all of this stuff and go through it and watch the video, and you'll see me actually executing all of this in real-time.

Okay. So key takeaways from all of this.

If I could just, kind of, impress upon you four things to remember, start digital-first, right? Start with where you're at. If you're using AppMeasurement, keep using it. Do your Web SDK implementations over time, and do that via configuration as much as possible rather than reinstrumenting. Generally speaking, you don't need to create new data layers or anything like that. You can probably use mostly what you already have.

Don't over-automate. There may be certain data features you know you want. Other stuff, you might want to try out and see if it's useful. It might take time to automate some things, so starting manually allows you to get value from that data faster, and then fill in with automation as you're able to bring those systems online. And just remember to continually think about how you can add new value to the data you've already collected, right? Using lookups, dimensionalizing the dimensions we've already created, bringing in our profile data, and adding machine learning models to create new data features out of the data we already have about our customers.

Okay. So this is my GitHub, if you'd like to take a picture of that, either to use yourselves or for your colleagues. I'll, kind of, leave that up. [Music]

In-person on-demand session

Import Data and Export Results: Adding Value to Analytics Data - S112


ABOUT THE SESSION

Siloed customer data is a common business challenge. Digital analytics is often used separately from other data sources like call centers and point-of-sale systems. Adobe Customer Journey Analytics (CJA) breaks down barriers by integrating data into one system for a single view of the customer and accelerated time to insight. Learn about integrating CJA with external data systems, developing custom machine learning models, and connecting BI tools to production data in the Adobe Experience Platform.

Join us to explore:

  • Various methods of data import and export
  • Use cases for data integrations
  • How to plan an analytics maturity path

Track: Analytics

Presentation Style: Tips and tricks

Audience Type: Digital analyst, Data scientist, Marketing analyst, Data practitioner, IT professional, Marketing technologist

Technical Level: Intermediate, Advanced

This content is copyrighted by Adobe Inc. Any recording and posting of this content is strictly prohibited.

