Turning Insights to Gold with Federated Audience Composition

[Music] [Richin Jain] Hello, everyone. Welcome to the last session of the day. You all made it.

So let me begin by defining the problem. Marketers today are drowning in data...

Yet they lack insights from it. You have customer interactions coming in from websites, email, ads, and social media, yet converting that raw data into meaningful AI-driven insights seems impossible.

What if you could unify your data in a single data lake and just talk to your data, like chatting with a colleague, get real-time insights out of it, and convert those insights into actions? That's what Microsoft Fabric and our integration with Adobe AEP can make possible. I'm Richin Jain, program manager on the Microsoft Fabric team, and I help ISVs and enterprises build on Fabric to provide value to our joint customers. And I'm joined here by my colleague from Adobe, Abhijit. [Abhijit Ghosh] Hi, I'm Abhijit Ghosh. I'm part of the Real-Time CDP product management team. I help build products that help our customers solve their business problems. So happy to be co-presenting with Richin. Thank you. [Richin Jain] So here's what we're going to talk about: what is Fabric, and why Fabric. Then we're going to look at OneLake, which is the unified data lake powering Fabric.

Then we will look at some of the AI experiences within Microsoft Fabric and how we are democratizing access to GenAI across all personas. And then we will have a demo of how Fabric and AEP work together.

So, before I talk about Fabric: if there was one word that was common across all the sessions at Summit, it was probably AI, right? And this session is no exception. We all agree that AI is transforming the world. There's no doubt about it, right? But AI is only as good as the data it gets to work with. Even with the best models out there, if you put garbage in, you're most likely to get garbage out. That's why it becomes really important for enterprises to get their data estate ready for AI.

Unfortunately, it's much more complex and much more expensive than it needs to be. And nothing represents this better than this slide, which shows the data and AI landscape and how fragmented the whole ecosystem is.

There are hundreds of players across each category, each solving part of the problem.

And so what this translates into is fragmented data, siloed tools, and operational challenges for marketers trying to get insights.

So, at Microsoft, we understand this challenge very deeply. That's why we're bringing everything together, all of our tools, so they work seamlessly, in what we call the Copilot and AI stack. At the bottom of the stack, we provide best-in-class infrastructure for model training. That's where all the LLMs are being trained. On top of that, a best-in-class data platform for you to bring in data that is ready for AI. And on top of that, through Azure AI Foundry, we provide state-of-the-art models from Meta, DeepSeek, Cohere, and such. And then, to make this AI accessible for everyone, we are integrating different tools across the Microsoft ecosystem, like Copilot Studio, Visual Studio, Teams, and M365, so that AI is present, available, and contextual for you.

And we are also converging all of our data products within Microsoft Azure into Microsoft Fabric. So Fabric now becomes the data layer of the Copilot stack.

So before I dive into Fabric, can I just have a quick show of hands of how many people have heard of Fabric or used Fabric? Quite a few. Okay.

How many people here have used Office? Microsoft Office? Everyone, right? So, I like to compare Fabric to Office. Office is a suite of productivity tools, and each of the tools within Office is optimized for a specific task: Word for creating a document, PowerPoint for creating a presentation, and Excel for analyzing a spreadsheet.

But what makes Office powerful is not these individual applications but the common experience they share. So, no matter which tool you use, there's a common interface and common collaboration tools across them.

And at a deeper level, they share a unified architecture, unified storage, and a unified security framework. So, Fabric is the Office for analytics. It works the same way: with Fabric, we are bringing different analytics services from Microsoft together in a unified manner.

Be it data transformation, building a warehouse, or just getting insights using Power BI.

All of these workloads or services are individual Azure services today. They have been around for many years. They are battle-tested. But with Fabric, we have optimized them to work together. They all speak the same language and work with OneLake, so they read and write a common data format.

At the foundation layer, OneLake becomes the storage layer across all the workloads. So it's a common storage bucket for them to read and write from.

And one of the advantages Fabric has, because we are one of the latecomers to this whole data platform space, is that it was born in the GenAI era. So, AI is not just an add-on within Fabric. It's deeply infused and deeply integrated across all Fabric experiences. And that is something you'll see across all the demos that I'll show today.

And Fabric also-- OneLake also integrates with Microsoft Purview, which is Microsoft's governance tool. So, you get lineage and tracking out of the box.

So, with Fabric, what we have done is SaaSify the whole experience. You get value out of the box. There are no VMs to manage, no clusters to take care of, and everything just works. All of these different workloads -- think of them as five different vendors that you would otherwise have to manage -- are all drawing compute from a single compute pool. So, they all speak the same compute unit as well.

We have also opened up the platform, so it's extensible. We have extended it with some of our industry solutions: industry patterns that provide templates for key verticals like retail and finance. But we are also working with some of our key ISVs, like Neo4j and LSEG, to bring their experience in as native Fabric workloads.

So I know it's a lot to go through in a single slide. So here's a quick demo that shows the real power of Fabric in an end-to-end manner. [Man] Let's take a look at how easy it is to get started on a new project in Microsoft Fabric. Fabric brings together every data and analytics workload into a single, seamlessly unified experience. And even better, we've integrated Copilot everywhere to get you started quickly and supercharge your productivity. Opening up a workload, I have tailored getting-started content and can create related artifacts with a single click. But I can also jumpstart a new project using task flows. Task flows provide predefined templates for common data and analytics design patterns to accelerate my work. Here's one for real-time event analytics, and here we have a medallion architecture for structuring data enrichment processes. When I pick a task flow, immediately, it provides a diagram that both guides and organizes me and my team's work. For each part of the task flow, Fabric suggests the types of items we should create. And, of course, I can customize it for my specific needs. For example, let's add a new step for alerting and tracking data. And now let's create a new data warehouse where we'll land our bronze data. All I have to do is give it a name, and I'm instantly navigated to the warehouse experience. Everything is auto-provisioned, and I don't have to set up any clusters, networking, or storage accounts. As a first step, let's bring some data into my warehouse. I can navigate to my pipeline experience in a single click and choose from a wide range of connectors that will bring data in at petabyte scale. I'm going to choose Azure SQL as my source, select all the tables I want to copy, specify the column mappings, and that's it. And now I can start customizing and managing the pipeline. But instead of needing to know how to do this manually, I can use Copilot. Copilot is built into every Fabric experience and lets me use natural language chat to get my work done. Here, let's ask Copilot to schedule this pipeline to run each night at 11 PM. And just like that, Copilot takes care of configuring the pipeline schedule.

Navigating back to the warehouse, I can see my tables have been automatically created for me. My data has been brought in. I can explore the data by filtering it directly in the table view or by writing my own SQL queries. And Copilot is built in here too. Copilot is amazing at writing SQL. With just a comment, I can ask for something like sales for only our red products. And just like that, Copilot has the query for me. Having Copilot seamlessly built in my query editor is a huge productivity boost. And so is having everything in Fabric seamlessly integrated together. With just another click, I can move to the reporting tab and go ahead and build a Power BI report right on top of my data with no extra copies of data, and it's all fully optimized for the best performance. And here, again, I have Copilot built into Power BI so I can get help building my report. Let's get some suggestions on the type of report I might want to build, and I'm going to adjust this product performance suggestion to include a breakdown of orders by category. And just like that, Copilot has got me started with a beautiful report page I can slice and dice and configure for my needs. As you've seen, Fabric seamlessly integrates every data and analytics workload into a truly unified experience. And with Copilot integrated everywhere, it's easier than ever for every data professional to get started quickly and upscale their abilities.

Pretty cool. So you saw how quickly you can get from data all the way to visualization, with deep Copilot integration to help you along the way.

So let's talk about OneLake, which is the unified data lake powering Fabric. As I mentioned, all of the workloads sit on top of OneLake and read and write from it. OneLake is built on the open-source Delta Parquet format, so there's no vendor lock-in. You can even read from OneLake using Databricks, for example.
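For readers who want to try this, here is a minimal sketch of reading a OneLake Delta table from outside Fabric, assuming the `deltalake` (delta-rs) and `azure-identity` Python packages; the workspace, lakehouse, and table names are illustrative, and the exact `storage_options` keys are worth checking against the current docs.

```python
from azure.identity import DefaultAzureCredential
from deltalake import DeltaTable

# OneLake exposes an ADLS Gen2-compatible endpoint, so any Delta Lake
# reader can work with the same one copy of data -- no export needed.
token = DefaultAzureCredential().get_token("https://storage.azure.com/.default").token

table = DeltaTable(
    "abfss://MyWorkspace@onelake.dfs.fabric.microsoft.com/Sales.Lakehouse/Tables/sales",
    storage_options={"bearer_token": token, "use_fabric_endpoint": "true"},
)
print(table.to_pandas().head())
```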

And OneLake allows you to bring your data in from different sources: from different clouds, from on-prem, from S3-compatible sources.

We have two key capabilities within OneLake that allow you to unify your data. One is Shortcuts, and I'm going to cover that in much more detail, but Shortcuts basically allow you to reference your data wherever it sits, without any data movement. And Mirroring allows you to create a replica of your database in Fabric without any ETL on your part.

Through these and other approaches, our vision is for you to unify your data, wherever it sits, in OneLake, and then apply the best analytics tools on top of it.

So since OneLake is the foundation, there are many approaches to get data into OneLake.

Data Factory is the built-in data transformation and data movement tool within Fabric.

APIs -- probably everyone here has used them -- are another mechanism for you to push batch or real-time data into OneLake.
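As an illustration of that API route, here is a hedged sketch of pushing a batch file into OneLake through its ADLS Gen2-compatible endpoint with the `azure-storage-file-datalake` package; the workspace and lakehouse names are placeholders, not from the session.

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# OneLake speaks the ADLS Gen2 API; the "file system" is the workspace.
service = DataLakeServiceClient(
    account_url="https://onelake.dfs.fabric.microsoft.com",
    credential=DefaultAzureCredential(),
)
fs = service.get_file_system_client("MyWorkspace")
file = fs.get_file_client("Sales.Lakehouse/Files/landing/orders.csv")

with open("orders.csv", "rb") as data:
    file.upload_data(data, overwrite=True)  # batch push into OneLake
```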

Using Shortcuts, you can reference your data in other clouds without any data movement. Data Sharing is something specific to Fabric: if there are two Fabric tenants, you can seamlessly share data between them in a secure and reliable manner.

And lastly, with Database Mirroring, you can have an up-to-date replica within Fabric of your operational databases.

So I'm going to focus on the three that were highlighted. Let's first talk about Data Factory. With Data Factory, there are hundreds of connectors available today. You can connect to on-prem sources, other clouds, other data providers -- SAP, Salesforce, and so on -- and bring the data in, landing it in Fabric.

You can apply transformations on the fly as you're moving the data in. Data Factory automatically converts your data from open formats like CSV, JSON, and Avro to Delta Lake, so that when the data lands in OneLake, it is readily available to be consumed by all the other engines that you saw. So as you move it in, you don't have to do any further transformation; Data Factory takes care of that for you.
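Data Factory does this conversion for you; purely for intuition, here is what the equivalent step looks like in a Fabric notebook with PySpark (the paths and table name are illustrative, and `spark` is the session that Fabric notebooks provide):

```python
# Read a raw CSV file that landed in the lakehouse.
df = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("Files/landing/orders.csv")
)

# Writing it as a Delta table is what makes it instantly consumable by
# every Fabric engine (SQL endpoint, Spark, Power BI Direct Lake, ...).
df.write.format("delta").mode("overwrite").saveAsTable("orders")
```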

Data Factory also has AI built in. So beyond what you saw with Copilot, there are AI functions that are part of Data Factory. You can do sentiment analysis, text classification, and text analytics as part of your transformation as you're moving data in. What this means is that you don't have to land data in one format and then transform it again while it's sitting in OneLake; you can do everything on the fly as you move it in.
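The session doesn't show the AI functions' exact API, so as an illustration of the idea -- scoring sentiment while data moves through a pipeline -- here is a hedged sketch using the Azure AI Language SDK directly; the endpoint and key are placeholders.

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<key>"),
)

reviews = [
    "Fiber install was fast and painless.",
    "Support kept me on hold for an hour.",
]
# Each result carries a positive/neutral/negative label plus confidence
# scores, which you could write back as an extra column during ingestion.
for review, result in zip(reviews, client.analyze_sentiment(reviews)):
    print(review, "->", result.sentiment)
```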

We support a low-code/no-code, drag-and-drop authoring experience, so it's very easy to get started. And we also partner very closely with Informatica and Fivetran, other ETL and ELT providers, so even more sources are supported.

Shortcuts, as I mentioned, are basically a virtual link to data that sits outside of Fabric, outside of OneLake. If you have used shortcuts on Windows or Linux, it's the same concept: it makes you think the data is natively there, but you're referencing it from a remote location.

Today you can create shortcuts to Amazon S3 and Google Cloud Storage. And if there's any other file system that exposes S3 APIs on top of it -- the S3 API is an industry-wide API format for file systems -- we support that too.

And you can even create Shortcuts from within Fabric itself. So if there's a Lakehouse in a different workspace that you want to access, you can bring it in using Shortcuts. Rather than creating different copies of data or moving things around, you can just reference it wherever it sits. And why is this important, especially for multi-cloud Shortcuts? Because enterprises have already made huge investments in data lakes outside of Fabric. We acknowledge that fact, but we still want them to bring that data within Fabric. So we let the data sit wherever it is, let it be governed and managed centrally wherever it is, but make it accessible within Fabric so customers can benefit from all the different services that we have.
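Shortcuts are usually created in the UI, but there is also a OneLake shortcuts REST API. A hedged sketch of creating an S3 shortcut programmatically follows; the payload shape is approximate and all IDs are placeholders, so treat it as a starting point and check the current Fabric API docs.

```python
import requests

workspace_id = "<workspace-guid>"
lakehouse_id = "<lakehouse-item-guid>"
token = "<fabric-api-bearer-token>"

payload = {
    "path": "Files",            # where the shortcut appears inside the lakehouse
    "name": "product-catalog",
    "target": {
        "amazonS3": {
            "location": "https://my-bucket.s3.us-west-2.amazonaws.com",
            "subpath": "/catalog",
            "connectionId": "<s3-connection-guid>",  # credentials managed in Fabric
        }
    },
}

resp = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
    f"/items/{lakehouse_id}/shortcuts",
    headers={"Authorization": f"Bearer {token}"},
    json=payload,
)
resp.raise_for_status()  # the data stays in S3; Fabric engines read it in place
```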

And lastly, talking about Mirroring. If you have databases running in Azure or outside of Azure, you can bring them in today without Mirroring by using ETL. You can use Data Factory to move data from source to sink. But then you have an ETL pipeline to manage, compute to manage, a replica to keep up to date, and you need to pay for all of it, including network ingress. With Mirroring, it's all covered: there's zero ETL on your part, and it's free. Microsoft even pays for the storage of your replica up to a certain extent, based on the SKU that you have.

And so there are some databases that are natively supported in Fabric for Mirroring. But we understand that we cannot scale that way, so we have opened up the Mirroring APIs and SDK for any of our partners to connect. We have Oracle, Striim, and others who are integrated today through open Mirroring.

So just to bring everything together: you can bring your data into OneLake through Data Factory, through Shortcuts, through REST APIs, or through Mirroring, then use any of the Fabric engines to transform the data and make it ready for AI, and then use different tools from the Microsoft ecosystem to get insights out of it. To show how all of these things work together to unify everything in OneLake, let me run through a quick demo.

We've seen how every workload in Fabric can work with the same one copy of data when it is in OneLake. But we don't always want to copy data or build and maintain complex ETL processes, especially if we have existing investments we want to leverage. Contoso Outdoors is a classic example of this. If we look at where Contoso Outdoors is storing and managing their data, you can see that they, like a lot of organizations, have data everywhere. In Snowflake, there is the customer loyalty program, Azure Databricks has the sales data, Dataverse has the customer support systems, Cosmos DB has the retail order tracking, and we can get our product inventory and our product catalog from Amazon S3 buckets. With Fabric, we're going to create a new Lakehouse where we can access all of these sources in a single unified location that is always in sync with the underlying systems, with zero ETL. Let's start with our data in Azure Databricks. We can create a shortcut directly to the data since Fabric natively works with the data in Delta Parquet format. I just provide the connection information. And just like that, I can now access the data through OneLake in Fabric with zero data movement. I can do the same thing with Dataverse and Amazon S3. And if I jump ahead, you can see the data from each of the sources here.

And you can see we've created shortcuts to more than just tables of data. We also created shortcuts to the images from our product catalog and to text documentation files for our products. We can also use these for building solutions in Fabric or for other services like Azure AI Studio so they can access the data through OneLake. Next, we're going to bring in our data from Snowflake and Cosmos DB. For these sources, we're going to use Fabric's new database Mirroring capability. Database Mirroring automatically reflects data from Snowflake directly into OneLake and keeps it in sync with every change. This means my customer loyalty program from Snowflake can be accessed seamlessly in Fabric with zero ETL. And it's always kept in sync automatically by Fabric. Once the Mirroring is set up and running, my data is ready to be used across every workload in Fabric. I've also set up Mirroring to Cosmos DB and just like my Snowflake data, it's being reflected into Fabric and kept in sync and ready for me to use.

Let's go back to our Lakehouse. Now I have the data from all five different clouds, integrated in minutes into a single unified Lakehouse in Fabric. With one click, I can switch to the SQL endpoint in Fabric, and I can write a single SQL query that joins data from each of the five different clouds, showing how I can work with all of this data in a unified way. If we switch over to the lineage view, you can see this even more clearly. We have a full view of downstream and upstream lineage, and we can see the Snowflake, Azure Databricks, Dataverse, Amazon S3, and Cosmos DB data all coming together and ready for my organization to use. For example, here we've created a machine learning model to predict demand and inventory levels for the upcoming holiday season. We also have Power BI for reporting on the data. Let's open up this report that shows order status for our loyalty program members. This report is using Power BI's Direct Lake mode, so all the data is always up to date, and I can slice and dice the data with blazing fast performance. Since the data is all coming from one copy in OneLake, it also means we can deploy one security model for the data that flows through all these solutions. We can see the sensitivity label flowing through to every downstream artifact along with any data-level security we apply. And finally, if I switch over to the Purview Hub in Fabric, I can see a complete view of all of our data assets. This helps me understand data sensitivity, endorsement, and usage of all my Fabric items. As you've seen, Microsoft Fabric is a game-changer for enabling organizations to create a unified data estate, spanning all of their data, and supporting every analytics workload in one easy-to-use experience.

Okay. I promise this is the last recorded video. After this, we only have live demos. So, you just saw how OneLake unifies data from different sources without any data movement. You have the data now, but what really matters is how you can get insights out of it, and that comes down to how quickly you can enable your users to apply GenAI on top of that data. In Fabric, we do it through two key experiences. First, Copilots: you saw most of it in the demos; we have Copilot integrated across all the experiences in Fabric. Copilot works great out of the box. It is fine-tuned for the specific task or experience that you are in. So if you're in the data warehouse experience, it will help you write SQL queries, for example. But we also allow you to create custom GenAI on top of your data. You can give a slice of your data to LLM models, similar to RAG-based architectures, and then ask questions on top of it. This screen shows how Copilot is deeply integrated across all the different experiences. A data engineer, for example, can write SQL queries, create data pipelines, write Spark code, and so on. A business user can create reports -- we just saw an example of that -- and can just talk to data in natural language and get insights. And data scientists and developers can create models and do transformations using these Copilot experiences. An important thing to highlight here: we are constantly working to improve these Copilots. So, as new models get released, you get a new Copilot.
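To make the "give it a slice of data, similar to RAG" idea concrete, here is a minimal sketch of the pattern: retrieve the most relevant rows or documents, then ground the model's answer in them. `embed` and `generate` are hypothetical stand-ins for whatever embedding and chat models you use, not Fabric internals.

```python
import numpy as np

def embed(texts: list[str]) -> np.ndarray: ...   # hypothetical: (n, d) unit vectors
def generate(prompt: str) -> str: ...            # hypothetical: model completion

docs = [
    "Churned customers skew toward month-to-month contracts.",
    "Fiber-eligible households renew at higher rates.",
]
doc_vecs = embed(docs)

def answer(question: str, k: int = 1) -> str:
    q = embed([question])[0]
    top = np.argsort(doc_vecs @ q)[-k:]          # cosine similarity on unit vectors
    context = "\n".join(docs[i] for i in top)
    return generate(f"Answer using only this context:\n{context}\n\nQ: {question}")
```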

Okay.

So Copilot takes you all the way there.

What Copilot cannot do is know your business taxonomy, your business terminology, and the custom industry data that you have. That's why we want to make it easy for customers to apply custom LLM-based AI on top of their own data. So I want to switch over to my screen here and show you what we call an AI skill: how easy it is to create an AI skill in Fabric, grounded in a domain of data within OneLake, and then simply start asking questions.

So what I can do here is, I'm in my workspace in Fabric, I go to create a new item, and I say I want to create an AI skill.

So with that, I can just provide it a name.

Just a test tool.

And it will create a skill for me. It will ask me to connect to a data source and give me a chat pane I can use to ask it questions. I've already created a skill here that I want to walk through because, as you can see from the responses, it takes a few seconds for responses to come back. So in the interest of time, I've already run some queries on it. The dataset on the left here is a customer churn dataset. I've selected five tables covering location, demographics, and customer status.

And so, I first started with just asking -- I cannot actually see it -- just asking for details. I didn't know anything about the data, so I just said, give me more details about the data. It came back with the names of the tables, how many rows there are, and sort of the data distribution in there.

Then I asked the next question: what is the percentage of customers that have churned in the last 12 months? As you can see, it didn't get a response back to me; it failed. Which I think is a good outcome, because it didn't hallucinate and just produce some output, right? But then I asked the same question again, right after, and it came back with an answer. Even in that response, you can see that it kind of failed in the beginning and then auto-corrected itself. It came back with 26% of users having churned in the last 12 months. And then I kept asking questions: the average tenure of churned versus retained customers, what type of contract each of those customers had. So you can ask a whole series of questions, and it's actually really good insight. And the reason it takes 20 to 30 seconds to run is that as you type these questions, it takes them, converts them from natural language to SQL using the NL-to-SQL API, figures out which tables you're actually asking about, and runs the query against the database.
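That failed-then-auto-corrected behavior maps onto a simple loop: generate SQL from the question and schema, run it, and feed any error back for another attempt. Here is a hedged sketch of that loop, where `call_llm` and `run_sql` are hypothetical stand-ins rather than the actual AI skill internals:

```python
SCHEMA = "customers(id, tenure, contract_type), status(customer_id, churned, churn_date)"

def call_llm(prompt: str) -> str: ...        # hypothetical: returns a SQL string
def run_sql(query: str) -> list[tuple]: ...  # hypothetical: runs against the warehouse

def ask(question: str, max_retries: int = 2) -> list[tuple]:
    prompt = f"Schema: {SCHEMA}\nWrite T-SQL answering: {question}"
    for _ in range(max_retries + 1):
        sql = call_llm(prompt)
        try:
            return run_sql(sql)              # success: return the rows
        except Exception as err:             # failure: feed the error back and retry
            prompt += f"\nPrevious query failed with: {err}\nFix it."
    raise RuntimeError("Could not produce a working query")

rows = ask("What percentage of customers churned in the last 12 months?")
```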

So you can keep going with all of these questions. And then, lastly, I asked: if I had to create a customer churn model, what would be good fields to use? It gave me a good recommendation of certain fields, with an explanation of why they would be useful for that model. So let's actually go ahead and run a question live. I also have customer reviews as part of the dataset, in which customers explain why they left, why they churned. Let's run this question to classify those reviews into categories.

Okay. So it did fail, and that's one of the things with live demos, always. This query worked right before this demo, so let's run it again. But while it's doing that: one of the other things you can do with AI skills is pass them instructions. You see a single instruction there, but if there's a certain column you want it to consider when running queries, you can provide those instructions. Think of them as the system instructions you pass to ChatGPT, for example; it's the same concept here. You can also give it example queries to help it get results for you, which you provide through the Examples tab here.

And lastly, you can share this AI skill with other users, and you can make sure they have read access, or you can even give them data access, for example. So this really makes it very easy for you to apply GenAI on top of your data.

So let me go back to the deck here.

So insights are only valuable if you can drive actions out of them. And that's where our integration with Adobe AEP comes into play. With this integration, Adobe AEP can seamlessly connect to a Fabric data warehouse, create and refine audiences in Fabric, and then activate those audiences in AEP without any data movement. For that, I would like to invite Abhijit to give more details. [Abhijit Ghosh] Thank you, Richin. This was extremely helpful for me and quite insightful as well.

So I think, as Richin mentioned, we see an explosion of data as it relates to how consumers are engaging with our brand and interacting with different channels, which leads into the whole problem of too much data. How do we make sense of the data? How does it need to be arranged? How can we cut down the digital noise to really get to what matters for my marketing or business use cases and drive tangible action? What we see here at Adobe, and in general, is that organizations are following an approach where they consolidate their data in warehouses, or in a unified data and analytics platform like Microsoft Fabric. There you tap into the capabilities of an enterprise data warehouse: you can consolidate data across your organization, look at trends in historical transaction and interaction data, leverage Power BI, as Richin was showing, to learn more about how that data is behaving, and drive analytical workflows from a data science and modeling perspective. At the same time, we're also seeing adoption of customer data platforms, which help cut through the noise by unifying customer data across multiple sources, and which let you utilize the rich information you get through warehouses in a much more seamless way as you activate these unified customer profiles across different channels for personalization. The benefit is that you can leverage the entire scope of high-value historical data for enhanced segmentation and personalization, and I'm going to quickly show you how seamless and easy it is to do that, and how to make data warehouse datasets accessible and actionable for marketing campaigns.

So just to ground things: Real-Time CDP today provides both federation and streaming capabilities, which add capabilities to meet the demands of our marketing and data engineering teams. First, they provide access to critical datasets that you have out there. They also allow comprehensive support for use cases that require in-the-moment experiences -- where a consumer is walking into, let's say, a store, or interacting with your web property, or you want to deliver a personalized ad on Facebook -- and do that in a way that minimizes data movement and duplication, because we all know it is not easy to move data around. So how do we work with that in a much more seamless fashion, and utilize a single system for experience-driven workflows and activation across channels? With that, I'll switch gears and show you a live demo of how this comes into action. As part of this demo, I'm going to work with a fictitious company called City Signal. I'm going to create an audience of customers who are eligible for Fiber at their home address, and I'm also going to verify that they are existing City Signal mobile customers. This data is residing in Fabric. So I'm going to create a warehouse-native audience, bring it over to Experience Platform to be available for activation, and show you how that's possible with the integration we just spoke of. And then, time permitting, we can jump into how you can use that information to drive real-time engagements as customers of City Signal interact with the brand in a web experience -- how you can set up streaming or Edge audiences that you can leverage for streaming and Edge activation scenarios. So let's dive in. I'm going to quickly switch screens. All right. Now that we're back in Experience Platform, I'm going to quickly show how this integration comes to life and how seamless it is to integrate with Microsoft Fabric's unified data and analytics platform. With federated audience composition -- if you have that SKU add-on available -- what you'll notice is a net-new left nav called Federated Data, and within that, you can go into federated databases to establish a connection with whichever source of data you want to connect to. We added support for Microsoft Fabric, announced in February, which you can see here. So now you can use this to connect to your Microsoft Fabric data platform and then use it for the audience capabilities we're going to get into right now. I'm not going to finish the setup, but I just wanted to show you this screen: this is GA, it is live, and it is available for you as customers to use today. It's not a pre-GA kind of setup. Now I'm going to quickly navigate to the Audiences tab, which we're all pretty familiar with. This is your central store for all audiences within Adobe Experience Platform. You can see all your audiences listed here, and what I can go ahead and do is access the Federated Compositions tab, which is a net-new tab that gets enabled once you have the FAC add-on. That then allows me to use the connection I just established with Microsoft Fabric to build my audiences as I go through this exercise. So let's take a look at how this all comes to life. I'm going to quickly pull up the slide so that we go through it side by side -- I hope that's big enough -- as we go through this exercise. I'm going to go ahead and say, create composition...

And give this composition a name.

Summit 4:00pm Session.

Sorry, I'm not super creative with this.

And I'll select the back-end data model I just connected to, hit Create, and I'm presented with the composition canvas. I can directly go in and say I want to build an audience: click Build Audience and select the schema that was exposed to me through my Fabric connection. You can see here I have these different schemas that I have access to, and in the last exercise, Richin was showing you how he worked through the population, services, location, and status schemas as he was doing that consolidation. So I see that view as well, as a marketing operations person. But I'm not going to use the population ones, because my use case demands that I work with the City Signal set, so I'm going to stick to that for demo purposes. And I'm going to quickly go ahead and say...

"Select the person test," for example, "Tell me who all have a mobile subscription for City Signal." And I can quickly go in here, select, "Give me a custom condition," and you can see the metadata is exposed. The underlying data still sits in your warehouse, and I can use that to create my audiences. So is a mobile subscriber is, in that as a field, select that value to true. If I want to see how many profiles qualify for it, I can do a query push down right in there and it'll give me number of profiles that qualify for it and I can start building my audience dataset from there.

I can go ahead and hit Confirm, or in this case, I want to build in more, so I'm going to add another custom condition -- and you could see that, for a brief moment, 20,000 profiles qualified before I moved on. Now that I've checked that, I need to go from people to household to see which households have Fiber eligibility. So I'm going to utilize the relational links that we're all used to in relational databases to navigate from person to household and check for the "is eligible for Fiber" condition to be true.

Hit Calculate, and it'll give me the result. Confirm -- it meets my audience criteria. That's all I wanted to check. So now I have my audience definition built: is a mobile subscriber, is eligible for Fiber. There are a bunch of other activities, but I don't think we have enough time to go through them all, so I'm going to quickly go ahead and just save this audience.
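Conceptually, the composition pushes a query like the following down to the Fabric SQL endpoint. This is a hedged reconstruction -- the table, column, and join names are guesses based on what the demo showed, not the SQL FAC actually generates -- written here as a runnable `pyodbc` sketch with a placeholder server:

```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=<your-fabric-sql-endpoint>.datawarehouse.fabric.microsoft.com;"
    "DATABASE=CitySignal;Authentication=ActiveDirectoryInteractive;"
)

AUDIENCE_SQL = """
SELECT p.name, p.email               -- only the enrichment fields leave the warehouse
FROM person AS p
JOIN household AS h ON h.household_id = p.household_id
WHERE p.is_mobile_subscriber = 1     -- existing City Signal mobile customer
  AND h.is_eligible_for_fiber = 1    -- home address is Fiber-eligible
"""
for name, email in conn.execute(AUDIENCE_SQL):
    print(name, email)
```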

Give it a name, Summit 4:00pm Audience. And I don't really need to bring the mobile subscription details or Fiber eligibility details into AEP, right? I'm not going to use those for personalization. All I need is the primary person's name and email address so that I can reach out to them in my email communication or wherever I want to send that. So I'm going to quickly go ahead and do that.

Select the Name, select the Email.

And I'm not sure who asked me the question, "Where is the XDM schema created?" That is all happening while we are doing this. As we do that, you can select the namespace -- all your available namespaces are in here. Hit Select, Save, Start. And as soon as you start, it will create this audience for me behind the scenes in AEP, and this audience will be available for me in the audience portal for downstream activation. So if I go into Audiences...

Browse...

And I will see the Summit 4:00pm Audience available for downstream activation. So that's a quick rundown of how the integration works seamlessly and how Microsoft Fabric talks to Adobe Experience Platform. And with that, I would like to bring Richin back on stage so that we can go through the remaining slides. [Richin Jain] Yes. So that brings us to the end of the talk. If there's anything we would like you to take away from this session, it is how Fabric is the unified platform that helps you bring data from different sources together in a unified manner, removing data silos, and helps you unlock insights and apply AI on top of your data. And through some of the capabilities that you saw, Copilot and AI skills, we are really democratizing AI access through Fabric.

And, Abhijit, you want to talk about the details? [Abhijit Ghosh] Yeah. The fourth point -- I'm really excited to see this integration come to life. I just want to add: customers expect, and organizations need to deliver, in-the-moment experiences. I didn't get a chance to show that view in the live demo, but you can use these audience membership details to also drive and influence in-the-moment experiences, not just brand-initiated conversations like we were doing. And then there's this whole sentiment, or feedback, out there that data needs to be addressed holistically, keeping the customer's business priorities in mind and ensuring that it's future-proof. This integration allows you to be future-proof as you work through your CDP investments and your data investments, bringing the best of both worlds together so that you can leverage the platform to its fullest capabilities. [Richin Jain] Yeah. And just quickly, I know the slides are a little out of order, but a call to action: if you want to try Fabric today, we offer a 60-day free trial. No strings attached, no credit card required. Just go to app.fabric.microsoft.com, or that short link, and you are all set for 60 days. You can also try the Fabric Copilot experiences with that trial; it's not limited. We provide a pretty beefy F64 capacity -- that's a $17,000-per-month SKU -- that you can use to see for yourself what Fabric can do for you. And here's some documentation on some of the things I showed around AI skills. And if you have not already, visit our booths and learn more about other Microsoft products.

Thank you. Thank you, all. Thank you. Thank you, Richin.

[Music]
