Generative AI — how can you trust what you create?
Generative AI solutions are incredibly powerful tools that are trained to interpret data and deliver high-quality, relevant responses to the prompts they are given. However, relevance does not always equate to accuracy. In some cases, a generated response looks acceptable yet contains subtly inaccurate content that could create liability for your brand. So what are some key considerations in creating generative AI that you can trust?
Be diligent about quality data inputs
In the workplace, we see generative AI as a tool that can help drive efficiency and productivity. However, the less "hands on" we plan to be with the tools, the more we must rely on well-trained models to deliver accurate insights based on the knowledge we have provided to them. The challenge is that we all have high expectations for the functionality and accuracy of the technology we use at work. When using generative AI, we expect our enterprise-grade models to have been designed to understand the core aspects of our business and how we communicate. But you can only generate accurate content if you are diligent about the quality of your data and inputs.
As the volume of information added to the models increases, so does the perceived degree of trust we have in their outputs. Structure your data inputs with accurate and rich tagging to improve the tuning and training of your model. By building in pre-configured prompts using templates, you can create the guardrails needed to guide the models and produce a more accurate output. Thus, for enterprise use, how you select, tune, train, and prompt models makes all the difference in your outputs — and your degree of trust in those outputs.
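The idea of a pre-configured prompt template acting as a guardrail can be sketched in a few lines. Everything here — the function name, the template wording, the tag fields — is an illustrative assumption, not part of any specific product:

```python
# A minimal sketch of a pre-configured prompt template used as a guardrail.
# The template pins the model to supplied, tagged facts so it is less
# likely to improvise inaccurate brand content.

TEMPLATE = (
    "You are a brand copywriter for {brand}. "
    "Use only the facts below; if a fact is missing, say so rather than guess.\n"
    "Facts ({topic}): {facts}\n"
    "Task: {task}"
)

def build_prompt(brand: str, topic: str, facts: list[str], task: str) -> str:
    """Fill the fixed template so every request carries the same guardrails."""
    return TEMPLATE.format(
        brand=brand,
        topic=topic,
        facts="; ".join(facts),
        task=task,
    )

prompt = build_prompt(
    brand="ExampleCo",
    topic="returns policy",
    facts=["30-day returns", "free return shipping"],
    task="Write a two-sentence FAQ answer.",
)
```

Because the template text is fixed, every prompt sent to the model carries the same constraints, which is what makes the output more predictable and auditable.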
Thoughtfully consider your model
Different generative models respond differently to the same requests. In some cases, they may even present outputs in different styles and tones. Some have more witty and casual outputs, some are more factual and serious, and some models — like Adobe Firefly — are designed to produce outputs that are commercially safe. For visual and video output, some models are trained on specific licensed content while others leverage more open-source materials, and their outputs differ accordingly, depending on whether the source data was licensed. This means the selection of a model does influence the output and should be thoughtfully considered.
Any model used by marketers must be brand-aware and compliant. To do this, you need to compile, structure, digitize, and scan a range of information that will teach the model all aspects of your brand, so that it properly reflects your values, tone, and voice. By tuning and training, as well as having task-specific prompt frameworks, you can produce your desired outcome more efficiently and reduce the risk of misinformation — leading to a higher degree of trust.
Establish methods to provide feedback
A generative model is a unique technology that is constantly learning based on inputs and on guidance from users. The more you interact with your model and provide feedback on the results you receive, the more precise its output becomes. Establishing methods to provide feedback within the user interface delivers a high degree of value and will make your model more reliable.
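A feedback mechanism like this can be as simple as logging each rating alongside its prompt and output, then pulling the poorly rated responses for review. This is a minimal sketch; the class, field names, and rating scale are all assumptions for illustration:

```python
# A minimal sketch of capturing in-interface user feedback so that
# low-rated responses can be reviewed and fed back into tuning.
from dataclasses import dataclass, field

@dataclass
class FeedbackLog:
    records: list = field(default_factory=list)

    def record(self, prompt: str, output: str, rating: int, note: str = "") -> None:
        """Store a user rating (e.g. 1 = thumbs down, 5 = thumbs up) with context."""
        self.records.append(
            {"prompt": prompt, "output": output, "rating": rating, "note": note}
        )

    def low_rated(self, threshold: int = 2) -> list[dict]:
        """Pull poorly rated responses as candidates for review and retraining."""
        return [r for r in self.records if r["rating"] <= threshold]

log = FeedbackLog()
log.record("Summarize our returns policy", "Returns accepted within 60 days", 1,
           note="Wrong window — policy is 30 days")
log.record("Summarize our returns policy", "Returns accepted within 30 days", 5)
```

Keeping the full prompt/output pair with each rating is the design choice that matters: the low-rated records become labeled examples for the next round of tuning.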
It’s also important to refresh your data regularly. Data does become stale and dated, and if your models aren't tended to, over time the level of accuracy and trust degrades. Build in a governance function to continually refresh models with the most current and accurate information, designating data stewards who can partner with relevant marketing users to deliver the highest level of performance.
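One concrete governance function is an automated staleness check that tells data stewards which sources are overdue for a refresh. The 90-day threshold and source names below are illustrative assumptions:

```python
# A minimal sketch of a governance check that flags stale data sources
# feeding a model, so stewards know what to refresh.
from datetime import date, timedelta

def stale_sources(last_refreshed: dict[str, date],
                  today: date,
                  max_age_days: int = 90) -> list[str]:
    """Return the names of sources whose last refresh exceeds the allowed age."""
    cutoff = today - timedelta(days=max_age_days)
    return sorted(name for name, refreshed in last_refreshed.items()
                  if refreshed < cutoff)

sources = {
    "product_catalog": date(2024, 5, 20),
    "brand_guidelines": date(2023, 12, 1),
}
overdue = stale_sources(sources, today=date(2024, 6, 1))
```

A steward could run a check like this on a schedule and route the flagged sources to the relevant marketing users for refresh.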
The way you use generative AI also affects the degree of trust. Summarizing content while tightly controlling the source data the model uses is less risky, and more accurate, than using generative models that pull from various data sources and compile them together, as content atomization or contextualization models do.
Content atomization allows you to break a longer piece of content into smaller, more strategic pieces. For example, a white paper could be used to generate a few distinct blog posts. Content contextualization tailors existing content to a targeted audience or role. In this case, an existing white paper is adapted to a particular geography or personalized for a specific industry. These types of generative models enable hyper-personalization and more effective targeting — however, because the data sets are larger, pulling in things like personas and geographical information, the transformation of the content may not be fully accurate. These models require human oversight to review the outputs until the model has been tuned and trained for proficiency.
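The two techniques can be sketched side by side. The splitting rule here (blank-line-separated sections) and both function names are illustrative assumptions; real pipelines would use richer document structure and an actual model call:

```python
# A minimal sketch of content atomization and contextualization.
# atomize() splits a long piece into candidate sections, each of which
# could seed a shorter asset such as a blog post; contextualize() wraps
# an atom in a prompt that tailors it to a target audience.

def atomize(white_paper: str) -> list[str]:
    """Split on blank lines and keep non-trivial sections as atoms."""
    sections = [s.strip() for s in white_paper.split("\n\n")]
    return [s for s in sections if len(s.split()) >= 5]

def contextualize(atom: str, audience: str) -> str:
    """Build a prompt that adapts an atom for a specific audience."""
    return f"Rewrite the following for {audience}, keeping all facts unchanged:\n{atom}"

paper = (
    "Intro section with enough words here.\n\n"
    "Tiny.\n\n"
    "Another detailed section with many words."
)
atoms = atomize(paper)
tailored = contextualize(atoms[0], "a healthcare audience")
```

Note that `contextualize` returns a prompt rather than finished copy — consistent with the point above, a human still reviews what the model produces from it.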
Create trust with every personalized interaction
Marketing is transforming with the introduction of generative AI. As more people use generative models in everyday activities, our trust in them improves. And similarly, as we build out brand-aware and well-governed models in the workplace, the efficacy of — and our confidence in — their creative capabilities expands beyond idea generation to enable personalized messaging across digital channels.
Brand trust has always been a hallmark of successful companies. And in this new era, it is essential that brands take the necessary steps to build trustworthy generative AI. If you can establish disciplined approaches and thoughtful execution, you will be able to trust what you create — and create trust with every personalized interaction.
To learn more about IBM’s point of view on building generative AI you can trust, join us at Adobe Summit and register for one of our sessions and roundtables.