Why public GPTs fail at structured enterprise content — and what to do instead.

Saibal Bhattacharjee

08-20-2025


Many organisations are embracing generative AI to accelerate content creation. Tools like ChatGPT, Gemini, Claude and LLaMA offer speed and linguistic fluency. However, when applied to structured or compliance-sensitive content, they often introduce serious risks.

The issue isn’t their writing ability — it’s their inability to follow the structured rules essential to enterprise content operations. Public GPTs aren’t trained to honour your schema, metadata logic or publishing framework. Instead, they replicate patterns from public data, and while these patterns may appear correct, they can fail validation, localisation or compliance checks.

Gartner predicts that “at least 30% of generative AI (GenAI) projects will be abandoned after proof of concept by the end of 2025, due to poor data quality, inadequate risk controls, escalating costs or unclear business value”.

Rita Sallam, distinguished VP analyst at Gartner, adds: “After last year's hype, executives are impatient to see returns on GenAI investments, yet organisations are struggling to prove and realise value. As the scope of initiatives widens, the financial burden of developing and deploying GenAI models is increasingly felt.”

These challenges signal a pressing need for AI strategies that are not just innovative, but operationally grounded. Speed without structure creates risk. Governed AI is the only scalable path forward.

The hidden risk: GPT outputs may look fluent but fail validation.

Public GPTs are trained on vast open-web content, including wikis, forums and casual tutorials. As a result, they often ignore schema constraints, omit or misapply required metadata and revert to the loosely structured patterns of their training data.

These issues can result in failed validation, broken localisation and publishing workflows, costly rework and compliance gaps that surface only at audit.

Why public AI tools can’t follow your specific enterprise rules.

“The problem isn’t that public GPTs can’t write; they can. The problem is that they don’t know how to write within your system,” says Nathan Gilmour, CTO of Writemore AI. “These models weren’t trained to follow your schema, your metadata rules or your compliance boundaries. That means they’ll often produce content that looks right but breaks when it matters: at validation, in translation, at publish time or during audit.”

Gilmour’s point underscores a deeper operational risk — public AI models are optimised for linguistic fluency, not structural fidelity. In structured environments like Adobe Experience Manager Guides, where rule adherence is foundational, organisations need AI systems that validate against internal schemas in real time and operate within governed workflows — rather than generic tools trained on open-web data.
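To make that concrete, here is a minimal sketch of what such a real-time schema gate can look like, written in Python with lxml. The content model, element names and function are simplified illustrations invented for this post; they are not part of Adobe Experience Manager Guides or any other product.

from lxml import etree

# Simplified, invented internal content model: a task topic must open with a
# <metadata> element carrying a product id, followed by exactly one <steps>
# element. Real organisations would load their own schema here.
INTERNAL_XSD = """\
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="task">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="metadata">
          <xs:complexType>
            <xs:attribute name="product" use="required"/>
          </xs:complexType>
        </xs:element>
        <xs:element name="steps" type="xs:string"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
"""

def validate_ai_output(xml_text: str) -> list[str]:
    """Return a list of schema violations; an empty list means the content is valid."""
    schema = etree.XMLSchema(etree.fromstring(INTERNAL_XSD.encode()))
    try:
        doc = etree.fromstring(xml_text.encode())
    except etree.XMLSyntaxError as err:
        return [f"Not well-formed XML: {err}"]
    if schema.validate(doc):
        return []
    return [f"Line {e.line}: {e.message}" for e in schema.error_log]

# A fluent-looking draft that silently drops the mandatory metadata block
# fails the gate before it ever reaches the publishing pipeline.
print(validate_ai_output("<task><steps>1. Open the app.</steps></task>"))

The specific rules matter less than the pattern: generated content is checked against the organisation’s own schema the moment it is produced, rather than being trusted on fluency alone.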

Case example: Why rule fidelity matters and what happens without it.

In a controlled scenario, CCMS Kickstart prompted a GPT model to generate content under strict documentation constraints. A validating schema was supplied, and the model was explicitly instructed to follow the expected structure, element order and metadata rules.

Despite those clear inputs, the model reverted to statistically familiar patterns drawn from public training data — patterns that looked polished but failed schema validation. If published, this content would have introduced compliance risks, broken publishing workflows and created a costly rework cycle. Fortunately, CCMS Kickstart’s team caught the issue before deployment through structured testing and schema-based automation review.

This underscores a core reality: GPT tools optimise for fluency, not fidelity. That is why CCMS Kickstart works with enterprise teams to design, test and operationalise AI models within governed content systems, where content can be reworked securely to corporate standards. Smart businesses ensure accuracy, compliance and scalability from day one.
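What that testing-and-review loop can look like in code is sketched below. It reuses the hypothetical validate_ai_output gate from the earlier sketch; generate_topic stands in for whatever model call an organisation actually uses and is not a real API.

MAX_ATTEMPTS = 3

def generate_validated_topic(prompt: str) -> str:
    """Only return a draft that has passed the schema gate."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        draft = generate_topic(prompt)       # hypothetical model call
        errors = validate_ai_output(draft)   # schema gate from the sketch above
        if not errors:
            return draft                     # structurally valid; safe to publish
        # Feed the concrete violations back so the next attempt targets the
        # organisation's rules instead of open-web habits.
        prompt += "\nThe draft failed schema validation. Fix these errors:\n"
        prompt += "\n".join(errors)
    raise RuntimeError(
        f"Draft still invalid after {MAX_ATTEMPTS} attempts; route to human review."
    )

Whether an invalid draft is retried, escalated to a human or both is a policy choice; the point is that structural failures are caught at generation time rather than at audit.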

Real-world examples with measurable consequences.

Public GPT failures are not just theoretical. They have already resulted in legal, operational and reputational consequences for major organisations. Unvalidated and unconstrained AI outputs can lead to measurable business impact, especially when structure and oversight are absent.

These failures reflect the same principle: organisations are accountable for their outputs, regardless of whether they were AI-generated.

Governments are formalising expectations.

Regulatory bodies around the world are moving quickly to define how AI must be used, tracked and validated — especially in high-stakes industries. These initiatives emphasise that content created by AI is not exempt from oversight. Whether the output is for marketing, legal or technical documentation, organisations remain accountable for what is produced.

AI content must do more than appear correct. It must be provably compliant.
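One way to read “provably compliant” is that every generated artefact carries evidence of what was checked, when and against which rules. The sketch below shows one possible shape for such an audit record; the field names and schema version are illustrative assumptions, not requirements drawn from any regulation.

import hashlib
import json
from datetime import datetime, timezone

def audit_record(content: str, schema_version: str, errors: list[str]) -> str:
    """Build a JSON audit entry recording what was checked and with what result."""
    return json.dumps({
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
        "schema_version": schema_version,   # hypothetical version label
        "validated_at": datetime.now(timezone.utc).isoformat(),
        "passed": not errors,
        "violations": errors,
    }, indent=2)

# Stored alongside the published content, a record like this lets an auditor
# verify that a specific artefact passed a specific rule set at a specific time.
print(audit_record("<task>...</task>", "content-model-2.4", []))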

What enterprises need in an AI-enabled content environment.

To meet governance, brand and operational standards, enterprise AI content systems should offer schema-aware generation, real-time validation against internal content models, enforced metadata rules, governed workflows and full traceability for audit.

Structured models must reflect internal business rules.

While some standardised content models provide a strong foundation, they do not enforce the internal business rules every organisation depends on: required metadata values, element order and the constraints that govern how content is published and translated.

To maintain quality and consistency, organisations must define a structure that reflects how they publish, translate and govern content across teams, channels and markets.
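As a rough illustration of rules that a generic schema alone cannot express, the sketch below layers two invented checks (an approved product list and a title-length budget for translation) on top of the schema validation shown earlier. Both rules are assumptions made up for this example, not taken from any real content model.

from lxml import etree

APPROVED_PRODUCTS = {"acme-cloud", "acme-desktop"}  # hypothetical product ids
MAX_TITLE_CHARS = 60  # headroom for text expansion during translation

def check_business_rules(xml_text: str) -> list[str]:
    """Apply rules a generic schema cannot express; return any violations."""
    doc = etree.fromstring(xml_text.encode())
    problems = []
    meta = doc.find("metadata")
    if meta is None or meta.get("product") not in APPROVED_PRODUCTS:
        problems.append("Missing or unapproved product id in metadata")
    for title in doc.iter("title"):
        if len(title.text or "") > MAX_TITLE_CHARS:
            problems.append(
                f"Title over {MAX_TITLE_CHARS} characters may break localised layouts"
            )
    return problems

# Schema-valid content can still violate business rules: the product id below
# is not on the approved list, so the check flags it.
print(check_business_rules(
    '<task><metadata product="acme-mobile"/><steps>...</steps></task>'
))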

As Sarah O’Keefe, Founder and CEO of Scriptorium, notes, “Generative AI is promoted as a way to increase content velocity by automating content creation. Ironically, though, the AI engines will perform best if they are fed accurate, highly structured, semantically rich information.” Many organisations are now pursuing private AI models trained on curated internal content for greater control, better results and fewer risks.

Conclusion: Enterprise AI should reinforce, not replace, structure.

Public GPT tools provide speed and linguistic fluency. But without rule enforcement, they risk producing outputs that are structurally invalid and operationally unsound.

Enterprises that use tools like Adobe Experience Manager Guides or comparable structured authoring environments can accelerate content velocity — provided the AI operates within a governed, schema-aligned framework.

Fluency matters. But alignment with organisational rules, validation and traceability matter more. The enterprises that avoid the pitfalls Gartner warns about — poor data, unchecked risk, rising costs — will be the ones that build governance into their AI from the start. Structured content environments like Adobe Experience Manager Guides can help make that a reality.

Want to learn more about the AI initiatives in Adobe Experience Manager Guides? Drop us a note at techcomm@adobe.com.

Saibal Bhattacharjee is the director of product marketing for the Digital Advertising, Learning & Publishing Business Unit at Adobe. Saibal has been with Adobe for 15 years and is currently in charge of global GTM and business strategy for a diverse product portfolio, ranging from a market-leading cloud-native component content management system (Adobe Experience Manager Guides) and advertising and subscription monetisation products for connected multiscreen TV platforms (Adobe Pass) to content authoring and publishing desktop apps (Adobe FrameMaker, Adobe RoboHelp).

With more than 21 years of experience in the technology sector, Saibal is a high-impact marketing, strategy and product executive with a passion for tackling the most complex challenges in enterprise software and turning solutions into scalable works of enterprise-grade art. He has successfully built, mentored and managed global GTM teams spanning India, the US, the UK, Germany and Japan for more than a decade. Saibal holds a B.E. degree from Jadavpur University, Kolkata, and an M.B.A. degree from the Faculty of Management Studies, University of Delhi.
