The uncomfortable truth about why 79% of organisations are struggling with AI governance, and why the solutions that worked for everything else won't work here
Recently, our AI expert, Mulya Van Roon, has had a series of conversations with executives that go something like this:
Executive: "We have strong governance frameworks. We've managed IT systems, data privacy, and regulatory compliance for decades. Why is AI governance so different?"
Mulya: "Because AI is fundamentally different from anything you've governed before. And the approaches that worked for traditional systems will not only fail for AI, they'll give you a false sense of security whilst your actual risk exposure grows."
Here's the reality: Only 21% of organisations have systematic AI governance in place. The other 79% are trying to apply traditional governance frameworks to AI systems, and discovering, often too late, that those frameworks were built for a different world.
This isn't a failure of effort. It's a failure of understanding.
The five reasons traditional governance fails for AI
1. You're using static frameworks for dynamic systems
Traditional governance assumes systems are static and deterministic. You validate software once, document its behaviour, and as long as the code doesn't change, the behaviour doesn't change.
AI systems shatter this assumption.
AI models learn from data. They adapt. They evolve. And even when a model itself is frozen, the world it operates in keeps moving. A model validated in January may behave differently in June because it has been retrained on new data, or because the data it now sees in production has drifted away from the data it was validated against, exposing new patterns and new edge cases.
The failure mode: Organisations validate AI systems using traditional methods, approve them, deploy them, and then discover months later that the system is making decisions it was never designed to make.
What you need instead: Continuous monitoring and validation frameworks. Not "validate and forget," but "validate and watch."
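To make "validate and watch" concrete, here is a minimal sketch of one monitoring building block: comparing the distribution of a model input in production against the snapshot used at validation time, and flagging likely drift for revalidation. The feature, threshold, and example data are illustrative assumptions, not a prescription.

```python
# Minimal drift-watch sketch: compare a production feature distribution
# against the snapshot captured when the model was validated.
# The threshold and the example data are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

P_VALUE_THRESHOLD = 0.01  # below this, the distributions likely differ


def check_feature_drift(validation_sample: np.ndarray,
                        production_sample: np.ndarray) -> dict:
    """Two-sample Kolmogorov-Smirnov test on one numeric feature."""
    result = ks_2samp(validation_sample, production_sample)
    return {
        "ks_statistic": float(result.statistic),
        "p_value": float(result.pvalue),
        "drift_suspected": bool(result.pvalue < P_VALUE_THRESHOLD),
    }


if __name__ == "__main__":
    rng = np.random.default_rng(seed=42)
    baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # "January" data
    live = rng.normal(loc=0.4, scale=1.2, size=5_000)      # "June" data
    report = check_feature_drift(baseline, live)
    if report["drift_suspected"]:
        print("Drift suspected; trigger revalidation:", report)
```

In practice you would run checks like this on every important input and output, on a schedule, and wire the "drift suspected" signal into your revalidation and incident processes.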
2. You're applying checkbox compliance to probabilistic systems
Traditional compliance is binary. Either you meet the requirement, or you don't.
AI systems are probabilistic, not deterministic.
An AI model doesn't give you the "right answer"; it gives you the most probable answer based on patterns in its training data. It might be 95% accurate. Or 87%. Or 73%. And that accuracy might vary across different subpopulations, different contexts, and different time periods.
The failure mode: Organisations apply traditional testing approaches to AI, get passing results in controlled environments, and then deploy to production, where the real-world distribution is different.
What you need instead: Risk-based governance frameworks that account for uncertainty. Acceptance criteria based on performance thresholds, not binary pass/fail. Ongoing performance monitoring that tracks accuracy, bias, and drift over time.
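As a rough sketch of what "thresholds, not pass/fail" can look like, the example below gates a release on overall accuracy, the worst accuracy gap between subgroups, and a drift score. The metric names and numbers are illustrative assumptions; real acceptance criteria have to come from your own risk assessment.

```python
# Sketch of a risk-based acceptance gate: the model is accepted only if
# agreed performance, fairness, and drift thresholds are all met.
# The metric names and threshold values are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class AcceptanceCriteria:
    min_overall_accuracy: float = 0.90
    max_subgroup_accuracy_gap: float = 0.05   # best group vs. worst group
    max_drift_score: float = 0.20             # e.g. population stability index


def evaluate_release(overall_accuracy: float,
                     subgroup_accuracies: dict[str, float],
                     drift_score: float,
                     criteria: AcceptanceCriteria) -> dict:
    gap = max(subgroup_accuracies.values()) - min(subgroup_accuracies.values())
    checks = {
        "accuracy_ok": overall_accuracy >= criteria.min_overall_accuracy,
        "fairness_ok": gap <= criteria.max_subgroup_accuracy_gap,
        "drift_ok": drift_score <= criteria.max_drift_score,
    }
    checks["accepted"] = all(checks.values())
    return checks


print(evaluate_release(
    overall_accuracy=0.93,
    subgroup_accuracies={"group_a": 0.94, "group_b": 0.88},
    drift_score=0.11,
    criteria=AcceptanceCriteria(),
))
# The 0.06 subgroup gap exceeds the 0.05 threshold, so the release is rejected
# even though overall accuracy and drift both pass.
```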
3. You're designing human oversight for systems that operate at an inhuman scale
Traditional governance assumes humans can meaningfully review decisions. An auditor can review a sample of transactions. A manager can approve high-risk actions.
AI systems make decisions at a volume and speed no human can match.
A credit scoring AI might evaluate 10,000 applications per day. A content moderation AI might review 100 million pieces of content per hour. Humans cannot review every decision, or even a meaningful fraction of them.
The failure mode: Organisations implement "human in the loop" governance for AI systems, discover that humans can only review 0.01% of decisions, and either create massive bottlenecks or settle for theatre: humans rubber-stamping AI decisions they can't actually evaluate.
What you need instead: Oversight by exception. Statistically meaningful sampling strategies. Aggregate monitoring that tracks population-level metrics. Meta-oversight that monitors the AI's decision patterns rather than individual decisions.
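Here is a minimal sketch of oversight by exception: route low-confidence and high-impact decisions to humans, and spot-check a small random sample of everything else so reviewers still see ordinary cases. The thresholds and audit rate are illustrative assumptions.

```python
# Sketch of exception-based oversight: humans review only low-confidence or
# high-impact decisions, plus a small random audit sample of the rest.
# The thresholds and audit rate are illustrative assumptions.
import random

CONFIDENCE_FLOOR = 0.80          # below this, escalate to a human reviewer
HIGH_IMPACT_THRESHOLD = 50_000   # e.g. loan amount in euros
AUDIT_SAMPLE_RATE = 0.005        # 0.5% of routine decisions get spot-checked


def route_decision(confidence: float, impact_value: float,
                   rng: random.Random) -> str:
    if confidence < CONFIDENCE_FLOOR:
        return "human_review:low_confidence"
    if impact_value >= HIGH_IMPACT_THRESHOLD:
        return "human_review:high_impact"
    if rng.random() < AUDIT_SAMPLE_RATE:
        return "human_review:random_audit"
    return "automated"


rng = random.Random(7)
examples = [(0.95, 12_000), (0.62, 8_000), (0.91, 80_000)]
for confidence, amount in examples:
    print(confidence, amount, "->", route_decision(confidence, amount, rng))
```

The point is not these particular rules, but that the routing logic itself becomes a governed, auditable artefact rather than an informal habit.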
4. You're using documentation designed for explainable systems
Traditional governance relies heavily on documentation that shows "if X, then Y, because Z."
AI systems, especially modern deep learning models, are often opaque.
You can document the training data, the model architecture, and the validation results. But you often cannot document why the model makes specific decisions. The decision logic is distributed across millions or billions of parameters.
The failure mode: Organisations create extensive documentation for AI systems that looks impressive but doesn't actually explain how the system makes decisions. When regulators ask "why did your AI deny this application?", the documentation provides no meaningful answer.
What you need instead: Different documentation approaches for AI. Focus on documenting training data characteristics, validation methodologies, performance metrics, known limitations, and monitoring approaches rather than decision logic.
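One lightweight way to do this is a structured, "model card" style record that travels with the system: what it was trained on, how it was validated, what it scores, where it breaks, and how it is watched. The fields and example values below are illustrative assumptions, loosely modelled on the model-card idea rather than any formal standard.

```python
# Sketch of a "model card" style record: documents what can be documented
# about an opaque model instead of pretending to document its decision logic.
# The field names and example values are illustrative assumptions.
model_card = {
    "system": "credit-risk-scorer",
    "version": "2024-06-v3",
    "intended_use": "Pre-screening of consumer credit applications",
    "training_data": {
        "sources": ["internal applications 2019-2023", "bureau data"],
        "known_gaps": ["thin-file applicants under-represented"],
    },
    "validation": {
        "methodology": "hold-out test set plus out-of-time validation",
        "overall_accuracy": 0.91,
        "subgroup_performance": {"group_a": 0.93, "group_b": 0.88},
    },
    "known_limitations": [
        "performance degrades for applicants with no credit history",
    ],
    "monitoring": {
        "drift_checks": "weekly distribution tests on key input features",
        "escalation": "revalidation triggered if accuracy proxy drops below 0.85",
    },
    "risk_tier": "high",
    "owner": "retail-credit-analytics",
}
```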
5. You're managing simple systems in a complex supply chain
Traditional IT governance deals with relatively simple supply chains. You have vendors who provide software. The boundaries are clear.
AI involves complex, multi-layered supply chains.
Consider a typical AI system:
- Foundation models from providers like OpenAI or Anthropic
- Vector databases from vendors
- Fine-tuning data from multiple sources
- Orchestration frameworks from open source projects
- Embedding models from different providers
- Deployment infrastructure from cloud providers
Each layer has its own risks. The foundation model provider might change the model. The data sources might introduce bias. An open source component might have security vulnerabilities.
The failure mode: Organisations govern the AI system they deploy, but don't govern the supply chain that the system depends on.
What you need instead: Supply chain governance for AI. Vendor management that includes model performance and bias testing. Data governance that tracks provenance across the entire pipeline.
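A practical first step is making the stack visible: a machine-readable inventory of every component an AI system depends on, who provides it, which version you are on, and what change triggers a re-review. The sketch below mirrors the example stack above; the structure is an illustrative assumption in the spirit of an "AI bill of materials", not a formal standard.

```python
# Sketch of an AI supply-chain inventory ("AI bill of materials" style):
# every external dependency is recorded with its provider, version pin,
# and the condition that triggers governance re-review. Fields are assumptions.
from dataclasses import dataclass, field


@dataclass
class SupplyChainComponent:
    name: str
    layer: str                 # foundation_model, embedding_model, data, framework
    provider: str
    version: str
    review_trigger: str        # what change forces a governance re-review
    known_risks: list[str] = field(default_factory=list)


ai_bom = [
    SupplyChainComponent("foundation model", "foundation_model", "external provider",
                         "pinned model version", "any provider-side model update",
                         ["behaviour can change without any code change"]),
    SupplyChainComponent("embedding model", "embedding_model", "third-party vendor",
                         "v2", "model or tokeniser version change"),
    SupplyChainComponent("fine-tuning data", "data", "multiple internal sources",
                         "2024-Q2 snapshot", "new data source added",
                         ["provenance and bias not yet fully assessed"]),
    SupplyChainComponent("orchestration framework", "framework", "open source project",
                         "pinned release", "dependency upgrade or published CVE"),
]

for component in ai_bom:
    print(f"{component.layer:>17}: {component.name} "
          f"(re-review on: {component.review_trigger})")
```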
The dangerous illusion of governance theatre
Here's what makes this particularly dangerous: organisations think they have AI governance when they actually have governance theatre. They've taken their existing IT governance framework, added "AI" to the policy title, created an "AI ethics committee" that meets quarterly, and required teams to fill out an "AI impact assessment" form.
Do you recognise this pattern?
The tragedy is that governance theatre creates a false sense of security. Executives believe they're managing AI risk. Boards believe they have oversight. But the actual AI systems are deployed without meaningful governance, and the organisation discovers this only when something goes wrong.
What AI governance requires
Accept that AI governance is a new discipline
Stop trying to retrofit traditional governance frameworks. AI governance is not IT governance plus ethics. It's a distinct discipline that requires different risk frameworks, different oversight mechanisms, different documentation approaches, and different organisational structures.
Build for continuous adaptation, not static compliance
Traditional governance is about achieving a compliant state and maintaining it. AI governance is about continuous adaptation to changing systems, changing risks, and changing contexts. This requires continuous monitoring of AI system performance, drift detection, incident response capabilities, regular reassessment of risk levels, and feedback loops that improve governance based on operational experience.
Design oversight for AI scale and speed
You cannot review every AI decision. You need oversight mechanisms designed for systems that operate at scales and speeds that humans cannot match.
This requires sampling strategies that are statistically meaningful, aggregate monitoring that tracks population-level metrics, exception-based review, and escalation mechanisms for high-risk cases.
Embrace risk-based governance
Not all AI systems pose the same risk. An AI that recommends films requires different governance than an AI that approves loans or diagnoses diseases.
This requires risk classification frameworks, tiered governance processes where high-risk systems get rigorous review and low-risk systems get streamlined approval, and regular risk reassessment.
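As a deliberately simplified sketch of tiered governance, the example below derives a risk tier from a few use-case attributes and maps each tier to a different approval path. The attributes, tiers, and rules are assumptions for illustration; under regimes like the EU AI Act, classification follows the legal criteria, not a script.

```python
# Toy sketch of tiered governance: classify a use case into a risk tier
# and map each tier to a different approval path. The attributes and rules
# are simplified assumptions, not a legal classification.
from enum import Enum


class Tier(str, Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"


def classify(affects_individual_rights: bool,
             fully_automated: bool,
             reversible: bool) -> Tier:
    if affects_individual_rights and fully_automated:
        return Tier.HIGH
    if affects_individual_rights or not reversible:
        return Tier.MEDIUM
    return Tier.LOW


APPROVAL_PATH = {
    Tier.HIGH: "full review: bias audit, impact assessment, executive sign-off",
    Tier.MEDIUM: "standard review: governance lead sign-off plus monitoring plan",
    Tier.LOW: "streamlined: self-assessment logged in the AI register",
}

loan_approval = classify(affects_individual_rights=True,
                         fully_automated=True, reversible=True)
film_recommender = classify(affects_individual_rights=False,
                            fully_automated=True, reversible=True)
print(loan_approval.value, "->", APPROVAL_PATH[loan_approval])
print(film_recommender.value, "->", APPROVAL_PATH[film_recommender])
```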
Build capability, not just policy
The biggest gap in AI governance isn't policy; it's capability. Organisations have AI principles. What they lack is people who know how to apply those principles in practice.
This requires role-specific training, practical tools and templates, communities of practice, and embedded governance expertise in teams.
Three questions every executive should ask this week
Question 1:
"Can we list every AI system currently in production across our organisation, along with its risk level and governance status?"
If the answer is no, you have a fundamental visibility problem. You cannot govern what you cannot see.
Question 2:
"When an AI system we deployed six months ago makes a decision today, how do we know it's making that decision for the right reasons?"
If the answer is "we validated it before deployment," you have a monitoring problem. AI systems change over time.
Question 3:
"If a regulator asked us to explain why our AI made a specific decision, could we provide a meaningful answer within 24 hours?"
If the answer is no, you have an explainability and documentation problem.
If you can't answer "yes" confidently to all three questions, you're in the 79% without systematic AI governance.
Why the window is closing
There's a dangerous temptation to delay AI governance. "We're still experimenting." "We'll build governance when we scale."
This thinking is backwards. The time to build governance is before AI becomes critical, not after.
Regulatory pressure is intensifying: the EU AI Act is in force, with significant penalties for non-compliance. AI adoption is scaling faster than governance. Incidents are becoming more visible and more costly. And organisations with mature AI governance are moving faster, not slower, because they've built the capability to deploy AI at scale with confidence.
The organisations that build AI governance proactively will have a significant advantage. The organisations that wait will be building governance reactively, under pressure, often after an incident.
The choice: lead or follow
AI governance is no longer optional. The question isn't whether you'll build it, but whether you'll build it proactively or reactively.
Organisations that lead are building governance now, before they're forced to. They're treating it as a strategic capability that enables innovation whilst managing risk.
Organisations that follow are waiting for clearer regulations, for competitive pressure, for an incident that forces action. They're accumulating risk whilst telling themselves they're moving fast.
The gap between leaders and followers is widening. Organisations that build AI governance proactively will have a 2-3-year advantage over those that wait.
We want to hear from you:
- Where is your organisation on the AI governance maturity journey?
- What's working in your organisation that others could learn from?
The future of AI depends on getting governance right. Not governance theatre. Not checkbox compliance. But real, practical, effective governance that enables innovation whilst managing risk responsibly. We're learning fast, and the practitioners who are implementing governance in real organisations are developing insights that can help others avoid costly mistakes.
Let's build that future together.
Further reading:
- Oliver Patel, AIGP, CIPP/E, MSc - Enterprise AI Governance: Practical and actionable insights to scale AI governance (oliverpatel.substack.com)
- James Kavanagh - Doing AI Governance: Proven, practical guidance to learn and implement high-integrity AI Governance (blog.aicareer.pro)
