Learning Hub Module 02
🏛 AI Governance & Strategy

Build governance that enables AI — not one that blocks it.

A practical framework for building AI governance from scratch — covering data ownership, risk, policy, and how to get the whole organisation moving in the same direction.

Reading time: ~18 min
Level: All levels
Module: 02 of 06
By Ana Rubio Herrera
01

Why most AI governance fails before it starts

Every organisation I've worked with has, at some point, tried to govern AI by saying no. A committee forms. A policy document gets drafted. It lists what people can't do with AI tools. And then — nothing changes, because the people who actually need to use AI simply work around the policy.

This is the fundamental failure mode: governance designed to control risk rather than enable value. It treats AI like a threat to be managed rather than a capability to be built.

The organisations that get governance right start from a different premise: the goal is not to stop people using AI badly — it's to create the conditions for people to use AI well. That means clear ownership, practical policies, and a culture where people know what good looks like.

⚠ The most common mistake

Writing a governance policy without first understanding how AI is already being used in your organisation. By the time the policy arrives, people have already formed habits — good and bad. Start with discovery, not documentation.

The second common failure is treating AI governance as a one-time project. It's not. The technology moves too fast, the use cases multiply too quickly, and the regulatory landscape is shifting. Governance needs to be a living system — reviewed, updated, and owned by someone with real authority.

Key takeaway
  • Effective governance enables AI use — it doesn't block it. Design for the best case, protect against the worst.
  • Before writing any policy, audit what's already happening. You'll find GenAI tools are already in use across your organisation, whether you know it or not.
  • Governance is a system, not a document. It needs ownership, iteration, and enforcement — not just a PDF on the intranet.
02

The four pillars of AI governance

A robust AI governance framework rests on four pillars. Miss any one of them and the whole structure becomes unstable — either too restrictive, too permissive, or simply ignored.

🗂
Data ownership & quality
Who owns which data? Who can use it, and for what? What's the quality baseline before AI touches it? Garbage in, garbage out — governance starts with the data layer.
⚖️
Risk & compliance
What can go wrong, and how bad is it? Mapping AI risk by use case — from low-stakes drafting to high-stakes automated decisions — determines how much oversight each use case needs.
📋
Policy & standards
Clear, practical rules that people can actually follow. Which tools are approved? What data can go in? Who reviews AI outputs before they're used? Written for humans, not lawyers.
🎓
Capability & culture
Governance without capability is theatre. People need to understand AI well enough to use it responsibly. Training, champions, and visible leadership behaviour matter as much as policy.

The most commonly neglected pillar is the last one. Organisations invest in policy and risk frameworks, but forget that governance only works if the people it governs understand why it exists — and have the skills to operate within it intelligently.

Key takeaway
  • All four pillars are required. A risk framework without data ownership is incomplete. Policy without capability training won't be followed.
  • Assess your organisation against each pillar before deciding where to start — the weakest pillar is your highest priority.
  • Culture and capability are the hardest to build and the easiest to underestimate. Invest there early.
03

Building your framework — step by step

Here is the sequence I've used and refined across multiple organisations. It's designed to be practical, not theoretical — each step produces something tangible that the next step builds on.

1
Audit current AI use
Before writing a single policy, find out what's already happening. Survey teams, run workshops, or simply ask: which AI tools are you using today, for what tasks, and with what data? You will be surprised. Most organisations find that GenAI adoption is already significant — just uncoordinated.
2
Define your AI use case inventory
Catalogue every AI use case in the organisation — current and planned. For each one, capture: what it does, what data it uses, who owns it, and what the consequences of failure are. This inventory becomes the backbone of your risk framework.
3
Map and tier your risks
Not all AI use cases carry the same risk. Tier them by impact and reversibility. A GenAI tool that drafts internal emails is low risk. A model that automatically approves insurance claims is high risk. Different tiers need different oversight, approval processes, and monitoring requirements.
4
Establish data governance rules
Define clearly: which data can be used with which AI tools. Classify your data (public, internal, confidential, personal) and map it to approved tools. No confidential or personal data in consumer AI tools without a Data Processing Agreement. This is non-negotiable, especially under GDPR.
5
Write your AI policy — simply
One page if possible. Two at most. Cover: approved tools, data handling rules, human review requirements, and escalation paths. If people need a lawyer to understand it, rewrite it. The goal is a document people actually read and use.
6
Assign ownership
Governance without an owner is a document, not a system. Assign a named person or function responsible for: maintaining the policy, reviewing new use cases, monitoring compliance, and staying current on regulation. In smaller organisations this might be the CDO or a senior data leader. In larger ones, consider an AI governance committee.
7
Train, embed, iterate
Launch with training — not just a policy email. Run workshops. Create champions in each team. And commit to a review cycle: AI governance needs to be revisited at least every six months as the technology and regulatory environment evolve.
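Steps 2 and 3 above (the use case inventory and risk tiering) can be sketched as a minimal data structure. This is an illustrative sketch only: the field names, the 1–3 impact scale, and the tiering thresholds are assumptions to adapt to your own organisation, not a standard.

```python
# Minimal sketch of an AI use case inventory with risk tiering.
# Field names, impact scale, and thresholds are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    description: str
    data_classes: list[str]  # e.g. ["public"], ["internal"], ["personal"]
    owner: str               # named owner, per step 2
    impact: int              # 1 (low) to 3 (high): consequence of failure
    reversible: bool         # can a bad output be caught and undone?


def risk_tier(uc: UseCase) -> str:
    """Tier a use case by impact and reversibility, per step 3.

    Personal data is treated as high risk regardless of impact score."""
    if uc.impact >= 3 or "personal" in uc.data_classes:
        return "high"
    if uc.impact == 2 or not uc.reversible:
        return "medium"
    return "low"


inventory = [
    UseCase("Email drafting", "GenAI drafts internal emails",
            ["internal"], "Comms lead", impact=1, reversible=True),
    UseCase("Claims approval", "Model auto-approves insurance claims",
            ["personal"], "Claims director", impact=3, reversible=False),
]

for uc in inventory:
    print(f"{uc.name}: {risk_tier(uc)}")
```

Even a spreadsheet with these same columns serves the purpose; the point is that every use case carries an owner, a data classification, and an explicit tier before any policy is written.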
Key takeaway
  • Start with the audit — you can't govern what you don't know exists.
  • Sequence matters: use case inventory → risk tiering → data rules → policy → ownership → training. Don't skip steps.
  • Simple, clear policy beats comprehensive, complex policy every time. Aim for one page.
04

Risk mapping — a practical guide

One of the most useful outputs of the governance process is a risk map: a clear view of which AI use cases in your organisation carry which level of risk, and what level of oversight they require. Here's a practical framework for building one.

Use case type | Risk level | Oversight required | Example
Internal drafting & communication | Low | Human review before send | GenAI drafts a project status report
Research & synthesis | Low | Fact-check key claims | Summarising a regulatory document
Customer-facing content | Medium | Approval before publication | AI-generated marketing copy
Data analysis & reporting | Medium | Validation against source data | AI-generated financial summary
Automated customer interaction | High | Full audit trail + escalation path | AI chatbot handling complaints
Automated decisions (scoring, approval) | High | Human-in-the-loop + explainability | AI model approving insurance claims
Personal data processing | High | DPA + DPIA required | AI processing employee or customer PII

The key principle: risk level determines oversight level, not a blanket policy. Treating all AI use cases as equally risky creates bureaucracy that slows down low-risk work without meaningfully protecting against high-risk failure.
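The principle can be made concrete as a simple lookup: oversight requirements hang off the tier, not off each individual use case. The tiers and requirements below mirror the table above; the mapping structure itself is an illustrative sketch, not a prescribed implementation.

```python
# Sketch: oversight calibrated to risk tier, not applied as a blanket policy.
# Tiers and requirements mirror the risk map table; structure is illustrative.
OVERSIGHT_BY_TIER = {
    "low":    ["human review before use"],
    "medium": ["approval before publication",
               "validation against source data"],
    "high":   ["human-in-the-loop", "full audit trail", "escalation path"],
}

USE_CASE_TIERS = {
    "internal drafting & communication": "low",
    "research & synthesis": "low",
    "customer-facing content": "medium",
    "data analysis & reporting": "medium",
    "automated customer interaction": "high",
    "automated decisions": "high",
    "personal data processing": "high",
}


def oversight_for(use_case_type: str) -> list[str]:
    """Look up the oversight a use case needs from its tier alone."""
    tier = USE_CASE_TIERS[use_case_type]
    return OVERSIGHT_BY_TIER[tier]


print(oversight_for("automated decisions"))
```

A blanket policy would apply the "high" list to everything; the lookup lets low-risk drafting proceed with lightweight review while concentrating audit trails and human-in-the-loop controls where failure actually hurts.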

EU AI Act — what you need to know

The EU AI Act (in force from 2024, with phased implementation) introduces mandatory requirements for high-risk AI systems — including those used in employment, credit, insurance, and essential services. If your organisation operates in the EU, you need to map your AI use cases against these categories now. Ignorance of the classification is not a defence.

Key takeaway
  • Risk is not binary — tier your use cases and calibrate oversight accordingly.
  • Automated decisions affecting people (credit, insurance, employment) are high-risk by definition and require the most rigorous governance.
  • If you operate in the EU, start mapping against EU AI Act categories now — compliance timelines are already running.
05

Real examples from the field

Governance looks different in different organisational contexts. Here's what I've seen work — and what I've seen fail.

📍 SDG Group — Insurance sector, multi-country

In a complex multi-country insurance transformation programme, the governance challenge was that different countries had different regulatory requirements — but the AI tools being deployed were the same. The solution was a tiered governance model: a core framework applied everywhere, with country-specific overlays for regulatory requirements. The key to making it work was involving country leads in the design process, not just handing down a policy from the centre. Ownership at the local level made the difference between compliance and genuine adoption.

📍 United Nations — Geneva

The UN context brought a different challenge: an organisation with extremely sensitive data, a global mandate, and complex information classification requirements. The approach here was conservative by design — GenAI tools were approved only for clearly internal, non-sensitive use cases, with explicit prohibition on inputting any classified or personal data. This wasn't the most enabling governance framework, but it was the right one for the risk context. The lesson: governance should match your organisation's risk tolerance, not a generic template.

📍 datalitiks — Startup context

At a startup, formal governance feels like overhead — and in the early days, it is. But even with a small team, two things proved essential from the start: a clear rule about what client data could and couldn't enter AI tools, and a habit of always reviewing AI outputs before they went to clients. These two habits, established early, prevented several potential incidents and became part of the company culture. Governance at startup scale doesn't need a committee — it needs two or three clear, non-negotiable rules that everyone understands.

Key takeaway
  • There is no one-size-fits-all governance framework. Design for your organisation's risk context, size, and regulatory environment.
  • Local ownership is essential in multi-country or multi-team environments — central policy without local buy-in doesn't hold.
  • Even at startup scale, two or three clear rules applied consistently beat a comprehensive policy that nobody follows.
06

Your action plan — this month

Governance is one of those areas where perfect is the enemy of good. The organisations that wait until they have a comprehensive framework in place before doing anything are the ones that end up governing a mess they didn't see coming. Start now, start simple, improve as you go.

Action plan — this month
  • Week 1 — Audit. Ask every team lead: what AI tools is your team using, for what, and with what data? Compile the answers into a simple inventory. No judgment, just information.
  • Week 2 — Tier your risks. Take your use case inventory and classify each one as low, medium, or high risk using the framework above. Identify the two or three highest-risk use cases that need immediate attention.
  • Week 3 — Draft your data rules. Write a one-page document that classifies your data types and maps them to approved AI tools. Share it with legal and your data protection officer for review.
  • Week 4 — Assign ownership and communicate. Name the person or function responsible for AI governance. Communicate the basic rules to the organisation — not as a restriction, but as an enabler: "here's how we use AI well."
  • Set a review date. Put a governance review in the calendar for six months from now. The framework will need to evolve — build the habit of reviewing it from the start.

The goal at the end of this month is not a perfect governance framework. It's a known baseline — you know what's happening, you've established the most critical rules, and someone owns it. Everything else can be built iteratively from there.

Up next
Module 03: Agentic AI & Automation