Building AI Governance That Teams Actually Use (part 1 of 3 series)



Riley Coleman

Feb 2025


In this issue

  • From Dusty Policy to Decision Support
  • A three-part framework for co-designing AI governance
  • Step 1: Planning Phase
  • Common Pitfalls
  • Downloadable Templates

Last month, I spoke to an executive about their failed AI governance framework, now gathering dust in a shared drive. Their legal expert and outside consultants had spent months crafting detailed policies.

The result? A 38-page document that, while comprehensive, is so technical that it goes unread by those who need it most.

The problem isn't that these frameworks are badly designed. It's that they're often designed for an ideal world, and we don't live in that world. We live in a world of tight deadlines...

And here's the kicker - AI is moving at a blistering pace, with new ethical challenges emerging almost weekly. What was best practice last month might be outdated today.

This rapid change means that static frameworks, no matter how well crafted, are doomed to become historical documents rather than living guides. If the people on the ground (those making daily decisions about AI) aren't involved, no one is positioned to spot emerging ethical issues as they arise.


The Human-Centred Difference

This is where human-centred design comes in. Instead of starting with policies, we start with people. Instead of asking "What should the rules be?", we ask "How can we help teams make better decisions about AI?"

Three mindset shifts have transformed how I approach AI governance:

From Policies to People

Stop thinking about governance as a set of rules to follow. Start thinking about it as a set of tools to help people make better decisions. This means understanding:

  • What decisions teams actually need to make
  • What information they need in the moment
  • How they prefer to access and use that information

From Guidelines to Conversations

The best governance framework I’ve seen wasn't a document. It was a series of structured conversations. It worked because it:

  • Created space for honest discussion about AI risks
  • Acknowledged uncertainty and complexity
  • Helped teams learn from each other's experiences

From Frameworks to Co-creation

Want people to use your governance framework? Involve them in creating it. When teams help build something, they're much more likely to use it.


A Three-Part Journey to Human-Centred AI Governance

This is the first of three articles that will help you create a governance framework that people will want to use. Here's what we'll cover:

Part 1 (First 30 Days): Building Foundations

  • Mapping the human landscape
  • Understanding current practices
  • Gathering insights and materials
  • Planning co-design workshops

Part 2 (60 Days): Co-Creating & Testing

  • Running co-design workshops
  • Building initial framework
  • Testing with pilot teams
  • Gathering real-world feedback

Part 3 (90 Days): Scaling & Embedding

  • Refining based on pilot feedback
  • Rolling out to wider organisation
  • Training governance champions
  • Establishing feedback loops

In this first article, we'll focus on the critical foundation phase - the first 30 days. This is where we do the groundwork that will set us up for successful co-creation and implementation.


Your First 30 Days:

Building Foundations


Before we dive into the specifics, let's understand what we need to achieve in these first 30 days. Your goal is to understand your organisation's AI governance landscape and build the trust needed for effective co-creation.

This means figuring out who should be involved, understanding their challenges and needs through interviews, and gathering existing materials and processes. Then we can use those insights to plan effective co-design workshops for the next phase.

Think of it as creating your blueprint for co-creation. You're not building the framework yet. First, you need to understand who should be in the room, what challenges they face, and how to bring them together for the next phase.

Here's exactly what you need to do:

1. Map Your Stakeholders

First, identify who needs to be involved. You'll need diverse representation across departments, genders, cultural perspectives, seniority levels, and domain expertise.


Design & UX Research

Role & Value:

  • Champion user needs and experiences
  • Ensure AI interactions are understandable
  • Map user journeys and touchpoints

Key Perspective:

  • How will users understand and interact with AI?
  • Where might confusion or friction occur?
  • How to make AI decisions transparent?

Product Management

Role & Value:

  • Own feature development and roadmap
  • Balance business goals with ethical considerations
  • Coordinate cross-functional implementation

Key Perspective:

  • How does AI serve business and user needs?
  • What trade-offs need to be managed?
  • How to measure success ethically?

Legal & Compliance

Role & Value:

  • Ensure regulatory compliance
  • Interpret emerging AI laws
  • Advise on risk management

Key Perspective:

  • What regulations apply?
  • How to document decisions?
  • What liability risks exist?

Marketing & GTM

Role & Value:

  • Communicate AI capabilities accurately
  • Manage user expectations
  • Build trust through transparency

Key Perspective:

  • How to explain AI to users?
  • What promises can we make?
  • How to build user trust?

Engineering & Data Science

Role & Value:

  • Understand technical capabilities and limitations
  • Implement ethical considerations in code
  • Identify technical risks

Key Perspective:

  • What's technically possible?
  • Where might bias occur?
  • How to validate AI decisions?

Operations

Role & Value:

  • Implement governance in daily workflows
  • Manage documentation and processes
  • Monitor compliance

Key Perspective:

  • How to make governance practical?
  • What tools are needed?
  • How to track adherence?
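If you track your stakeholder map in a script rather than a spreadsheet, the roles above can be sketched as simple structured data. This is a hypothetical sketch, not part of the downloadable template; the names, fields, and departments are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    department: str
    role_value: list      # what they contribute
    key_questions: list   # the perspective they bring

# Illustrative entries drawn from the roles described above
stakeholders = [
    Stakeholder("A. Example", "Design & UX Research",
                ["Champion user needs", "Map user journeys"],
                ["How will users understand and interact with AI?"]),
    Stakeholder("B. Example", "Legal & Compliance",
                ["Ensure regulatory compliance"],
                ["What regulations apply?"]),
]

# Coverage check: which of the six departments still need a voice in the room?
required = {"Design & UX Research", "Product Management", "Legal & Compliance",
            "Marketing & GTM", "Engineering & Data Science", "Operations"}
covered = {s.department for s in stakeholders}
missing = sorted(required - covered)
print("Still to recruit:", missing)
```

The point of the coverage check is the same as the mapping exercise itself: making gaps visible before the workshops start, rather than discovering mid-session that no one from Operations was invited.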

2. Conduct Discovery Interviews

Spend 30 minutes with at least one key stakeholder per department. Your goal? Understand:

  • Current AI-related decisions they're making
  • Pain points in existing processes
  • What would actually help them
  • Their concerns and hopes about AI
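One way to let patterns emerge from these interviews is to tag each conversation's pain points and tally them afterwards. A minimal sketch in Python; the tags and counts below are invented for illustration:

```python
from collections import Counter

# Hypothetical tagged notes: one list of pain-point tags
# per 30-minute discovery interview.
interview_tags = [
    ["unclear-approval-process", "no-ai-policy"],
    ["unclear-approval-process", "data-access"],
    ["no-ai-policy"],
]

# Tally tags across interviews so recurring themes surface on their own
themes = Counter(tag for tags in interview_tags for tag in tags)
for tag, count in themes.most_common():
    print(f"{tag}: mentioned in {count} interview(s)")
```

Anything mentioned in several independent interviews is a strong candidate for the co-design workshop agenda in Part 2.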

3. Gather Existing Materials

  • Current policies or guidelines
  • Decision-making processes
  • Known pain points
  • Success stories
  • Failed attempts

What Success Looks Like at 30 Days

By the end of your first month, you should have:

  • A clear stakeholder map
  • Rich insights from interviews
  • Collection of existing materials
  • Identified quick wins
  • Built initial trust and buy-in

Common Pitfalls to Avoid in Your First 30 Days

1. Over-Focusing on Technical Stakeholders

  • Don't limit conversations to engineering and data science
  • Include voices from customer support, sales, and other front-line teams
  • Remember: AI governance affects everyone who touches the product

2. Rushing to Solutions

  • Avoid jumping straight to writing policies
  • Take time to understand current practices
  • Let patterns emerge naturally from interviews

3. Ignoring Informal Processes

  • Don't just look at official documentation
  • Pay attention to how teams actually work
  • Note where formal and informal processes differ

4. Keeping Discussions Too Abstract

  • Ground conversations in real examples
  • Use actual AI features your organisation is building
  • Focus on specific decisions teams need to make


5. Missing Cultural Context

  • Don't copy frameworks from other companies without understanding your culture
  • Consider your organisation's risk tolerance
  • Account for team dynamics and existing relationships

6. Forgetting to Build Trust

  • Don't treat this as just another compliance exercise
  • Invest time in relationship building
  • Show value early through active listening and understanding


Next Steps

Part 2 (arriving in 20 days) will help you turn these insights into action through co-design workshops and pilot testing. But for now, focus on listening and learning.

Today's Action Item:

  1. Download our Stakeholder Mapping Template and Interview Guide (linked below).
  2. Pick one stakeholder and schedule your first discovery interview.

Remember: The goal isn't to have all the answers by day 30. The goal is to grasp the human side of your organisation's AI governance needs.

Riley


PS. If you need assistance facilitating a co-design workshop, reach out and I'd be happy to discuss it with you.


Part 2 in Article Series

Part 3 in Article Series

Tools to help

Thought-Provoking Questions for Your Product

As you reflect on your own AI products or services, consider these questions:

  1. How would you explain your AI’s decision-making process to a regulator or a journalist?
  2. If a user asked to see what data influenced an AI decision about them, could you provide it?
  3. Have you tested if users can predict how their input will change your AI’s output?
  4. Does your UI design inadvertently hide important information about AI limitations and potential biases?
  5. How are you documenting your AI’s development for future audits or explanations?
  6. How do you review and update your AI’s transparency measures as the system evolves?

Remember, every line of code we write, every model we train, and every product we launch is an opportunity to set a new standard for transparent and trustworthy AI.

It’s not always easy, but it’s always worth it. So, let’s roll up our sleeves and get to work. The future of ethical, transparent AI is in our hands.

Brouwersgracht 1013HG Amsterdam Netherlands