Building AI Governance: Co-Design Workshop Design (Part 2)
In this issue
Workshop Prep
Workshop Approach
G’day! Apologies for sending this out late this week, but let's dive in, ay! When the customer insights team at Financial Services Corp wanted to implement sentiment analysis on customer calls, they didn't reach for a dusty governance policy. Instead, they pulled up a simple decision map that showed exactly what ethical considerations to address at each stage of development—a map they helped create just weeks earlier. This isn't a fantasy—it's what happens when you build AI governance around real human workflows instead of abstract policies.
In Part 1, we mapped your AI ecosystem. Now, in Part 2, we're turning those insights into practical tools through co-design workshops. Those stakeholder maps tell you exactly who needs to be in the room. Those pain points from your interviews become the scenarios your teams will solve for. Those existing materials provide the starting points for new, more usable governance tools.
The next 60 days are about transformation—turning abstract principles into practical support that teams actually want to use. Let's dive into exactly how that happens.
Setting the Stage: The Critical Pre-Workshop Phase
Two weeks before your workshop, reach out to each participant with a clear message: "This isn't just another governance exercise—it's about creating tools that will make your work with AI both easier and more responsible."
For the workshop itself, each participant needs context on why they specifically were chosen, and a clear understanding of the outcomes—practical tools they'll help create and eventually use.
The anonymous pre-workshop survey creates immediate psychological safety. Every person brings a different ethical code, shaped by a lifetime of experiences: upbringing, cultural practices, schooling and more all produce differences alongside significant overlap. Everyone sees that their colleagues hold different perspectives as well as similar questions and concerns.
The Co-Design Workshop: Where Governance Gets Real
The moment I knew our AI governance approach was working? When a developer was overheard saying, "Finally, something that actually helps us make decisions instead of just telling us what we can't do."
That's the power of well-structured co-design—it transforms abstract policies into practical tools by having the right people create solutions they'll actually use.
Selecting and Mapping Real-World Scenarios
The success of your co-design workshop hinges on having relevant, detailed scenarios that participants can meaningfully engage with. Here's how to develop scenarios that drive productive collaboration:
Finding the Right Scenarios
1. Mine Your Near-Term Roadmap: Review your product or AI roadmap for the next 3-6 months. Look for features where AI plays a significant role in data analysis, decision-making, or customer interaction. Even if the technical details aren't fully specified, the business purpose and general approach should be clear enough to build a scenario around.
2. Adapt Existing Systems: Already have AI systems in production? Create a "digital twin" scenario by adapting an existing system to a new context. This combines the advantage of technical accuracy with the fresh perspective of applying it to a different domain.
3. Start with Business Problems: If your AI roadmap isn't detailed enough, start with business problems teams are trying to solve. Frame them as "How might we..." questions, then sketch hypothetical AI solutions based on common patterns. For instance: "How might we reduce customer churn through early intervention?" could lead to a predictive analytics scenario.
Pre-Mapping Approaches
Once you've selected your scenarios, prepare pre-mapped technical flows before the workshop. This typically requires:
A Discovery Session (60-90 minutes) with 1-2 technical team members who understand the system or problem space
Documentation Review of any existing specifications, diagrams, or vendor materials
Simplification to focus on the most important data flows, processing steps, and decision points
Your pre-mapped scenario should include:
Data Inputs: What information enters the system and from where
Processing Steps: How the AI transforms or analyses the data
Decision Points: Where algorithmic decisions or predictions happen
Outputs: What results the system produces and who uses them
For scenarios based on future systems, don't worry about getting every technical detail perfect. The goal is to create a realistic enough foundation that participants can engage meaningfully with the governance challenges.
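A pre-mapped scenario can be captured in a lightweight structure before the workshop so facilitators can spot gaps early. Here is a minimal sketch in Python; all field names and example values are illustrative assumptions, not a prescribed schema:

```python
# Minimal sketch of a pre-mapped AI scenario. Field names and values
# are hypothetical examples, not a required format.
scenario = {
    "name": "Customer churn early-intervention",
    "data_inputs": [
        {"source": "CRM", "data": "account history"},
        {"source": "support tickets", "data": "contact frequency"},
    ],
    "processing_steps": [
        "aggregate 90-day activity features",
        "score churn risk with a predictive model",
    ],
    "decision_points": [
        "flag accounts above a risk threshold",
    ],
    "outputs": [
        {"result": "ranked at-risk list", "used_by": "retention team"},
    ],
}

# Quick completeness check before the workshop: every section of the
# map (inputs, processing, decision points, outputs) should be filled.
required = ["data_inputs", "processing_steps", "decision_points", "outputs"]
missing = [key for key in required if not scenario.get(key)]
print("Map complete" if not missing else f"Missing sections: {missing}")
```

Keeping the map this simple is deliberate: it is a conversation starter, not a system specification.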
If you're struggling to develop detailed scenarios, I can create a scenario generation tool and share it in the Part 3 article.
Would you like a tool to create scenarios and system maps?
Remember, the pre-mapping isn't meant to be definitive—it's a starting point that saves valuable workshop time and shifts the focus to responsible implementation rather than technical debates.
Navigating Real-Time Workshop Challenges
During a recent workshop with a financial services firm, we hit a roadblock that nearly derailed our progress. The legal team insisted on comprehensive documentation for every AI decision, while the product team pushed back, arguing this would kill their agile development process.
"This is exactly why governance never works," muttered one developer, arms crossed.
Rather than letting this tension fester, we pivoted on the spot. "Let's map this disagreement directly onto our system flow," I suggested. We added a new layer to our map labeled "Documentation Requirements," with three categories:
"Must Document" (high-risk decision points requiring full records)
"Should Summarise" (medium-risk areas needing brief documentation)
"Can Skip" (low-risk elements requiring no special records)
The teams negotiated each decision point on the map, and within fifteen minutes, they'd created a graduated documentation approach. But we didn't stop there.
"What if we built a documentation assistant that works with your existing tools?" I asked. Together, the group quickly sketched a concept for an AI tool that would integrate with their Google Enterprise suite. Product managers or developers could input key decisions in short form, answer a few targeted questions, and the tool would automatically generate comprehensive documentation that met legal standards.
"I can actually work with this," the developer acknowledged, uncrossing his arms. The legal counsel nodded in agreement.
What had started as a workshop-threatening impasse became one of the company's most widely adopted governance tools.
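The graduated approach the teams negotiated maps each decision point's risk level to a documentation tier. A rough sketch of that mapping, with hypothetical decision points and risk labels:

```python
# Sketch of the graduated documentation approach: each decision point
# on the system map gets a risk level, which determines its
# documentation tier. Decision points and risk labels are hypothetical.
DOC_TIERS = {
    "high": "Must Document",       # full records required
    "medium": "Should Summarise",  # brief documentation
    "low": "Can Skip",             # no special records
}

decision_points = [
    {"name": "credit-limit adjustment", "risk": "high"},
    {"name": "sentiment tagging", "risk": "medium"},
    {"name": "UI copy A/B choice", "risk": "low"},
]

# Print the negotiated documentation requirement for each point.
for point in decision_points:
    tier = DOC_TIERS[point["risk"]]
    print(f"{point['name']}: {tier}")
```

The value of the exercise is that risk levels are negotiated on the map by legal and product together, rather than decreed in a policy document.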
Layer by Layer: Building Practical Governance
Instead of tackling every aspect of governance at once, your workshop builds understanding methodically. This structured approach ensures teams don't get lost in complexity or endless discussions.
Layer 1: Understanding the AI System (25 minutes)
Facilitator introduction: "This first map shows how data flows through your AI system. Let's spend a few minutes understanding this foundation before we layer on human interactions and ethical considerations."
Each group receives a pre-mapped technical flow diagram of their AI scenario showing:
Data inputs (what information enters the system)
Processing steps (how the AI transforms and analyses the data)
Decision points (where the system makes judgments or predictions)
Outputs and actions (what results the system produces)
Teams spend 15-20 minutes reviewing the map, asking clarifying questions, and making minor refinements if needed. By providing this technical foundation upfront, you've already saved 60-90 minutes of workshop time that would typically be spent debating system functionality.
Key output: A validated understanding of how the AI system actually works, creating a shared foundation for the ethical work ahead.
Layer 2: Mapping Human Touchpoints (30 minutes)
Facilitator introduction: "Now let's identify where humans interact with this system. This is critical because governance isn't just about algorithms—it's about how people use, oversee, and are affected by AI."
Using the technical flow as a foundation, teams map:
Who: Which stakeholders interact with the system at each stage (developers, users, managers, subjects of the AI)
How: What decisions or actions humans take at each point
Handoffs: Where responsibility transfers between teams or individuals
Oversight: Where human judgment overrides or validates AI decisions
This layer transforms a technical system into a socio-technical one, revealing the human dynamics that governance must address.
Key output: A clear map of human touchpoints showing where governance tools and guidelines will have the most impact.
Layer 3: Creating Ethical Swimlanes (90 minutes)
Facilitator introduction: "This is where governance becomes actionable. We'll create specific commitments and boundaries that translate abstract principles into practical guidance. For each aspect, we'll craft clear 'We will' statements that define positive actions we commit to taking, and 'We won't' boundaries that establish firm ethical guardrails for our work."
For each major stage in the AI system flow, teams work through four ethical dimensions using a structured template:
Fairness & Non-discrimination
Identify potential bias risks at this stage
Create 1-2 "We will" commitments to ensure fairness
Define at least 1 "We won't" boundary to prevent discrimination
Transparency & Explainability
Determine what needs to be transparent at this stage and to whom
Create 1-2 "We will" commitments for meaningful transparency
Define 1 "We won't" boundary regarding opacity or complexity
Privacy & Data Protection
Identify sensitive data or privacy concerns at this stage
Create 1-2 "We will" commitments to protect privacy
Define 1 "We won't" boundary regarding data usage or retention
Safety & Reliability
Identify potential risks or failure modes at this stage
Create 1-2 "We will" commitments for safe, reliable operation
Define 1 "We won't" boundary regarding unacceptable risks
For example, applying this to a sentiment analysis feature:
Fairness & Non-discrimination
"We will verify our sentiment analysis works equitably across different English dialects and accents"
"We won't deploy without testing across diverse demographic groups"
Transparency & Explainability
"We will provide customer service agents with clear explanations of how sentiment scores are calculated"
"We won't implement complex sentiment models without explainability mechanisms"
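One swimlane entry from the sentiment-analysis example can be recorded in a simple structure so each commitment stays paired with the stage it covers. The shape below is an illustrative assumption, not a required format:

```python
# One ethical swimlane entry from the sentiment-analysis example.
# The structure is a hypothetical sketch for capturing workshop output.
swimlane_entry = {
    "stage": "sentiment scoring",
    "dimension": "Fairness & Non-discrimination",
    "we_will": [
        "verify our sentiment analysis works equitably across "
        "different English dialects and accents",
    ],
    "we_wont": [
        "deploy without testing across diverse demographic groups",
    ],
}

# The template asks for 1-2 "We will" commitments and at least one
# "We won't" boundary per dimension; a simple check enforces that.
assert 1 <= len(swimlane_entry["we_will"]) <= 2
assert len(swimlane_entry["we_wont"]) >= 1
print(f"{swimlane_entry['stage']}: {swimlane_entry['dimension']} complete")
```

Recording entries this way makes the next step easier: each commitment can be linked directly to the support tool that operationalises it.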
Creating Useful Decision-Making Support Tools (40 minutes)
After creating these commitments, teams identify what tool would make each one practical to implement:
A testing checklist for diverse accent verification
A one-page explanation guide for agents
A dashboard showing sentiment analysis distribution across demographic groups
Key output: A comprehensive set of specific ethical commitments and boundaries, each paired with a practical tool concept.
Bringing It All Together (40 minutes)
Facilitator introduction: "Now we'll translate our commitments into action by prioritising the tools that will have the biggest impact in making governance real."
Each group:
Reviews their complete set of commitments and tool concepts
Identifies the 3 highest-priority tools based on impact and feasibility
Creates a simple one-page brief for each priority tool including:
Purpose and user needs it addresses
Key features and format
Who will use it and when
Success criteria
Key output: Clearly prioritised tool briefs that can be immediately developed after the workshop.
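A one-page brief following the four fields above might be captured like this; the tool and its details are a hypothetical example, not a recommended implementation:

```python
# Illustrative one-page tool brief using the four fields the workshop
# asks for. All content is a hypothetical example.
tool_brief = {
    "tool": "Accent verification testing checklist",
    "purpose": "Help QA verify sentiment analysis across English dialects",
    "features_and_format": "one-page checklist embedded in the test plan",
    "users_and_when": "QA engineers, before each model release",
    "success_criteria": "checklist completed for every release",
}

# Sanity check: a brief is only workshop-ready when all fields are filled.
for field in ("purpose", "features_and_format", "users_and_when",
              "success_criteria"):
    assert tool_brief[field], f"brief missing {field}"
print("Brief complete")
```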
From Workshop to Real-World Implementation
End your workshop with clear commitments. Have each group share their top three tool concepts and the specific problems these tools will solve. Before everyone leaves, establish who will synthesise the outputs, when the first tools will be ready for review, which teams will pilot them, and a date for the first feedback session.
Next week's article will dive deep into implementation strategies, common challenges you'll face, and how to maintain momentum to ensure your governance tools become an integral part of your teams' workflows. We'll also explore what success should look like at the 60-day mark and how to prepare for scaling your approach.
Today's Action Item: Schedule your workshop and start identifying the real AI scenarios your teams are working on.
Next up: Part 3 - From Co-Design to Implementation: Making Governance Stick (coming next week)