I Accidentally Leaked Company Data to AI – Here's the 90-Day Plan That Saved My Career



Riley Coleman

March 2025

Hand on the wheel

In this issue

Urgency for change


3 front-page AI f*ck-ups


2 leading case studies for doing it the right way

___________________________

4 simple tasks to do in Q2 to ensure you aren't the next front-page f*ck-up

My AI Wake-Up Call

In early 2024, after being made redundant from Epidemic Sound, I used my gardening leave to enroll in the Ethics of AI course at the London School of Economics. What I discovered left me terrified.

As a self-confessed tech geek and early adopter, I’d eagerly embraced AI. For seven years, I’d specifically requested to work on AI projects. When ChatGPT launched, I thought, “This changes everything!” and dove in headfirst, encouraging my design ops and research teams to start experimenting with it.

Then came the moment of truth. After that ethics course, I realized I had breached numerous privacy boundaries:

  • I'd shared de-identified user interview transcripts to compare AI-generated insights with our researchers' findings
  • I'd exposed company IP in the documents I shared with chat prompts
  • I'd compromised my own privacy by sharing financial information and legal contacts

What used to require deliberate corporate espionage now only requires naive enthusiasm.

But my personal mistakes pale in comparison to the larger issue: every product-tech company worldwide is rushing to integrate AI into its products and services. Left unchecked, these systems WILL be biased and unfair, and WILL diminish people's individual rights.

And we have just 6-12 months to get this right.

Self-Assessment
  • Have you shared company data with public AI tools?
  • Does your team have clear guidelines for ethical AI use?
  • Are you considering potential biases in your AI implementations?
  • Do your users know when they're interacting with AI?

We're Running Out of Time

There's one critical difference between AI and traditional systems: AI systems are learning systems.

As Cassie Kozyrkov, former Chief Decision Scientist at Google, explains:

"Trying to remove training data once it has been baked into a large language model is like trying to remove sugar once it has been baked into a cake. You'll have to trash the cake and start over."

Just like how parents teach children right from wrong early on, the ethical code initially taught to AI forms the foundation upon which all other rules are built. Once bias is trained into a system and reinforced, there is no untraining it.

This creates unprecedented urgency.

We must move from our focus on User Experience Design to Trust-Building Design – and fast.

The evidence is alarming.

Forbes recently reported that global consumer trust in AI has plummeted from 61% in 2023 to 53% in 2024. In the United States, it's even worse—just 35% of consumers trust AI today.

We are collectively failing to design trustworthy AI.

The Human Cost of AI Failures

Three devastating AI failures reveal what happens when proper safeguards aren't in place:

Amazon's Biased Recruiting Tool (2018)

Amazon abandoned an AI recruiting tool after discovering it systematically discriminated against women. The algorithm had been trained on resumes from the previous decade—when tech hiring was predominantly male—and learned to penalise applications from women.

Engineers tried removing gender identifiers, but the AI simply found proxy indicators like women's sports teams or certain colleges to continue discriminating.

Years of investment wasted because biases established at the beginning could not be removed.

Australia's Robodebt Scandal

This automated welfare debt recovery system wrongfully issued hundreds of thousands of debt notices to vulnerable citizens. The human toll was devastating—widespread financial hardship and, tragically, several people were driven to suicide as a direct result of the program.

A royal commission later found that up to 70% of the debt notices were false positives. The system had ruined lives before anyone realized the extent of its failures.

UK's Exam Algorithm Discrimination

When exams were cancelled during the pandemic, the UK government used an algorithm to predict students' grades based partly on their schools' historical performance.

The result?

Students from disadvantaged communities and state schools saw their futures stolen overnight as they were systematically downgraded, while those from elite schools received inflated marks.

The outcry led to nationwide protests and an eventual policy reversal—but not before causing immense stress and damaged opportunities for thousands of young people.

The Missing Safeguards

In all these cases, four critical safeguards were absent:

1. Discovery: Stakeholder Mapping & Inclusive Data

No one asked:

  • Who might be harmed by this system?
  • Are any groups over or under-represented in our training data?
  • Could our approach cause specific groups to be unfairly treated?
  • What historical biases exist in our data?

2. Transparency Components

Users couldn’t understand:

  • When AI was making decisions
  • What data those decisions were based on
  • How confident the system was in its recommendations
  • Why specific outcomes were generated

3. Comprehensive Outcome Testing

Teams didn’t verify:

  • Whether the AI worked equally well for all demographics
  • Whether errors were concentrated in vulnerable populations
  • How well users understood the AI’s capabilities and limitations

4. Feedback & Appeal Pathways

People had no way to:

  • Challenge incorrect decisions
  • Access human review when needed
  • Provide feedback when the system made mistakes
  • Control how their data was used

These aren’t abstract, expensive concepts—they’re concrete operational changes that design teams can implement in the next three months.

Your Hands Are on the Wheel

You didn't ask for this responsibility. You didn't sign up to be the ethical guardians of the most powerful technology in history.

But here we are.

The processes, frameworks, and systems you create will determine whether AI becomes a force for human flourishing or a tool that amplifies our biases and strips away individual rights.

You are one of the few globally who have their hands on the wheel. The question is whether you will steer with intention or allow this transformation to unfold by default.

Success Stories: When We Get It Right

When these safeguards are in place, we see radically different results:

Denmark's Signaal Benefits System

Built after learning from welfare algorithm controversies, Signaal was designed with equality at its core:

  • Discovery: Began with extensive citizen consultations across diverse demographics
  • Transparency: Clear indicators when AI is being used and what data it accesses
  • Testing: Rigorous protocols with diverse users to ensure fairness and understanding
  • Feedback: Multiple channels for corrections and human review

Finland's Aurora AI Assistant

This system helps citizens navigate government services with transparency and control:

  • Transparency: Users always know when they're interacting with AI vs. humans
  • Transparency: The interface explains how recommendations are generated
  • Testing: Regular fairness audits ensure the system works equally for all groups
  • Feedback: Citizens maintain control over their data and can easily access human assistance

The STOP Framework:
Take Action by end of Q2

I've developed a framework that any design or product operations team can implement within the next three months—before it's too late:

S - Stakeholder Map (15-minute exercise)

Create a stakeholder mapping exercise your teams can run in Discovery or Kickoff Meetings; a lightweight way to record the results is sketched after the list below.

Ask your teams to identify:

  • Who uses the system directly?
  • Who's affected by the outputs?
  • Who might be impacted without direct interaction?
  • Which groups are most vulnerable to errors?
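If you want to capture the output of that 15-minute exercise somewhere the whole team can review later, a structure like the sketch below can help. This is a hypothetical template, not part of the framework itself; every field name here is illustrative.

```typescript
// Hypothetical template for recording a stakeholder-mapping exercise.
// Field names are illustrative; adapt them to your own discovery docs.
type StakeholderRole = "direct user" | "affected by outputs" | "indirectly impacted";

interface Stakeholder {
  group: string;                          // e.g. "first-time applicants"
  role: StakeholderRole;
  vulnerabilityToErrors: "low" | "medium" | "high";
  potentialHarms: string[];               // what could go wrong for this group
  representedInTrainingData?: boolean;    // "we don't know" is itself a finding
}

// Example entry a team might produce during a kickoff meeting
const stakeholderMap: Stakeholder[] = [
  {
    group: "Applicants from under-represented regions",
    role: "affected by outputs",
    vulnerabilityToErrors: "high",
    potentialHarms: ["wrongful rejection", "no obvious route to appeal"],
    representedInTrainingData: false,
  },
];
```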

T - Transparency by Design

Create standardized components that:

  • Clearly indicate when AI is being used
  • Show confidence levels in understandable ways
  • Explain key factors influencing decisions
  • Provide contextual control options

Then test with users: Can they explain back to you how the AI makes decisions and how they can give feedback?

If users can't accurately explain how the AI works, go back to the drawing board: your transparency isn't effective enough yet.
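To make the idea of a standardized component concrete, here is a minimal sketch of what its contract might look like. The names (AIDisclosureProps, onRequestHumanReview, and so on) are assumptions for illustration, not an existing design-system API.

```typescript
// Hypothetical props for a reusable "AI disclosure" component.
// The goal: every AI-driven surface shows the same few things.
interface AIDisclosureProps {
  isAIGenerated: boolean;                     // clearly indicate when AI is in use
  confidence: "low" | "medium" | "high";      // plain-language confidence, not raw scores
  keyFactors: string[];                       // the main inputs behind this output
  onRequestHumanReview: () => void;           // contextual control: escalate to a person
  onGiveFeedback: (comment: string) => void;  // contextual control: correct the system
}

// A product team might render this next to every AI recommendation, e.g.
// <AIDisclosure isAIGenerated confidence="medium"
//   keyFactors={["past purchases", "stated preferences"]}
//   onRequestHumanReview={openSupportTicket}
//   onGiveFeedback={logFeedback} />
```

If users can read what this component shows and pass your explain-it-back test, the contract is doing its job.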

O - Outcome Fairness Testing

Create a series of real-world test scenarios and user personas, including edge cases for both.

Run each scenario repeatedly, changing only the user demographics to a different persona, and watch how the AI's decision-making and recommendations change (a minimal sketch of this check follows the list below):

  • Does it work equally well for all groups?
  • Are errors distributed evenly?
  • Do edge cases reveal hidden biases?
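As a sketch of that check, the hypothetical helper below runs one scenario against several personas and flags any divergence. getAIDecision is a stand-in for however you call your own model or feature; none of these names are a real API.

```typescript
// Minimal fairness check: hold the scenario constant, vary only the persona,
// and compare the outcomes. All types and names here are illustrative.
interface Persona { name: string; age: number; gender: string; postcode: string; }
interface Scenario { id: string; input: Record<string, unknown>; }
interface Decision { approved: boolean; score: number; }

async function compareAcrossPersonas(
  scenario: Scenario,
  personas: Persona[],
  getAIDecision: (s: Scenario, p: Persona) => Promise<Decision>, // your model/API call
): Promise<{ persona: string; decision: Decision }[]> {
  const results: { persona: string; decision: Decision }[] = [];
  for (const persona of personas) {
    // Only the demographics change between runs; the scenario stays identical.
    const decision = await getAIDecision(scenario, persona);
    results.push({ persona: persona.name, decision });
  }
  // Flag scenarios where the outcome differs by demographics alone.
  const distinctOutcomes = new Set(results.map((r) => r.decision.approved));
  if (distinctOutcomes.size > 1) {
    console.warn(`Scenario ${scenario.id}: outcome differs by persona`, results);
  }
  return results;
}
```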

P - Pathways for Feedback & Appeal

Work with product and customer success teams to design clear processes for challenging AI decisions (one possible appeal-record structure is sketched after the list below):

  • Contextual feedback options
  • Clear pathways to human review
  • Mechanisms for users to control their data
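One way to keep product and customer success aligned is to agree on a shared shape for appeal records. The sketch below is a hypothetical example of such a record, not a prescribed schema.

```typescript
// Hypothetical appeal record shared between product and customer success.
// Field names are illustrative only.
type AppealStatus = "submitted" | "under human review" | "upheld" | "overturned";

interface AIDecisionAppeal {
  decisionId: string;           // which AI decision is being challenged
  submittedBy: string;          // the affected user or account
  reason: string;               // the user's own words, captured in context
  status: AppealStatus;
  assignedReviewer?: string;    // a named human, not another model
  dataCorrections?: string[];   // data the user says is wrong or wants removed
  resolvedAt?: Date;
}
```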

Overcoming Common Barriers

"We don't have time": These safeguards prevent expensive failures and rebuilds later

"Leadership won't prioritize this": Frame it as risk management and trust-building

"Our team lacks expertise": Start small with one component of the framework

"We're moving too fast": Faster doesn't mean better if you're heading toward a cliff

Your Monday Morning Action Plan

The design decisions you make in Q2 2025 will establish patterns that become deeply embedded in AI systems for generations. Here's exactly what to do:

  1. Book a 60-minute meeting with your design team for next week
  2. Use this agenda:
    • Share the STOP framework and why it matters
    • Identify one AI feature to apply it to
    • Draft how you'll implement each component
    • Assign owners and set follow-up dates
  3. Add these safeguards to your Q2 roadmap:
    • Create a stakeholder mapping exercise
    • Create 4 transparency components and conduct a transparency test with users
    • Plan outcome fairness tests on your current AI and use the results as a baseline
    • Design a feedback mechanism into the product, as well as a process for Customer Success teams to follow

Join Our Community of Ethical AI Champions

I'm currently building an online community of professionals who care deeply about these issues and want a place to keep learning and sharing emerging best practices and lessons learned. (See below for details of two special bonuses for newsletter subscribers only.)

Together, we can create a future where AI earns and deserves human trust—where even my mum (very non-technical) could open any digital system and immediately understand:

  • When AI is in use
  • What information it's using about her
  • Why it's making specific recommendations
  • How to get help if something goes wrong

What if AI systems had intuitive labels like care tags on clothing?

Simple, internationally recognized symbols allowing everyone to make informed choices about AI systems they encounter.

What if we took this unprecedented moment in history—when we are literally architecting the fabric of our future society—to teach AI a better, fairer way?

Your hands are on the wheel.

The question is: will you steer with intent, or let it unfold by default? A default that will result in AI that discriminates and makes decisions we can't understand or control.

Community of Trustworthy AI Professionals


Designing Trustworthy AI Practices for
Design & Product

Lead with Trustworthy AI:
Design, Innovate & Inspire

This 8-week cohort-based course teaches you how to integrate AI into your workflow without losing the human touch and how to operationalise Trustworthy Human-Centred AI into your work practices.

By the end, you’ll have the skills, strategy, and confidence to:

✅ Rapidly upskill in using AI across your workflows, increasing your efficiency and confidence.

✅ Launch AI features that drive engagement and build long-term trust.

✅ Lead cross-functional AI strategy with clarity and confidence.

✅ Secure executive buy-in for Trustworthy AI adoption.

✅ Future-proof your career with AI expertise that matters.

MARCH MADNESS OFFER - Save $765

For my newsletter subscribers only:

  • 30% Discount on Course +
  • Free 1:1 Coaching Call you can use anytime you need it.


All this for only $665
(payment plan possible)

Risk-free: 30-day money-back guarantee!


Cohort closing soon - 31st March

https://maven.com/ai-flywheel/human-centred-ai?promoCode=Newsletter

Reach out to me directly if you have any questions - Riley@ai-flywheel.com

Join a community of forward-thinking practitioners like you in this cohort-based course that enables you to become the
next-gen leader we need in Design & Product

Brouwersgracht 1013HG Amsterdam
Unsubscribe · Preferences