Beyond Data: Your AI Can't Read The Hidden Language of Human Decision-Making



Riley Coleman

November 2024


In this issue

  • Human decision-making
  • Why AI misses the full picture
  • A practical checklist: when an AI or a human should decide
  • Announcement

Last Tuesday, our AI-powered performance management system flagged an employee for a performance review. The data was clear: missed deadlines, late arrivals, decreased output. By every quantifiable measure, it was the right call.

But as I sat across from Sarah, I found myself processing a rich tapestry of information that no AI could access. I was doing what humans do naturally: time-travelling through data, emotions, and relationships all at once.

The physical signs: dark circles under her eyes, a slight tremor in her hands, her usually immaculate desk now scattered with post-it notes.

The emotional landscape: the way her voice caught when mentioning home, her apologetic body language, the marked change from her usual demeanour.

The relationship context: six years of consistently good performance, her role as an informal mentor to junior staff, her tendency to take on others' workloads without complaint.

Through gentle conversation, the full story emerged - her mother's recent dementia diagnosis. This moment crystallised something crucial about the gap between AI and human decision-making.

The Human Time Machine

When humans make decisions, we're not just processing present data - we're time travellers. Sarah's six years of excellence weren't just data. They were lived experiences that informed my view of her current situation. Every interaction, every project successfully delivered, every team member mentored.

They weren't just log entries. They were threads in a tapestry of trust and relationships.


This temporal intelligence allows us to:

  • Draw on emotional archives of past experiences
  • Read subtle present-moment signals
  • Anticipate future implications for relationships and team dynamics
  • Integrate learning from every interaction into our decision-making framework

The Two Systems at Play


Nobel laureate Daniel Kahneman describes human decision-making as operating on two levels. System 1 is fast and intuitive. System 2 is slow and analytical. Both were active as I spoke with Sarah:

System 1 instantly sensed something was "off." It noticed subtle changes in her demeanour. It processed non-verbal cues. It compared her current behaviour to past patterns.

System 2 carefully evaluated the performance metrics, recalled her track record, and considered possible underlying causes.

This dual-processing system lets humans merge data with intuition, facts with feelings, and metrics with meaning. But there's a third element at play - one that's nearly invisible yet crucial: our ethical framework.

The AI Approach: Brilliant But Blind

AI's capabilities are staggering. Modern systems can process more information in a minute than a human could in a lifetime. Think of it as "mathematical intelligence": the ability to find patterns and correlations that humans might miss. This raw processing power is genuinely revolutionary.


But here's where we face three fundamental challenges:

1. The Ethics Blindspot

Unlike data points that can be quantified and processed, ethical considerations often exist in the unspoken spaces between decisions. They live in the cultural nuances, personal values, and shared human experiences that shape our choices.


2. The Context Conundrum

AI processes information within strict parameters but struggles with the fluid, contextual understanding that humans take for granted. It also depends on the right data being collected - without creating a surveillance culture - and on a clear definition of "good performance."


3. Relationship Amnesia

AI can track interaction patterns, but it lacks an understanding of relationship currencies: trust built over time, social capital earned through support, and the unspoken agreements that form the basis of professional relationships.

The Growing Gap: The Wisdom Paradox

The gap between AI's skills and human wisdom isn't just theoretical. It's creating what I call the 'wisdom paradox.' The more we rely on AI for decisions, the less we use our own decision-making skills. It's like a muscle that weakens from disuse.

Each time we default to automated decision-making over human judgement, we're not just making a single choice - we're creating a precedent.

More concerningly, we're losing opportunities to develop and maintain our ethical decision-making skills.


The ripple effects extend beyond individual decisions:

  • Teams' interpersonal relationships becoming more fractured
  • Managers losing confidence in their intuitive judgement
  • Organisations losing their collective wisdom
  • Ethical decision-making skills atrophying from disuse

The Bridge We Need to Build

The solution isn't to abandon AI - its capabilities are too valuable. Instead, we must shift our focus from "data-driven decision making" to "data-informed and ethical decision making." This means:


Making the Invisible Visible

  • Explicitly discussing and documenting the ethical considerations in our decisions
  • Creating frameworks that help identify when human judgement should override automated recommendations
  • Training teams to articulate the ethical reasoning behind their decisions


Preserving Human Wisdom

  • Maintaining spaces for human judgement in automated processes
  • Valuing and developing emotional intelligence alongside technical skills
  • Creating systems for data-informed human decision-making

Building Collective Ethical Literacy

  • Developing a shared vocabulary for discussing ethical considerations
  • Creating communities of practice around ethical decision-making
  • Establishing feedback loops that capture both quantitative and qualitative outcomes
  • Nurturing the collective wisdom that helps organisations navigate complex human situations

The Human Wisdom Check: When to Pause Your AI Tools

Before accepting an AI-generated recommendation or output, pause and consider these reflection points:


1. The Human Impact Test

Ask yourself:

  • Will this decision significantly impact someone's life or wellbeing?
  • Could there be personal circumstances I know about that the AI doesn't?
  • Would a face-to-face conversation reveal important nuances?


🚩 If yes, bring human wisdom into the process.


2. The Context Layer

Consider:

  • Are there recent events or changes that provide crucial context?
  • Is there historical or cultural context the AI might miss?
  • Are there unwritten rules or norms that apply here?


🚩 Rich context requires human interpretation.


3. The Relationship Web

Reflect on:

  • Could this affect trust or relationships you've built?
  • Might this impact community or group dynamics?
  • Are there stakeholder relationships to consider?

🚩 Complex relationships need human understanding.


4. The Ethical Compass

Question whether:

  • Multiple ethical principles are in conflict
  • There's a gap between what's legal and what's right
  • Different cultural values might lead to different conclusions

🚩 Ethical complexity demands human wisdom.


5. The Future Lens

Think about:

  • What precedent might this set?
  • Could there be long-term implications not visible in the data?
  • Might this decision affect future choices or relationships?

🚩 Long-term impact needs human foresight.

Simple Decision Framework:

Multiple 🚩 = Human-led decision with AI as input

One 🚩 = Balanced AI-Human collaboration

No 🚩 = AI-led with human oversight
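
If you want to wire this checklist into a team workflow or triage tool, the routing rule is simple enough to sketch in a few lines of Python. This is a minimal illustration under assumed names (route_decision and the five check labels are hypothetical, not an existing system or API):

    from enum import Enum

    class DecisionMode(Enum):
        HUMAN_LED = "Human-led decision with AI as input"
        BALANCED = "Balanced AI-human collaboration"
        AI_LED = "AI-led with human oversight"

    # One flag per reflection point above; True means that check raised a red flag.
    CHECKS = ("human_impact", "context_layer", "relationship_web",
              "ethical_compass", "future_lens")

    def route_decision(flags):
        """Count red flags and map the total to a decision mode."""
        count = sum(bool(flags.get(check)) for check in CHECKS)
        if count >= 2:          # multiple flags: human leads, AI informs
            return DecisionMode.HUMAN_LED
        if count == 1:          # one flag: balanced collaboration
            return DecisionMode.BALANCED
        return DecisionMode.AI_LED  # no flags: AI leads, human oversees

    # Sarah's case: human impact, context, and relationships all flag red.
    mode = route_decision({"human_impact": True, "context_layer": True,
                           "relationship_web": True})
    print(mode.value)  # -> Human-led decision with AI as input

The value of writing the rule down isn't the code itself: it makes the human override an explicit, auditable part of the process rather than an afterthought.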


Remember: AI is a powerful tool, but human wisdom is what gives it meaning and direction. Use these reflection points not as rigid rules, but as prompts to engage your uniquely human perspective.

In the age of AI, always pause and ask yourself: "Just because we can, does it mean we should?"


The Path Forward

The AI system was right about Sarah. The metrics had declined. But it was blind to the human context that made those metrics meaningful. It couldn't time-travel through her history with the organisation. It couldn't feel the weight of her current struggles. Nor could it understand the long-term effects on team morale and trust.

As we rush to embrace AI's efficiency, we must ensure we're not building systems that are blind to these uniquely human elements.

The most sophisticated AI system in the world can't understand the full impact of a mother's dementia diagnosis on her daughter's performance. And that's precisely why we need to preserve and cultivate our human capabilities.

The time to act is now. Not just to develop AI literacy, but to develop ethical literacy alongside it. We can ensure that our AI-enhanced future remains fundamentally human-centered by making the invisible visible and by consciously exercising our ethical decision-making muscles.


Free Webinar - Next Wednesday

How to create your AI literacy flywheel

The secret to AI mastery?
It's all about the flywheel effect.

This isn't another "intro to AI" workshop. It's your blueprint for:

• Mastering AI fundamentals that matter
• Building AI into your daily workflow DNA
• Creating momentum that compounds daily
• Turning AI from a tool into your advantage

Stop starting over. Start building your flywheel.

Date

Wednesday 20th November 2024


Time (2 sessions)

Session 1: 3pm - 4pm AEST (6am - 7am CET)
Session 2: 3pm - 4pm CET / 9am - 10am EST


Location

Online - LinkedIn Live


Level

Beginners & Beyond
