The Wisdom Gap: The Best AI Experiences Deliberately Slow You Down
When we design AI interactions that feel seamless and frictionless, we're actually designing out the most valuable part of the partnership: human wisdom.
The most successful AI system you've probably never heard of sits quietly in hospitals across the world, saving lives by doing something our industry would consider heretical: it deliberately slows doctors down.
Epic Systems' sepsis detection model doesn't automatically trigger interventions when risk scores rise. Instead, it alerts nurses, who must then call physicians to discuss what the AI has observed. This "inefficient" process - this moment of human pause - has contributed to measurable reductions in mortality rates whilst maintaining the kind of clinical accountability that comes only from human judgment.
Sepsis AI Detection Model
The AI can spot patterns in vital signs and lab results that human eyes might miss. But the physician understands that this particular patient mentioned their daughter's wedding next week, that they have a history of presenting symptoms differently than textbooks suggest, or that they've been unusually anxious about housing insecurity.
This wisdom - accumulated through years of watching how human beings actually behave, suffer, and heal - cannot be captured in any dataset.
What Epic understood, and what most of our industry is still missing, is that we're not designing for efficiency anymore. We're designing for partnership between two fundamentally different forms of intelligence.
And genuine partnership requires something our field has been trained to eliminate: moments where human wisdom can influence the outcome.
The Collaboration Revolution
For decades, we've designed interfaces assuming a simple relationship: one intelligent system (the human) using a tool. Users had clear intentions, interfaces provided predictable paths, and systems delivered predetermined responses. We optimised for speed, clarity, and seamless task completion.
But AI changes everything. We're now orchestrating collaboration between two intelligent systems - human and artificial - each with different strengths, different ways of reasoning, and different forms of understanding. Each learning and adapting through every interaction.
Two intelligent systems collaborating
The human brings irreplaceable gifts: contextual wisdom, emotional intelligence, ethical reasoning, and the deep pattern recognition that comes from lived experience. The AI brings computational power, vast memory, and the ability to spot patterns across enormous datasets without fatigue.
When we design these interactions to feel "seamless," we're actually designing out the moments where these different forms of intelligence can inform each other. We're optimising for efficiency whilst eliminating wisdom.
The Trust Paradox
The data reveals our current approach isn't working. The latest research from KPMG and the University of Melbourne, surveying over 48,000 people across 47 countries, shows that only 46% globally trust AI systems. In Australia, we rank among the lowest for AI trust - only 30% believe the benefits outweigh the risks.
Here's the paradox: 66% of those people are using AI regularly, but as familiarity increases, trust decreases. The more people interact with AI, the more sceptical they become. This isn't a technology problem - it's a design problem.
We've been designing AI interactions using the old playbook: make it fast, make it seamless, hide the complexity. But users don't need seamlessness - they need transparency. They don't need speed - they need understanding. They don't need to be automated - they need to be partnered with.
Reframing Friction as Invitation
The most sophisticated AI experiences aren't eliminating friction - they're designing it strategically. These aren't obstacles to efficiency; they're invitations to wisdom.
Consider Apple Intelligence, which requires users to actively enable AI features through settings rather than having them activated by default. This initial friction establishes something crucial: agency. When someone consciously chooses to enable a feature, they're not just accepting it - they're entering into a partnership.
Apple's setup flow then provides granular controls and transparency dashboards where users can view detailed logs of any processing requests. The genius isn't in the control mechanisms themselves; it's in how the friction feels empowering rather than obstructive. Users aren't being asked to accept terms; they're being invited to configure a collaboration.
Apple Intelligence Setup Flow (image thanks to 9to5mac)
Four Invitations to Wisdom
The most effective AI systems create four distinct types of invitation - moments where human understanding can shape outcomes:
1. Invitations to Expertise
These are pause points that specifically request human knowledge to improve AI decisions. Rather than AI making recommendations in isolation, the system explicitly asks for human insight.
"I'm 92% confident about this financial projection, but I'm working from historical data alone. What market factors from your experience should influence this forecast?"
The AI isn't just accepting human input; it's actively seeking the kind of contextual understanding that only comes from professional experience.
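To make this concrete, here's a minimal sketch of how such a pause point might be gated, interrupting only when the model's evidence base is thin. The types, threshold, and prompt wording are illustrative assumptions, not any particular product's API:

```typescript
// A sketch of an "invitation to expertise": when confidence is low or the
// model is working from a narrow evidence base, the system pauses and asks
// the human for the context it lacks. All names and thresholds are invented.

interface Projection {
  value: number;
  confidence: number;    // 0..1, the model's self-reported confidence
  dataSources: string[]; // what the model actually drew on
}

interface ExpertiseInvitation {
  projection: Projection;
  prompt: string;        // the question put to the human expert
}

function inviteExpertise(p: Projection): ExpertiseInvitation | null {
  // Only interrupt when the model is uncertain or evidence is one-sided.
  const historicalOnly = p.dataSources.every((s) => s.startsWith("historical"));
  if (p.confidence >= 0.95 && !historicalOnly) return null; // flow smoothly

  return {
    projection: p,
    prompt:
      `I'm ${Math.round(p.confidence * 100)}% confident in this projection, ` +
      `but I'm working from ${p.dataSources.join(", ")} alone. ` +
      `What factors from your experience should influence it?`,
  };
}
```

The design choice worth noting: the invitation names what the model doesn't know, so the human understands why their input matters rather than feeling interrupted arbitrarily.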
2. Invitations to Proportional Care
Higher-consequence decisions receive increased thoughtful process, whilst routine choices flow smoothly. The system demonstrates that it understands when stakes matter.
JPMorgan Chase's commercial loan processing demonstrates this beautifully. Their AI reviews agreements in seconds - work estimated to take lawyers 360,000 hours annually - but human legal experts must still review every flagged contract clause before implementation. The system creates automatic approval pathways for routine decisions whilst ensuring human wisdom guides complex ones.
This isn't about adding bureaucracy; it's about the system showing proportional respect for decision importance.
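As a rough illustration, stakes-based routing can be as simple as a single function - though the clause shape, dollar threshold, and risk cut-off below are invented for the sketch and aren't drawn from JPMorgan's actual system:

```typescript
// A sketch of proportional care: routine clauses auto-approve, whilst
// flagged or high-stakes clauses are routed to a human reviewer.

type Route = "auto-approve" | "human-review";

interface Clause {
  id: string;
  riskScore: number;     // 0..1 from the model
  amountAtStake: number; // e.g. contract value in dollars
  flagged: boolean;      // model flagged unusual language
}

function routeClause(c: Clause): Route {
  // Scrutiny scales with consequence; cheap, low-risk,
  // unflagged clauses flow through without interruption.
  const highStakes = c.amountAtStake > 1_000_000;
  if (c.flagged || highStakes || c.riskScore > 0.3) return "human-review";
  return "auto-approve";
}
```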
3. Invitations to Accountability
These processes create space for reflection whilst building defensible decision trails. They help users think through their reasoning whilst documenting choices for future learning.
"You chose a different approach than I recommended. What factors influenced your decision?"
The question serves multiple purposes: it respects human judgment, captures institutional knowledge, and creates opportunities for the AI to learn from human wisdom about factors it might not have considered.
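One hypothetical way to implement this is to capture the rationale at the moment of divergence and store it in the decision trail. The record shape and the askRationale callback here are assumptions made for the sketch:

```typescript
// A sketch of an accountability invitation: when the human overrides the
// AI's recommendation, the system asks why and stores the answer.

interface DecisionRecord {
  decisionId: string;
  aiRecommendation: string;
  humanChoice: string;
  rationale?: string; // captured only when the human diverges
  timestamp: string;
}

function recordDecision(
  decisionId: string,
  aiRecommendation: string,
  humanChoice: string,
  askRationale: (question: string) => string
): DecisionRecord {
  const record: DecisionRecord = {
    decisionId,
    aiRecommendation,
    humanChoice,
    timestamp: new Date().toISOString(),
  };
  if (humanChoice !== aiRecommendation) {
    // The pause itself is the point: it respects human judgment
    // whilst documenting the reasoning for future learning.
    record.rationale = askRationale(
      "You chose a different approach than I recommended. " +
        "What factors influenced your decision?"
    );
  }
  return record;
}
```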
4. Invitations to Institutional Memory
Moments that help organisations learn from individual decisions, turning single interactions into systematic wisdom.
These aren't just feedback loops; they're recognition that the most valuable insights often emerge from the intersection of AI pattern recognition and human contextual understanding.
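Continuing the sketch above, institutional memory can start as something naive: counting how often the same override rationale recurs across decisions. The grouping heuristic here is deliberately simplistic - a real system would cluster semantically rather than by exact text:

```typescript
// A sketch of institutional memory: rationales that keep recurring are no
// longer one person's context - they are organisational knowledge the
// model never had. Record shape and threshold are illustrative.

interface OverrideRecord {
  rationale?: string; // why a human diverged from the AI's recommendation
}

function surfaceLessons(records: OverrideRecord[], minCount = 3): string[] {
  const counts = new Map<string, number>();
  for (const r of records) {
    if (!r.rationale) continue;
    const key = r.rationale.trim().toLowerCase();
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  // Surface rationales that recur often enough to be a pattern,
  // not a one-off judgment call.
  return [...counts.entries()]
    .filter(([, n]) => n >= minCount)
    .map(([rationale]) => rationale);
}
```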
The Temporal Dimension of Respect
Sometimes the most respectful friction is invisible. ChatGPT's typing indicators and progressive text revelation create deliberate delays that transform what could feel like mechanical exchange into something resembling thoughtful conversation.
User research indicates this temporal friction encourages more thoughtful engagement with AI responses. The gradual appearance gives users time to process information incrementally rather than being overwhelmed by walls of instant text.
But what if we pushed further? What if the thinking time weren't just simulated, but genuine - the AI actually using those moments to consider multiple approaches, to genuinely reflect? The delay wouldn't be theatre; it would be real cognitive work made visible, demonstrating the AI's own investment in getting the decision right.
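For illustration, temporal friction can be implemented as a thin pacing layer over a streamed response. The delay value is arbitrary, and this is a sketch of the pattern, not a claim about how ChatGPT actually works:

```typescript
// A sketch of deliberate temporal friction: an async generator that paces
// the display of response chunks so the reader can process them
// incrementally rather than facing a wall of instant text.

const sleep = (ms: number) =>
  new Promise((resolve) => setTimeout(resolve, ms));

async function* pacedReveal(
  chunks: Iterable<string> | AsyncIterable<string>,
  msPerChunk = 40
): AsyncGenerator<string> {
  for await (const chunk of chunks) {
    yield chunk;             // hand the chunk to the UI
    await sleep(msPerChunk); // then pause, trading speed for comprehension
  }
}
```

A UI would consume it with `for await (const piece of pacedReveal(stream)) render(piece)` - and if the thinking were genuine, the delay would come from real deliberation rather than a timer.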
Beyond Explanation: Understanding
Duolingo Max shows how explanation friction can transform automated feedback into genuine learning opportunities. After each exercise, users can request AI-powered explanations tailored to their specific mistake or success.
Duolingo Max
This addresses a fundamental challenge in AI-assisted learning: the tendency to progress without understanding. By creating moments where learners pause and engage with reasoning, Duolingo reports increased comprehension and retention.
"Here's why I suggested this approach, but I'm curious, what made you consider the alternative you chose?"
The best explanation friction doesn't just show AI reasoning; it creates dialogue between different forms of intelligence.
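A hypothetical sketch of that kind of explanation friction - the exercise shape and prompt wording below are invented for illustration, not Duolingo's actual API:

```typescript
// A sketch of explanation friction: after an answer is scored, the learner
// can request an explanation tailored to what they actually did - and the
// prompt invites dialogue rather than delivering a verdict.

interface ExerciseResult {
  prompt: string;   // the exercise as shown to the learner
  expected: string; // the reference answer
  given: string;    // what the learner actually submitted
  correct: boolean;
}

function buildExplanationRequest(r: ExerciseResult): string {
  return r.correct
    ? `Explain why "${r.given}" works for "${r.prompt}", and note any ` +
        `alternatives the learner should also recognise.`
    : `Explain why "${r.given}" doesn't work for "${r.prompt}" where ` +
        `"${r.expected}" does - then ask what made the learner choose it.`;
}
```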
Designing Invitations, Not Obstacles
The difference between meaningful and annoying friction follows clear principles:
Meaningful invitations:
Leverage uniquely human capabilities - judgment, creativity, cultural context, emotional intelligence
Arrive at natural transition points, respecting flow states
Offer clear value that users immediately understand
Feel culturally appropriate for involvement preferences
Allow calibration based on user experience and trust development (see the sketch after these lists)
Obstacle patterns to avoid:
Arbitrary confirmations that don't prevent actual problems
Requesting information the system already has
Interrupting during deep focus
One-size-fits-all friction regardless of user expertise
Safety theatre that adds process without adding protection
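As referenced in the list above, calibration is the principle that most needs machinery. Here's a speculative sketch of what it might look like - the profile fields, thresholds, and weighting are assumptions, and a real system would build its trust score far more carefully:

```typescript
// A sketch of friction calibration: novices and low-trust users receive
// more invitations; experts in routine flows receive fewer; deep focus is
// never interrupted for low-stakes choices. All values are illustrative.

interface FrictionProfile {
  expertise: "novice" | "intermediate" | "expert";
  trustScore: number;   // 0..1, built up over successful interactions
  inDeepFocus: boolean; // e.g. inferred from rapid, uninterrupted activity
}

function shouldInvite(stakes: number, user: FrictionProfile): boolean {
  // Respect flow states: never interrupt deep focus for routine choices.
  if (user.inDeepFocus && stakes < 0.5) return false;

  const threshold = { novice: 0.2, intermediate: 0.4, expert: 0.6 }[
    user.expertise
  ];
  // Trust earned over time raises the bar for interruption.
  return stakes >= threshold + user.trustScore * 0.2;
}
```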
The Implementation Challenge
For design leaders wrestling with AI adoption, this represents both an opportunity and a transformation challenge.
Your teams need new capabilities:
Understanding partnership dynamics rather than just user flows
Designing for trust development over time, not just task completion
Creating transparency mechanisms that feel empowering, not exposing
Balancing efficiency with wisdom across different decision stakes
Measuring relationship quality alongside traditional metrics
What This Means for Your Work
Start with one high-anxiety moment in your current AI experience. Where do users feel most uncertain about AI decisions?
Then ask:
What human wisdom could improve this outcome?
How can we invite that wisdom without feeling obstructive?
What would make users feel respected as partners in this decision?
How do we demonstrate proportional care based on decision stakes?
Remember: you're not starting from neutral trust. 70% of Australians don't believe AI's benefits outweigh its risks. With scepticism increasing alongside familiarity, every interaction is an opportunity to demonstrate that your AI system understands its own limitations and values human partnership.
The Future of Interface Design
What we're really discussing is a new form of honesty in interface design. For too long, we've created interfaces that pretend to know more than they do, that hide limitations behind the illusion of seamless automation.
This approach suggests something more profound:
interfaces that are genuinely transparent about their capabilities, that invite partnership rather than demanding faith. This isn't just better design - it's more ethical, respectful design. It treats users as intelligent partners rather than passive consumers.
The question isn't whether you can afford to slow down; it's whether you can afford not to build genuine partnership. Because in a world where trust in AI is declining even as usage increases, the organisations that learn to design for wisdom, not just efficiency, will build the relationships that endure.
The most revolutionary interfaces have always felt slightly uncomfortable at first. Perhaps interfaces designed for human-AI collaboration should feel deliberately unfamiliar - not to confuse, but because familiar patterns might prevent the kind of thinking these new relationships require.
What's one moment in your AI experience where you could test inviting human wisdom this week?