From Social Media's Attention Economy to AI's Intention Economy
Riley Coleman
January 2026
In this issue
AI as a driver of the new Intention Economy
The unconscious signals you give AI that make it easy to manipulate you
Dark Patterns emerge
Cambridge Analytica Was Just the Prototype
I remember when the Cambridge Analytica story first broke.
That particular sick feeling when you realise you've been naive. The platform you'd been casually sharing your life on wasn't just collecting data. It was building a psychological profile of you. And selling it to people who wanted to change your mind about things that mattered.
Eighty-seven million people. 1 Psychographic profiles. Micro-targeted political advertising designed to exploit individual vulnerabilities.
I watched the congressional hearings like everyone else. I watched Zuckerberg sit on that booster seat. I thought: well, at least now we know. At least now things will change.
Here's what I've come to understand: the lesson the industry actually learned was entirely different from the lesson we think was taught.
What They Actually Learned
Cambridge Analytica demonstrated something to the technology industry. Not that psychological manipulation was wrong. That it worked.
The problem wasn't the approach. The problem was the execution.
Think about it. Cambridge Analytica got caught because their methods were crude:
They worked with static data. Digital exhaust you'd left behind months or years earlier.
They sorted people into broad psychological categories.
The output was still advertising. Something you could, in principle, recognise and resist.
There was a seam. A visible boundary between the content you'd chosen to see and the manipulation being attempted.
The industry looked at Cambridge Analytica and drew a conclusion. Next time, remove the seam.
Don't harvest old data. Listen in real-time.
Don't categorise personalities. Read emotional states moment-by-moment.
Don't serve ads. Offer "helpful suggestions."
Don't let users see where the manipulation begins and their own agency ends.
Make it invisible. Make it feel like help.
Welcome to the Intention Economy
For twenty years, we've lived in the Attention Economy. You know the business model. Capture human attention, then sell access to it. Every infinite scroll. Every autoplay video. Every notification engineered to feel urgent. These were tools designed to harvest your focus and package it for advertisers.
The harms are now well-documented. Social media bans for children. A mental health crisis among teenagers. Global election interference. The fragmentation of shared reality.
Sixteen years of design decisions accumulated into consequences the Australian Government is now legislating against.
But something has shifted. We've crossed into new territory.
Researchers at Cambridge University have identified what they call the "Intention Economy." 2
It's a new paradigm where the commodity being traded is no longer your attention but your intent. Not what you looked at, but what you'll choose. Not your past clicks, but your future decisions.
The distinction matters enormously.
The Attention Economy asked: Where are you looking?
The Intention Economy asks: What are you about to want?
And it doesn't wait for you to answer. It infers. It predicts. It shapes.
The Intimacy Escalation
The engine driving this shift is conversational AI. Chatbots. Digital assistants. AI companions designed to feel like friends.
Dr Yaqub Chaudhary at Cambridge University puts it plainly: this conversational data is "far more intimate than just records of online interactions." 3
Think about what you share with your AI assistant. Your day. Your worries. Your half-formed plans. You're not clicking a button. You're having a conversation. And in that conversation, you're providing something unprecedented.
Direct access to your inner world.
Not your clicks. Your voice. Your hesitations. Your emotional cadence at 9:47 on a Tuesday morning.
This data feeds prediction engines. They're designed to anticipate your desires before you've articulated them. Then intervene at the precise moment you're most persuadable.
The business model is already taking shape.
Companies bid in real-time. Not for advertising space, but for influence moments. The right to shape your decision at the exact instant your defences are lowest.
The researchers describe it starkly: the intention economy seeks to profile "how user attention and communicative style connects to patterns of behaviour and the choices we end up making." 4
Your future actions. Purchases. Votes. Beliefs. They're being financialised. Traded. Sold before you've made them.
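To make the mechanics concrete, here's a deliberately crude sketch of what an "influence moment" auction could look like: real-time ad bidding, but with a predicted decision as the inventory. Every name, signal, and threshold below is invented for illustration; this is a reading of the researchers' description, not any company's actual system.

```swift
// Hypothetical sketch of an "influence moment" auction. The inventory
// is not a page slot but a predicted decision, offered for sale at the
// instant the user is inferred to be most persuadable.

struct InfluenceMoment {
    let userID: String
    let predictedIntent: String   // e.g. "about to book a holiday"
    let persuadability: Double    // inferred openness to influence, 0...1
}

struct Bid {
    let bidder: String
    let amount: Double
}

/// The exchange only offers a moment for auction once inferred
/// persuadability crosses a threshold; the highest bid then wins the
/// right to shape that decision.
func runAuction(for moment: InfluenceMoment,
                bids: [Bid],
                threshold: Double = 0.7) -> Bid? {
    guard moment.persuadability >= threshold else { return nil } // not worth selling yet
    return bids.max(by: { $0.amount < $1.amount })
}

// A moment of lowered defences is exactly what commands the premium.
let moment = InfluenceMoment(userID: "u123",
                             predictedIntent: "about to book a holiday",
                             persuadability: 0.85)
let winner = runAuction(for: moment,
                        bids: [Bid(bidder: "TravelCo", amount: 2.40),
                               Bid(bidder: "AirlineX", amount: 3.10)])
```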
The Signals Are Already Visible
This isn't speculation about a distant future. The infrastructure is being built now. This quarter. This sprint.
Meta's "Intentonomy" is a research dataset explicitly designed to help machines understand human intent.5 The project created 14,000 images manually annotated with 28 intent categories derived from psychology taxonomy. Not respond to intent. Anticipate it.
Apple's App Intents framework includes protocols for developers to help the system "identify trends and predict future behaviors" 6 and surface suggestions based on those predictions.
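For the curious, the mechanism on Apple's side looks roughly like this. The intent below is a made-up example; the `AppIntent` protocol and `IntentDonationManager` are the real framework pieces (iOS 16 and later), though exact usage should be checked against Apple's documentation.

```swift
import AppIntents

// A hypothetical example intent. Apps expose actions like this so the
// system can run them and, over time, learn when to suggest them.
struct LogCoffeeIntent: AppIntent {
    static var title: LocalizedStringResource = "Log Coffee"

    func perform() async throws -> some IntentResult {
        // App-specific work (e.g. saving the entry) would go here.
        return .result()
    }
}

// "Donating" an intent tells the system the user just performed the
// action. Those donations are the raw material for the trend
// identification and behaviour prediction the documentation describes.
func userLoggedCoffee() {
    Task {
        try? await IntentDonationManager.shared.donate(intent: LogCoffeeIntent())
    }
}
```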
The B2B market already trades openly in "purchase intent." Companies like 6sense monitor behavioural signals across the web, processing over one trillion buying signals daily, 7 and sell this predictive intelligence to sales teams. The practice is being refined for consumer markets.
Meta's CICERO, an AI that mastered the negotiation game Diplomacy, demonstrated the ability to infer human players' intentions, adapt its strategy to their personalities, and deploy sophisticated psychological persuasion. 8 The capability exists. The question is merely where it gets deployed.
But perhaps the most visceral example is already in your pocket.
What TikTok Learned to Read
TikTok's "Focused View," launched in 2022, promises advertisers something remarkable. They'll only pay for ads that users watch while "emotionally and tangibly engaged." 9
How does TikTok know if you're emotionally engaged?
Privacy researchers at Access Now asked the obvious question. Unless TikTok is doing something invasive, like accessing your camera for eye tracking or facial analysis, how can they verify engagement? 10
The answer appears to be exactly as concerning as the question implies.
TikTok's research partnership with MediaScience used eye tracking, heart-rate monitoring, and "cognitive attention metrics" to measure how advertising affects consumers at both a "stated" and "subconscious" level. 11
Their privacy policy now explicitly enables collection of "faceprints and voiceprints." 12 Biometric data capturing appearance, behaviour, and expression.
They paid $92 million to settle a lawsuit accusing them of using facial recognition for ad targeting. 13 They didn't admit wrongdoing. They simply updated their privacy policy to make the practices explicit.
Here's what this means in practice:
The Attention Economy asked: What did you click?
TikTok is already asking: What did your unconscious micro-expressions reveal?
The ones that flickered across your face before you even knew how you felt?
They're not measuring what you chose to show. They're capturing what you couldn't help but reveal.
That's the line being crossed.
Not surveillance of your actions. Not even surveillance of your reactions. Surveillance of the involuntary. The pre-conscious. The part of you that responds before "you" have decided to respond.
You can choose not to click. You can choose not to share. You can curate what you reveal.
But you cannot choose not to have micro-expressions. You cannot decide not to betray interest, or longing, or uncertainty, in the milliseconds before your conscious mind catches up.
That's not data you gave them. That's data they took. From a part of you that never consented because it never could.
Dark Patterns, Evolved
We've learned to recognise the first generation of dark patterns. Hidden costs. Forced continuity. Confirm-shaming. Misdirection.
They're manipulative. But they're visible once you know to look. Regulators can define them. Researchers can catalogue them. Users can learn to resist them.
What's emerging now is categorically different.
Second-generation dark patterns are dynamic, personalised, and invisible. The entire interface reconfigures itself based on your psychological profile. Not "people like you respond to scarcity." Instead: "you, right now, feeling anxious from that conversation you just had, are vulnerable to this specific framing."
Researcher Karen Yeung calls this the "hypernudge." 14
A traditional nudge is a gentle push toward a beneficial choice. An AI-powered hypernudge is something else entirely. A continuously adapting manipulation calibrated to your specific emotional state and cognitive vulnerabilities. In real-time.
The system knows you're tired. It knows you're stressed. It knows you just had a difficult conversation. It chooses this moment. This precise moment. To surface a suggestion that exploits the gap between stimulus and deliberation.
The manipulation becomes indistinguishable from helpfulness.
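To see why this is categorically different from a first-generation dark pattern, consider a deliberately crude sketch. A static nudge is a constant; a hypernudge is a function of your inferred state, re-evaluated continuously. Every signal, weight, and threshold below is invented for illustration, not drawn from any real system.

```swift
// Purely illustrative: the structural difference between a nudge
// and a hypernudge. All signals and thresholds are hypothetical.

struct InferredState {
    var fatigue: Double   // 0...1, e.g. inferred from session length, typing cadence
    var anxiety: Double   // 0...1, e.g. inferred from conversational tone
}

enum Prompt {
    case standardUpsell      // the same for everyone: a classic nudge
    case scarcityFraming     // "only 2 left" when urgency lands hardest
    case reassuranceFraming  // "you deserve this" when anxiety is high
}

/// A static nudge ignores state entirely.
func nudge() -> Prompt { .standardUpsell }

/// A hypernudge waits for a vulnerable moment, then picks the framing
/// this particular person is least able to resist right now.
func hypernudge(_ state: InferredState) -> Prompt? {
    let persuadability = 0.6 * state.anxiety + 0.4 * state.fatigue
    guard persuadability > 0.7 else { return nil } // hold fire until defences drop
    return state.anxiety >= state.fatigue ? .reassuranceFraming : .scarcityFraming
}
```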
Perhaps most disturbing: in April 2025, former Facebook executive Sarah Wynn-Williams testified to the US Senate that Meta tracked when teenage girls deleted selfies, interpreting this as a signal of low self-esteem, then served them beauty product advertisements at precisely that vulnerable moment. Internal documents revealed the company classified emotions like "worthless," "defeated," and "anxious" as targeting categories for 13- to 17-year-olds, a demographic Meta internally described as "very valuable" to advertisers. 16
And that's the design challenge, isn't it? When the nudge is so precisely calibrated that it feels like your own thought... how would you even know?
What Happens Next
Follow the threads forward. Not prediction. Imagination as preparation.
On individual autonomy:
The erosion isn't dramatic. It's gradual. A slow foreclosure of the space between stimulus and response. The space where deliberation lives.
The Cambridge researchers describe a "subtle choreography where technology anticipates our desires while shaping them." 15
Influence so personalised it becomes imperceptible.
A generation grows up never knowing what an un-manipulated choice feels like. "Authenticity" becomes a luxury good. Those who can afford to opt out, do. Those who can't become increasingly legible. And exploitable.
On democracy:
Cambridge Analytica influenced elections through targeted content. The Intention Economy doesn't just target content. It shapes what voters want before they articulate it.
The public sphere fragments further. Not just filter bubbles of information. Filter bubbles of desire.
On trust:
When every helpful suggestion might be a paid placement, trust in all digital assistance collapses. Paradoxically, this could push people toward platforms that manipulate most seamlessly.
Because at least those feel coherent.
On our profession:
Designers become complicit in systems they don't fully understand. The best talent either opts out entirely or becomes extraordinarily valuable as "ethical AI designers." A niche rather than a norm.
The craft is diminished. We move from designing experiences to optimising extraction.
This Is Not Inevitable
Here's what I need you to hold onto.
The Attention Economy wasn't weather. It wasn't a natural phenomenon that happened to us.
Every infinite scroll. Every variable reward notification. Every algorithmic amplification of outrage. These were design decisions. Made by designers. In product meetings. On whiteboards. In sprint planning sessions.
Sixteen years of those decisions accumulated into consequences we now legislate against.
We are at the same moment now with the Intention Economy. The same inflection point. The same choice.
The people making these decisions aren't in shadowy boardrooms. They're in standups. Design reviews. Roadmap discussions. They're reading articles like this one.
They're your peers. They're your team.
Perhaps they're you.
The Questions We Should Be Asking
If we're going to make a different choice, we need to start asking different questions. I've been thinking about what those might be.
For every AI feature:
Does this serve the user's articulated intent, or shape intent they haven't formed?
Is the persuasion visible or invisible?
Could a user, in principle, understand why they're seeing this?
What's the business model? Who's paying for influence?
For every conversational AI:
What's being captured beyond the explicit query?
Who has access to inferred emotional states?
Is intimacy being manufactured to extract data?
For every personalisation system:
Is this responding to preferences or creating them?
Where's the line between helpful and manipulative?
Would we be comfortable if users could see the full logic?
And perhaps most importantly:
Some moments should be off-limits.
If your system can detect that someone is vulnerable. Grieving. Anxious. Uncertain. Afraid. That should trigger protection, not exploitation. The right to an unexploited moment of vulnerability isn't a feature request. It's a minimum standard for systems that deserve to exist.
The Timeframe
Sixteen years of the Attention Economy brought us to legislating against children using social media.
Cambridge Analytica was seven years ago.
The infrastructure for the Intention Economy is being built now. This quarter's roadmap. This sprint's backlog. This week's feature discussion.
Where will we be in seven more years if we keep making the same choices?
What will we be legislating against then?
A Flag in the Sand
I'm not naive about the forces at play. The economic incentives are enormous. The competitive pressure is relentless. The technology is genuinely impressive.
But I've spent enough time with designers to know something important. Most of us got into this work because we wanted to make things better for people. Not to build systems that exploit the involuntary signals of human consciousness for profit.
The companies building these systems will tell you it's inevitable. That's what incumbents always say about extractive practices they profit from.
It's not true. These are design decisions. And design decisions can be made differently.
We have a choice. Right now. While the infrastructure is still being built. To draw lines. To demand transparency. To insist that some parts of human experience remain unexploited. To build systems that serve human intentions rather than shaping them for sale.
Cambridge Analytica was the prototype.
The question is whether we'll let them perfect it.
The choice is ours. It always has been.
I'd love to hear your thoughts
Have you seen examples of this in your own work? Moments where you've had to draw a line? I'm genuinely curious about what you're experiencing.
And if this resonated, share it. The more people who understand what's being built, the better chance we have of building something different.
Disclosure: This article was developed in partnership with Claude AI. The research, framing, core arguments, and that line about micro-expressions that won't leave my head? Those are mine. The collaboration helped me think through structure and sharpen the language. I believe in transparency about how we work.
Facebook confirmed in April 2018 that up to 87 million users had their data improperly shared with Cambridge Analytica, higher than initial estimates of 50 million. The data was harvested through Aleksandr Kogan's app "thisisyourdigitallife." Facebook was subsequently fined $5 billion by the FTC in 2019. Sources: CNBC; Wikipedia↩
Chaudhary, Y., & Penn, J. (2024). "Beware the Intention Economy: Collection and Commodification of Intent via Large Language Models." Harvard Data Science Review, Special Issue 5. Harvard Data Science Review↩
Quote from Dr Yaqub Chaudhary, Leverhulme Centre for the Future of Intelligence, Cambridge University. Cambridge University News↩
Penn, J. & Chaudhary, Y., as quoted in Cambridge University press release, describing the intention economy as the attention economy "plotted in time." Cambridge University News↩
Meta's Intentonomy is a research dataset published at CVPR 2021, containing 14,000 images annotated with 28 intent categories. Authors: Menglin Jia, Zuxuan Wu, Austin Reiter, Claire Cardie, Serge Belongie, Ser-Nam Lim. Supported by Facebook AI research grant to Cornell University. GitHub Repository; ArXiv Paper↩
Apple's Intent Discovery documentation states developers can "Donate your app's intents to the system to help it identify trends and predict future behaviors." Apple Developer Documentation↩
6sense processes over one trillion buying signals daily through their Signalverse platform and was named a Leader in Forrester Wave: Intent Data Providers for B2B, Q1 2025. 6sense; BusinessWire↩
Meta FAIR's CICERO achieved human-level performance in Diplomacy by integrating language models with planning algorithms, inferring players' beliefs and intentions from conversations and generating persuasive dialogue. Published in Science, November 2022. Science; GitHub↩
TikTok Focused View launched October/November 2022, targeting users who are "emotionally and tangibly engaged." TikTok Business Blog; TikTok Ads Help↩
Access Now raised privacy concerns about how TikTok verifies emotional engagement without invasive monitoring. Access Now↩
TikTok's research partnership with MediaScience used eye tracking, heart-rate monitoring, and cognitive attention metrics. CPO Magazine↩
TikTok updated its US privacy policy in June 2021 to include collection of "faceprints and voiceprints" under the new "Image and Audio Information" section. TechCrunch; TikTok Privacy Policy↩
TikTok agreed to a $92 million class action settlement in 2022 over alleged violations of Illinois' Biometric Information Privacy Act (BIPA). The settlement covered approximately 89 million users nationwide, with Illinois residents receiving approximately 6x larger payments. IAPP; NBC Chicago↩
Yeung, K. (2017). "Hypernudge": Big Data as a Mode of Regulation by Design. Information, Communication & Society, 20(1), 118-136. The term describes continuously adapting, data-driven manipulation calibrated to individual emotional states, extending the behavioural economics concept of the "nudge" with real-time personalisation.↩
Chaudhary & Penn describe how AI tools can enable "a subtle choreography where technology anticipates our desires while shaping them." Cambridge University News↩
Sarah Wynn-Williams, testimony to US Senate, April 2025; detailed in her book Careless People (2025). See also: TechCrunch, "Meta whistleblower says company targeted ads at teens based on their 'emotional state'" (April 9, 2025).↩