5 AI Movie Predictions That Came True in 2024
And what's coming in 2025
G’day all, I just wanted to write a quick note to say thank you for joining my newsletter, and I hope you continue to find value in the coming year. Read to the bottom for the announcement of what's coming in 2025, plus a special Christmas gift just for my newsletter readers. Enjoy your holidays, and maybe catch one of the movies mentioned below.
Hollywood’s Crystal Ball: Not So Far Off After All
Back in 2013, "Her" showed us a world where people develop deep emotional relationships with AI. In 2024, thousands of users reported distress when the Replika AI app restricted its romantic features. Not quite Joaquin Phoenix levels of drama, but enough to make psychologists worldwide raise serious concerns about AI emotional dependency.
In "Minority Report," Tom Cruise fought a system that used algorithms to arrest people before they committed crimes. Science fiction? In 2024, several US police departments used AI-based predictive policing systems, sparking debates about bias, privacy, and civil rights. The EU was so concerned that it banned some predictive policing practices in its new AI laws.
Real World, Real Consequences
When Workplace Surveillance Went Too Far
The Woolworths case in Australia became a wake-up call for workplace AI oversight. Imagine this: a $140 million AI system monitors every move of warehouse workers, tracking their speed, their efficiency, and even their bathroom breaks. It sounds like something from “1984,” but it was happening in 2024.
Workers faced algorithmic performance management that could decide their job security based on computer metrics. Their response? A 17-day strike that made international headlines. As union leader Kennedy put it, this was a victory for workers over a company “treating them like robots.”
“They have challenged one of the most significant threats to worker safety and wellbeing as we enter a new AI-surveillance era of work. And they have won,” Kennedy declared.
The victory set a precedent for worker rights in the AI era, including the removal of punitive digital surveillance and new protections against AI-based disciplinary actions.
The Hidden Dangers of Algorithmic Bias
While regulations focus on obvious harms, experts warn about subtler dangers. Dr. Timnit Gebru, of the Distributed AI Research Institute, raises a crucial concern:
“There’s a real danger of systematising the discrimination we have in society [through AI technologies].”
As AI becomes ubiquitous, Gebru argues for transparency:
“What I think we need to do — as we’re moving into this world full of invisible algorithms everywhere — is that we have to be very explicit, or have a disclaimer, about what our error rates are like.”
As AI systems make more decisions in our lives, the call for transparency has become urgent.
Predictive Policing: When Fiction Meets Reality
In the US, the debate over predictive policing grew as more police departments used AI to predict where crimes might occur. These systems, built on historical crime data and complex algorithms, raised alarming questions: Were they perpetuating existing biases? Were they leading to over-policing of certain communities?
The controversy echoed Hollywood’s warnings about automated justice systems, and it had real effects on communities across America.
Meanwhile, in China…
China took a different approach, launching its AI Security Governance Framework 1.0. Where the West focused on individual rights and privacy, China emphasised national security and social harmony.
The framework required companies with over 1 million users to undergo ethical AI testing, affecting tech giants like Alibaba and Tencent. It's a good reminder that AI governance isn't one-size-fits-all: different societies are choosing different paths.
The EU Takes the Lead
The European Union implemented the world's first comprehensive AI law in August 2024. Think of it as a "rulebook" for AI, categorising different AI systems based on their potential risk to people's rights and safety. For example:
- High-risk AI systems (those used in hiring, education, or healthcare) need strict oversight
- Some AI uses are completely banned (like social scoring systems that rate citizens' behavior)
- Companies breaking these rules face massive fines - up to €35 million or 7% of their global revenue
What's "High-Risk AI" Anyway?
When we say "high-risk AI," we're talking about AI systems that could significantly impact people's lives. Think:
- AI deciding whether you get a loan
- AI systems screening job applications
- AI in medical diagnosis
- AI in educational assessment
These systems get extra scrutiny because their decisions can seriously affect people's opportunities and rights.
The Regulators Strike Back
The European Union didn’t just talk about AI regulation – it brought out the big guns. How serious? Just ask Meta, which was fined €251 million for a data breach and another €797 million for antitrust violations.
Even X (formerly Twitter) faced fines up to 6% of its global turnover. And that was before the AI Act’s even bigger penalties – up to €35 million or 7% of global revenue. Suddenly, ethical AI wasn’t just good practice; it was a business necessity.
Big Companies Step Up
The UNESCO Business Council for Ethics of AI (think of it as a global taskforce for responsible AI) expanded in 2024. Major companies like Microsoft and Telefónica joined, and I was glad to see my old company Telstra become Australia's first participant.
It wasn't just talk. These companies committed real resources to ensuring AI respects human rights, privacy, and social values, pledging to:
- Create practical tools for assessing AI ethics
- Share best practices for responsible AI development
- Develop guidelines for AI that respects human rights
- Work on ways to make AI more transparent and accountable
Looking Ahead: 2025’s Brave New World
The coming year brings critical changes. February sees the EU’s ban on certain AI practices kick in. By August, new rules for big AI models like ChatGPT take effect. Australia’s rolling out its own AI guardrails, focusing on protecting workers and consumers.
Companies worldwide are scrambling to bring their AI systems into compliance. For businesses, especially smaller ones, this means:
- Mandatory risk assessments for AI systems.
- New documentation requirements.
- Privacy protection measures.
- Regular system audits.
- Staff training on AI ethics.
The Bottom Line
We’re not in a sci-fi movie. But, 2024 showed that many of Hollywood’s warnings about AI were valid. As Dr. Gebru suggests, the real challenge isn’t just avoiding harms. It’s ensuring our AI systems don’t silently encode and amplify society’s biases.
The good news?
Unlike in the movies, we’re not waiting for things to go wrong before acting. Between worker pushback, serious regulations with real teeth, and growing corporate responsibility, we’re seeing the first real attempts to ensure AI enhances rather than diminishes our humanity.
Just don’t expect to fall in love with your operating system anytime soon. Though given how 2024 went, maybe we should keep an eye on that too.
Thank you for joining me this year
I have been on a journey to build awareness and capabilities around Responsible AI. Here's a quick look at what I have planned for 2025.
Learn more – check it out here, with an 80% saving for my newsletter readers.
Happy festive season & see you in 2025
Riley