Your Digital Footprint Matters More Than Ever



Riley Coleman

August 2023

The Great AI Ethics Retreat

In this issue

What happened


What you can do



Happy New Year!

I know it seems a little late to be saying it, but this is the first newsletter of 2025.

I hope you all had an awesome festive season.

Let's kick off this year with some concerning news.


I've noticed a troubling shift in the last couple of months, and it keeps me awake at night. The AI guardrails that protect our digital lives are being dismantled, piece by piece, often so subtly that many haven't noticed. The implications are profound, and I believe we need to talk about it.


The Quiet Unravelling

Picture this: you're living in a house, and someone is coming in at night, removing the locks from your doors and windows, bit by bit. That's essentially what's happening with our digital lives right now. Let me share what I've observed:


The Domino Effect

  • OpenAI, once a stalwart of ethical AI, has opened the doors to military applications
  • Meta's content moderation teams have been hollowed out
  • X (formerly Twitter) now claims mandatory rights to use your data for AI training


These aren't isolated incidents. They're part of a coordinated retreat from ethical AI practices, driven by cost-cutting and the prospect of looser regulation. We should all be concerned.

Why This Matters Now

The timing isn't coincidental. With regulations in flux and the political landscape shifting, tech giants seem to be preparing for a world with fewer limits. It's like a choreographed dance: each company's step back gives the others cover to follow.

The Global Ripple Effect

While sitting in my favourite café in Chiang Mai last week, catching up on AI news, I realised something striking: despite the EU's strong GDPR and AI Act, tech giants keep sidestepping regional privacy laws. They treat multi-million-dollar fines as a cost of doing business, a price worth paying to work around rights rather than respect them.


A Tale of Two Approaches

But here's where it gets interesting. Not everyone's joining this race to the bottom. Let me share a contrast that gives me hope:


The Ethical Pioneers

Companies like Anthropic and Mistral are proving that responsible AI development isn't just possible. It's profitable. They're showing that innovation doesn't require compromising our values.


The Traditional Giants

Meanwhile, traditional tech giants are making choices that prioritise speed over safety:

  • Meta's reduction in fact-checking capabilities
  • X's expanded data collection powers
  • OpenAI has seen much of its safety team resign

What Can You Do?

After spending years in this space, I've learned that our power lies in our choices. Here's what I recommend:

Practical Steps:

  1. Audit Your Digital Footprint
    • Review which platforms have your data
    • Check privacy settings on existing accounts
    • Consider alternative platforms that align with your values
  2. Make Informed Choices
    • For chat AI: Consider switching to Claude or Mistral
    • For social media: Explore privacy-focused alternatives
    • For Meta platforms: Review and restrict data sharing

Looking Forward

The next two years are crucial: they will determine whether we get AI systems we can trust or ones that spread bias and misinformation. As someone deeply embedded in this space, I believe we're at a crossroads.

The future isn't written yet. Every time you choose where to share your data, you're casting a vote for the kind of AI future you want to see. It's why I've become more vocal about these issues. We must have these conversations now.

What are your thoughts on these changes?
How are you adapting your digital habits in response?


Brouwersgracht 1013HG Amsterdam Netherlands
Unsubscribe · Preferences