The next wave of AI is here: AI Agents



Riley Coleman

August 2023

The next wave of AI is here
AI Agents

Are you ready?

In this issue

What are AI Agents?


The new mental model you need


Review of Salesforce's AgentForce

Each tech wave has redefined work as we know it. From the impact of email and instant messaging in the early 2000s, to the 2010s' mobile revolution that put powerful computers in our pockets.

AI itself has moved through waves of its own. In 2015, it was Netflix-style recommendations. By 2020, self-driving cars were being tested on public roads. Then, in late 2022, ChatGPT exploded and made AI accessible to everyone. Now, just as we're getting used to generative AI, a new wave is building: autonomous AI agents.

Unlike previous waves of automation, which only followed rules or learned patterns, AI agents think, decide, and act with unprecedented autonomy. They don't just wait for commands or generate responses. They tackle complex workflows, make decisions, and complete multi-step tasks independently. And they're about to change everything.

The evidence is already emerging. Microsoft's Copilot and Salesforce's AgentForce are just the beginning. Early adopters like Wiley Publishing are seeing their AI agents autonomously resolve 40% more customer cases than previous systems. Over 10,000 autonomous agents have been deployed in businesses in the initial test phase.

This isn't a distant future - it's the new reality of work taking shape before our very eyes.


The Rising Tide: Understanding AI Agents

These aren't just small improvements. They mark a massive shift in how work gets done. AI agents are handling everything from complex customer service to sales. Companies like Saks are using AgentForce to enhance their luxury retail experience. This shows how AI can transform customer interactions across industries.

The technology isn't just automating tasks. It's autonomously managing workflows and making decisions that used to need human judgement.

Establishing an Agentic Mindset

Think of agentic thinking like being the director of a movie rather than just an actor following a script. An actor waits for direction. A director shapes the whole production. They make all the decisions and take responsibility for the outcome. This is exactly how we need to think about AI agents.

Using traditional AI tools is like being an actor reading lines. You input prompts and wait for responses. But with AI agents, you're the director. You set the vision, establish boundaries, and guide the performance, allowing for action within those boundaries.

The Director's Mindset in Practice

Just as a director needs to:

  • Set the overall vision for the production
  • Establish clear boundaries for each scene
  • Trust actors to interpret their roles while maintaining oversight
  • Provide feedback that shapes future performance
  • Take responsibility for the final outcome


You'll need to approach your AI agent with the same mindset:

  • Define the agent's overall purpose and scope
  • Set clear operational boundaries
  • Allow autonomy within those boundaries
  • Provide ongoing feedback and guidance
  • Take responsibility for the results

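The director's checklist above can be sketched in code. This is a minimal, hypothetical illustration in Python: the `Agent` class, its parameters, and the feedback log are my own illustrative assumptions, not any vendor's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Hypothetical agent configured with the director's mindset."""
    purpose: str                       # the overall vision
    allowed_actions: set               # clear operational boundaries
    feedback_log: list = field(default_factory=list)

    def act(self, action: str) -> str:
        # Autonomy within boundaries: the agent acts freely on anything
        # inside its allowed scope, and refuses everything outside it.
        if action not in self.allowed_actions:
            return f"refused: '{action}' is outside my boundaries"
        return f"done: {action}"

    def give_feedback(self, note: str) -> None:
        # Ongoing feedback and guidance from the "director".
        self.feedback_log.append(note)

# You, the director: set the vision and boundaries, then delegate.
agent = Agent(
    purpose="resolve routine customer-service cases",
    allowed_actions={"look_up_order", "issue_refund_under_50"},
)
print(agent.act("look_up_order"))   # inside boundaries
print(agent.act("delete_account"))  # outside boundaries, refused
agent.give_feedback("refunds handled well; escalate angry customers sooner")
```

The point of the sketch is the division of labour: the human defines purpose and boundaries once, and the agent then operates freely inside them while the human retains oversight through feedback.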

Let's take a look at one of the first enterprise-level AI agent systems: Salesforce's AgentForce. Salesforce has taken a responsible-by-design approach that deserves attention.

AgentForce: Setting New Standards for Human-Centred AI Agents

Unlike their predecessors, these digital colleagues don't just respond. They anticipate, act, and adapt. Salesforce's AgentForce leads this evolution. It sets new benchmarks for responsible AI use.

Early Results Show Promise

The numbers tell an interesting story:

  • Wiley resolved 40% more cases during its back-to-school customer service peak
  • Twice the accuracy of previous solutions (Disney's benchmark)
  • Over 10,000 agents deployed by early adopters
  • Some agents created in just three minutes

Understanding AgentForce's Core Technology

Before diving into the assessment, let's clarify three key innovations driving AgentForce:

  • Einstein Trust Layer: Think of it as your organisation's ethical compass for AI. This framework ensures all AI operations meet privacy, security, and compliance standards. It acts as a guardian of responsible AI practices.
  • Atlas Reasoning Engine: The brain behind the operation. This system lets AgentForce make contextual decisions. It learns from patterns while keeping strict ethical limits. It's what allows these agents to move from simple task execution to nuanced problem-solving.
  • Dynamic Data Grounding: This approach means agents access only the data they need, when they need it. It's like how a human employee would only access relevant files for their current task. This enhances both security and efficiency.

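Dynamic data grounding is the easiest of the three to illustrate with code. Here's a hedged sketch of the general idea, not Salesforce's implementation: the task names, field map, and `ground` function are all hypothetical. The agent receives only the fields its current task requires, the way a human employee would pull only the relevant files.

```python
# Hypothetical illustration of dynamic data grounding: the agent is
# granted only the record fields its current task needs, at call time.
CUSTOMER_RECORD = {
    "name": "A. Jansen",
    "email": "a.jansen@example.com",
    "order_history": ["#1001", "#1002"],
    "credit_card": "**** **** **** 4242",  # never needed for support tasks
}

# Task-scoped field access, defined up front by the organisation.
TASK_FIELDS = {
    "answer_order_question": {"name", "order_history"},
    "send_shipping_update": {"name", "email"},
}

def ground(task: str, record: dict) -> dict:
    """Return only the fields the current task is allowed to see."""
    allowed = TASK_FIELDS.get(task, set())
    return {k: v for k, v in record.items() if k in allowed}

print(ground("answer_order_question", CUSTOMER_RECORD))
```

Note that the credit-card field is never exposed to either task, and an unrecognised task sees nothing at all: security and efficiency come from the same mechanism.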

With these technologies in place, we assessed AgentForce against the Human-Centred Trustworthy AI Principles. This evaluation goes beyond traditional metrics. It tests how well the system balances its power with responsible use.

Our scorecard looks at eight key areas, grouped under three headings: Technical Foundation, Human Impact, and Governance & Privacy. Each area is scored out of 10, with clear paths to peak performance. Let's dive into what makes AgentForce a benchmark for responsible AI development.


The Scorecard: Human-Centred Trustworthy AI Assessment (64/80)

Technical Foundation

Safety and Security (8/10)

This principle is about guarding against threats: we must ensure the AI agent can't accidentally cause harm through its actions.

What Works:

  • Comprehensive guardrails limiting agent actions
  • Built-in toxicity detection
  • Regular security checks

Areas for Enhancement (how to get 10/10):

  • Enhanced real-time threat monitoring

Robustness and Reliability (8/10)

Just as you want your car to start reliably every morning, this principle ensures the system works well under various conditions and recovers smoothly from any hiccups.

What Works:

  • Atlas Reasoning Engine for consistent performance
  • Extensive testing across various use cases
  • Regular performance monitoring
  • Built-in error handling

Areas for Enhancement (how to get 10/10):

  • Automated stress testing
  • Self-healing capabilities
  • Predictive maintenance systems

Human Impact

Fairness and Non-discrimination (8/10)

AI agents need to make decisions without bias or prejudice based on things like gender, race, or age.

What Works:

  • Advanced bias mitigation through Trust Layer
  • Regular testing to catch and correct biases
  • Clear guidelines preventing use of sensitive variables
  • Default protective policies

Areas for Enhancement (how to get 10/10):

  • More transparent reporting on bias testing results
  • Additional third-party validation of fairness measures

Transparency and Explainability (7/10)

Just as you'd expect a colleague to explain their reasoning, AI agents should be clear about how they make decisions.


What Works:

  • Clear AI disclosure
  • Detailed audit trails
  • Topic classification for decisions
  • Integration with Prompt Builder

Areas for Enhancement (how to get 10/10):

  • Open-source key components
  • Real-time decision path visibility
  • Detailed model documentation

Beneficial Use and Social Good (7/10)

Technology should make the world better, not just more efficient. This principle checks if the AI agent benefits society and improves lives.


What Works:

  • Focus on improving workplace productivity
  • Environmental considerations in design
  • Commitment to ethical development

Areas for Enhancement (how to get 10/10):

  • Specific social impact metrics
  • Programs for non-profit applications
  • Clearer sustainability goals

Governance & Privacy

Oversight and Governance (8/10)

Think of this as having good management practices for AI. It ensures there are clear rules, responsible oversight, and proper checks and balances in place.


What Works:

  • Dedicated Office of Ethical & Humane Use
  • Customer AI Ethics Board for independent oversight
  • Clear ethical guidelines
  • Regular policy reviews
  • Stakeholder engagement

Areas for Enhancement (how to get 10/10):

  • Expanded public reporting on AI governance
  • Broader stakeholder engagement mechanisms


Privacy and Data Protection (9/10)

Just as you wouldn't want a colleague sharing your personal information around the office, AI agents need to handle sensitive data with care, keeping user data safe and using it only for its intended purpose.


What Works:

  • Einstein Trust Layer ensures zero data retention
  • No third-party access to customer data
  • Dynamic data grounding (only accessing what's needed)
  • Secure data retrieval mechanisms

Areas for Enhancement (how to get 10/10):

  • End-to-end encryption for all interactions
  • More granular user controls

Compliance with Laws and Regulations (9/10)

Just as restaurants must follow food safety rules, AI systems must comply with laws and regulations. This ensures they operate safely and ethically.


What Works:

  • Strong integration with existing Salesforce compliance frameworks
  • Regular audits and monitoring of regulatory requirements
  • Clear documentation of compliance measures
  • Proactive approach to emerging regulations
  • Integration with existing metadata, permissions, and sharing models

Areas for Enhancement (how to get 10/10):

  • Expansion of proactive regulatory monitoring system

The Bigger Picture

AgentForce shows us a crucial truth about AI's future: powerful capabilities and ethical principles can coexist. There's room for improvement, but Salesforce has set a benchmark for responsible AI agent development that others should follow.

Staying Afloat: Adapting to the New Reality

How do we prepare for this transformative wave? First, let's dispel a common myth: you don't need to become an AI expert to thrive in this new landscape. What's crucial is understanding how to collaborate effectively with these digital colleagues.

Think of it like learning to drive - you don't need to be a mechanic to be a good driver. What you need is:

  • Digital literacy: Understanding the basics of how AI agents work
  • Critical thinking: Knowing when and how to deploy AI agents effectively
  • Emotional intelligence: Focusing on the human elements that machines can't replicate
  • Adaptability: Being ready to evolve as the technology develops

Looking Ahead: The Horizon

The implications of this wave are profound but not overwhelming. Early results from companies like OpenTable show that AI agents can augment human teams, letting people focus on complex, high-value work while AI handles routine tasks.

For businesses and professionals, the message is clear: this wave is both an opportunity and a threat. The key is to focus on building the foundational skills that let you collaborate effectively with AI.

If you lack confidence in using AI systems, start learning now, and quickly. The gap is widening between those who can work with AI and those who can't. Start small, but start now.


If you are interested, we still have a few places left in the cohort starting at the end of January.

AI Foundations Course


Black Friday Special:
Offering all readers a 50% discount*

*When you book & pay by 20 Dec

The future workplace isn't about human versus machine. It's about humans and machines, each playing to their strengths to achieve more than either could alone. The wave is rising - and with the right approach, we can all learn to surf it successfully.


How can this newsletter be of most value to you?

I want to ensure this newsletter is really valuable for you every week. To do that, I'd love your input on what you'd like to see included.


Until next time... Take it easy.

Riley

Share this issue with your friends

Brouwersgracht 1013HG Amsterdam Netherlands
Unsubscribe · Preferences