Nov 25, 2025 | 4 min

When an AI Agent Logs In: A Zero Trust Story Every CISO Is Now Living

The moment it happens, you feel it.

Somewhere in your environment, an AI agent just authenticated into a real system, not a sandbox, not a demo tenant, but a production asset your business relies on. It pulled data. It made a decision. It executed an action. And it did all of this without a human touching a keyboard.

That’s the moment AI stops being a novelty project and becomes something far more consequential: a new kind of identity operating inside your organization.

This is a shift that many CISOs are experiencing right now and was the backbone of our recent webinar, Zero Trust for Autonomous Agents: Extending Identity-First Access Control, featuring Token Security CTO and Co-Founder Ido Shlomo and Numberline Security Founder Jason Garbis.

Zero Trust Meets Its Most Unpredictable User Yet

Jason began with the familiar evolution of Zero Trust: a move away from perimeter-based trust toward a strategy grounded in identity, least privilege, and continuous verification. It worked first for humans, then (with effort) for machines.

But AI agents don’t fit either category.

They behave like humans: reasoning, chaining tools, making decisions. Yet they operate with the speed and scalability of machines. And because many agents are deployed quickly to enable business productivity and efficiency, they often operate with:

  • Shared service accounts
  • Long-lived API tokens
  • Credentials scattered across clouds, apps, and data stores

It’s productivity on the surface and exposure underneath.

The Identity With No Name

A core challenge emerged during the discussion: most AI agents have no formal identity of their own.

If you ask a CISO today, “Which agent performed this action?” they often can’t answer. The logs show a backend token or service account, but not the specific agent, nor which instance of it.

Without defined identities and processes for AI agents:

  • Least privilege becomes impossible to define
  • Behavior is hard to baseline
  • Auditing becomes guesswork
  • Decommissioning agents leaves orphaned access behind

It’s the worst kind of risk: invisible and fast-moving.

Ido added that organizations must treat agents the way we eventually learned to treat humans and machines: with identity stacks, governance models, and lifecycle controls. Without that, Zero Trust falls apart before it even starts.
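To make the audit gap concrete, here is a minimal sketch (all names are illustrative, not a Token Security API) of what "a defined identity per agent" can mean in practice: each logical agent and each running instance gets its own identifier and an accountable owner, and every action is logged against that identity rather than a shared service account.

```python
import uuid
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    """Illustrative identity record for one AI agent."""
    agent_name: str   # logical agent, e.g. "invoice-bot"
    owner: str        # accountable human team or person
    # each running instance gets its own ID, so logs can say *which* copy acted
    instance_id: str = field(default_factory=lambda: str(uuid.uuid4()))

def audit_log(identity: AgentIdentity, action: str, resource: str) -> str:
    """Emit a structured audit record naming the specific agent instance."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": identity.agent_name,
        "instance": identity.instance_id,
        "owner": identity.owner,
        "action": action,
        "resource": resource,
    }
    return json.dumps(record)

bot = AgentIdentity(agent_name="invoice-bot", owner="finance-eng")
print(audit_log(bot, "read", "erp://invoices/2025-11"))
```

With records like this, "which agent performed this action?" becomes a log query instead of guesswork, and decommissioning means revoking one identity rather than hunting for orphaned credentials.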

AI Needs a Playground, Not a Prison

Zero Trust says “never trust, always verify,” but applying this principle to agents creates a contradiction. Lock an agent down too tightly and it loses its value: no ability to reason across systems or pursue complex workflows. Give it too much access and you’ve created the perfect overprivileged insider operating at machine speed.

Jason and Ido landed on a compelling middle ground: agents don’t need rigid cages; they need well-defined playgrounds.

A playground has:

  • Clear boundaries
  • Freedom inside those boundaries
  • Identity-driven alarms when an agent steps outside its domain

This shifts the question from “How do we stop the agent from acting?” to “How do we define where the agent is allowed to act, and ensure it stays there?”

It’s still Zero Trust, but adapted for autonomy.
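The playground idea can be sketched in a few lines (a hypothetical illustration, with made-up agent and action names): the agent gets freedom inside an explicit allowlist, and any step outside raises an alarm tied to its identity instead of failing silently.

```python
# Illustrative playground policy: each agent has an explicit set of
# allowed actions; everything inside is permitted freely, everything
# outside triggers an identity-attributed alert.
ALLOWED: dict[str, set[str]] = {
    "support-agent": {"crm:read", "tickets:write"},
}

def check_boundary(agent: str, action: str) -> bool:
    """Return True if the action is inside the agent's playground."""
    allowed = ALLOWED.get(agent, set())  # unknown agents get no access
    if action in allowed:
        return True
    # identity-driven alarm: we know exactly which agent stepped outside
    print(f"ALERT: agent '{agent}' attempted out-of-bounds action '{action}'")
    return False
```

The point is not the allowlist itself but where the question moves: policy defines where the agent may act, and enforcement only has to notice when it leaves.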

Delegation and the Identity Problem

One of the more nuanced challenges emerges when humans delegate their access to agents.

Picture a customer talking to an AI agent at a bank. The agent acts using:

  • Its own backend identity
  • The customer’s identity and context
  • System-level service accounts on the backend

When it updates the customer’s address or retrieves account data, who actually performed the action? And which identity should be audited?

Without a clear agent identity model, CISOs are stuck sorting through identity blending and opaque logs. This is where Ido sees a new need: agent-specific identity governance, complete with intent definitions, behavior baselines, and safe decommissioning workflows.
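One existing pattern for recording that delegation chain is the “act” (actor) claim from OAuth 2.0 Token Exchange (RFC 8693): the token names the customer as the subject and nests the agent, and the service account beneath it, as acting parties. A minimal sketch of such a claims structure (identifiers are illustrative):

```python
def delegated_claims(customer: str, agent: str, backend_account: str) -> dict:
    """Build an RFC 8693-style claims dict recording who acted for whom."""
    return {
        "sub": customer,          # the human the action is performed for
        "act": {                  # the agent acting on the customer's behalf
            "sub": agent,
            "act": {"sub": backend_account},  # the service account beneath it
        },
        "scope": "account:update_address",
    }

claims = delegated_claims("customer-123", "support-agent-7", "svc-core-banking")
```

An audit log that stores claims like these can separate “on whose behalf” from “who actually acted,” which is exactly the distinction the identity-blending problem erases.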

When Secrets Strike Back

The conversation ended on a topic most CISOs hoped was behind them: long-lived credentials.

We’ve spent years moving humans toward passwordless, yet agents today often rely on static API keys, which are easy to leak, hard to rotate, and frequently over-scoped.

If agents are going to operate safely and securely, organizations must adopt:

  • Short-lived tokens
  • Automated rotation
  • Vault-based access
  • Dynamic scoping tied to agent intent

Otherwise the attack surface grows quietly, beneath the business benefits of AI automation.
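The four practices above can be sketched together in a toy token broker (a hypothetical illustration; the in-memory dict stands in for a real vault, and all names are invented): tokens are minted for a single declared intent, expire quickly, and are checked against that intent on every use.

```python
import secrets
import time

# In-memory stand-in for a vault or token broker (illustrative only).
TOKENS: dict[str, dict] = {}

def mint_token(agent: str, intent: str, ttl_seconds: int = 300) -> str:
    """Issue a short-lived token scoped to one declared intent."""
    token = secrets.token_urlsafe(16)
    TOKENS[token] = {
        "agent": agent,
        "intent": intent,
        "expires": time.time() + ttl_seconds,  # short-lived by default
    }
    return token

def verify(token: str, intent: str) -> bool:
    """Accept the token only if it is live and matches the declared intent."""
    meta = TOKENS.get(token)
    if meta is None or time.time() > meta["expires"]:
        return False  # unknown or expired: forces re-issuance, i.e. rotation
    return meta["intent"] == intent  # dynamic scoping tied to agent intent
```

Because every credential is short-lived and intent-bound, a leaked token buys an attacker minutes of narrowly scoped access instead of indefinite, over-scoped access.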

The Path Forward

In the webinar, Ido and Jason made it clear: Zero Trust does apply to autonomous agents, but only if we rebuild the identity model beneath them.

Agents need owners.
They need boundaries.
They need traceability.
They need modern secret management.
And they need continuous verification based on identity, not assumptions.

For CISOs, this isn’t a theoretical future. It’s already happening.

Watch the Full Conversation

If your teams are experimenting with AI agents or racing to adopt them, the full webinar goes even deeper into practical recommendations for securing them without slowing innovation.

Watch: Zero Trust for Autonomous Agents: Extending Identity-First Access Control
View the on-demand webinar: https://www.token.security/assets/webinar-zero-trust-for-autonomous-agents-extending-identity-first-access-control

Because the moment an AI agent logs in, you’re no longer securing just systems or humans; you’re securing a new identity type. Zero Trust needs to evolve with it.

To learn more about how Token Security helps organizations with AI agent identity security, request a demo today.
