Giving AI Agents an Identity — and a Leash | A Brand Spotlight at RSAC Conference 2026 with Itamar Apelblat and Ido Shlomo of Token Security

At RSAC Conference 2026, the loudest buzz was not always around the biggest names. Token Security, a two-and-a-half-year-old company with a sharp focus on AI agent identity security, drew crowds its ten-person team could barely manage -- and not because of flashy swag. People wanted to understand how to govern something they had already unleashed: AI agents operating inside their organizations with few guardrails and even less visibility.
Token Security co-founders Itamar Apelblat, CEO, and Ido Shlomo, CTO, sat down with Sean Martin and Marco Ciappelli of ITSPmagazine for a Brand Spotlight at the show. The conversation covered why AI agent identity is fundamentally different from human identity, what intent-based access management actually means in practice, and why the company became a finalist at the RSAC Conference Innovation Sandbox.
<iframe width="560" height="315" src="https://www.youtube.com/embed/uWjCQC3LnaY?si=FDtdTnuiMwQh0nCi" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
What makes AI agent identity different from every other identity problem?
The answer, according to Apelblat, starts with the nature of the agent itself. AI agents are non-deterministic and goal-oriented -- they will pursue their objective through whatever path is available to them. That makes traditional identity approaches, which look at historical behavior and assign permissions based on past patterns, fundamentally unsuited to the problem.
Token Security's response is what it calls intent-based access management: define what the agent is supposed to do, understand its purpose, and then construct access restrictions around that intent -- not around what it has done before. As Apelblat puts it, agents are "just another type of workforce that we need to protect," but they require a different kind of thinking.
Shlomo takes it further. An AI agent, he explains, is like an employee whose entire life's purpose is to satisfy a directive. If you tell it to go to the bar, it will fake an ID to get in. It will delete data or modify infrastructure if those actions are on the path to its goal. That combination of relentless goal-pursuit and session-by-session memory loss -- it forgets every previous interaction, including past attacks against it -- makes agents uniquely manipulable by adversaries and uniquely risky if left ungoverned.
How does Token Security actually govern AI agents in production?
The typical customer, Apelblat says, does not arrive at Token Security before deploying agents. They arrive after. The conversation usually begins with: "I think I already deployed some -- can you help me understand where they are and what's going on with them?" CISOs do not want to be the department of No, but the pace of AI adoption has outrun their ability to track it.
Token Security's platform starts with visibility: discover what agents exist, understand how they are being used, identify who is accountable for each one, and map the full identity lifecycle. Only after that foundation is in place does the platform move toward policy enforcement and restrictions. The goal, Apelblat emphasizes, is a seamless experience for the builders -- not friction, but guardrails that move with them.
The architectural approach is deliberately non-intrusive. Token Security connects to the agent platforms where agents are running and to the business applications those agents are trying to reach. It creates enforcement at both ends without sitting as a broker in the middle. Shlomo draws a direct parallel to Zero Trust: allow what should be allowed, at the right time, with the right action, and nothing more.
The micro-agent model is central to this. Rather than one powerful agent doing everything -- a configuration Apelblat compares to a catastrophic point of failure -- the architecture should mirror microservices: narrow-purpose agents with clearly defined roles, specific permissions, and hard limits on what they can reach. Define the goal first, then build the restriction around it.
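As a rough illustration of the idea -- a hypothetical sketch, not Token Security's actual product or API -- an intent-scoped policy might declare each micro-agent's goal up front, along with the narrow set of actions and resources that serve it, and deny everything else by default:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIntent:
    """Declared purpose of a micro-agent, defined before deployment.

    All names here (AgentIntent, authorize, the example agent) are
    illustrative assumptions, not part of any real product.
    """
    name: str
    goal: str
    allowed_actions: frozenset   # the only actions that serve the goal
    allowed_resources: frozenset # the only systems it may reach

def authorize(intent: AgentIntent, action: str, resource: str) -> bool:
    """Deny-by-default: permit only action/resource pairs inside the intent."""
    return action in intent.allowed_actions and resource in intent.allowed_resources

# A narrow-purpose agent: it summarizes support tickets, nothing else.
ticket_agent = AgentIntent(
    name="ticket-summarizer",
    goal="Summarize new support tickets for the on-call channel",
    allowed_actions=frozenset({"read", "post_summary"}),
    allowed_resources=frozenset({"ticketing_api", "chat_channel"}),
)

print(authorize(ticket_agent, "read", "ticketing_api"))    # within the declared intent
print(authorize(ticket_agent, "delete", "ticketing_api"))  # outside it, so denied
```

The point of the sketch is the ordering: the goal and its permissions are written down before the agent runs, so an off-path action (here, `delete`) is refused regardless of what the agent's reasoning produces.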
Why AI agent identity security is the next Zero Trust
Shlomo is direct about the longer arc. If identity for AI agents is solved the way identity for humans was eventually solved, the payoff is enormous. An agent with properly scoped identity is not just a chatbot -- it is a member of a digital workforce capable of executing tasks at a scale no individual human could match. The upside of getting this right, he argues, is every bit as large as the risk of getting it wrong.
The recognition at RSAC Conference Innovation Sandbox reflects that the market is beginning to arrive at the same conclusion. Watch the full Brand Spotlight and connect with Itamar Apelblat and Ido Shlomo on LinkedIn to continue the conversation.