Securing Agentic AI: Defining Permissions for Unpredictable AI Agents
A recap of Token Security’s live webcast with Itamar Apelblat and Ty Sbano
Token Security CEO and co-founder Itamar Apelblat recently sat down with Ty Sbano, Chief Information Security Officer (CISO) at Webflow, for a live webcast discussion on one of the most pressing challenges in enterprise security today: how to secure AI agents.
The session, “Securing Agentic AI: Defining Permissions for Unpredictable AI Agents,” explored how AI-driven autonomy is transforming identity security, why traditional access models are breaking down, and what organizations can do right now to build trust and guardrails around this new class of identities.
The Rise of Agentic AI and Its Security Implications
Itamar opened the discussion with a simple but profound question: What is an AI agent and how is its identity different from that of a human or a workload?
Unlike deterministic machine identities or role-defined human users, AI agents combine human-like creativity and autonomy with machine-like scale and continuity. They can act independently, interact across systems, and evolve their behavior based on context. That makes them both powerful and unpredictable, and a serious new security challenge.
“Agent identities are a hybrid,” Itamar explained. “They have the creativity of humans but the continuous action of machines. That’s why we need to address them as a new class of identities altogether.”
Moving from Action-Based to Intent-Based Permissions
Ty emphasized that traditional identity and access management (IAM) frameworks, which were built for humans and deterministic services, are no longer sufficient.
“We’ve built identity on a transactional model,” he said. “Itamar did this. He logged in. He did that. But in the agentic world, we have to start thinking about motivation, intent, and context, not just actions.”
In other words, access decisions need to evolve from being strictly rule-based (“Can this identity perform this action?”) to intent-aware (“Should this agent be doing this, and why?”). This shift demands a blend of fine-grained permissions, behavioral analytics, and contextual reasoning, extending Zero Trust principles into AI environments.
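To make the shift concrete, here is a minimal sketch of what layering an intent check on top of a classic rule check might look like. All names (the roles, actions, and intent tables) are hypothetical illustrations, not anything from the webcast or a real product:

```python
from dataclasses import dataclass, field

@dataclass
class AccessRequest:
    agent_id: str
    action: str
    resource: str
    stated_intent: str  # the task the agent claims to be working on
    context: dict = field(default_factory=dict)  # session metadata, origin, etc.

# Hypothetical rule table: which actions each agent role may ever perform.
ALLOWED_ACTIONS = {
    "report-builder": {"read:analytics", "read:crm"},
}

# Hypothetical intent policy: which resources a given intent justifies touching.
INTENT_SCOPES = {
    "generate-quarterly-report": {"analytics", "crm"},
}

def authorize(role: str, req: AccessRequest) -> bool:
    """Two-stage check: classic action rule first, then intent reasoning."""
    # Stage 1: rule-based -- can this identity ever perform this action?
    if req.action not in ALLOWED_ACTIONS.get(role, set()):
        return False
    # Stage 2: intent-aware -- should it, given what it says it is doing?
    permitted = INTENT_SCOPES.get(req.stated_intent, set())
    return req.resource in permitted

req = AccessRequest(
    agent_id="agent-42",
    action="read:analytics",
    resource="analytics",
    stated_intent="generate-quarterly-report",
)
print(authorize("report-builder", req))  # True: action allowed and intent matches
```

In a real deployment, stage 2 would draw on behavioral analytics and contextual signals rather than a static lookup table, but the layering idea is the same: the rule answers “can it,” the intent layer answers “should it.”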
When Identities Multiply: Accountability and Lifecycle Management
As organizations experiment with AI, they’re spawning countless new identities—agents running workflows, copilots assisting developers, and scripts acting on behalf of users.
Itamar noted that visibility and lifecycle management have quickly become critical pain points:
“We see a lot of agents that were created, tested, and then forgotten. They still have access to systems, but no one knows who owns them or what they’re doing.”
Ty agreed, warning that accountability and ownership must evolve alongside technology. “We’re all becoming managers of agents,” he said. “We need to understand who’s responsible for what those agents do and how we govern that responsibly.”
Both Itamar and Ty highlighted the need for clearer frameworks around agent onboarding, offboarding, and access scoping, especially as AI experiments proliferate across business units.
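The “created, tested, and then forgotten” problem above is, at its core, an inventory question. A minimal sketch of the kind of hygiene check an organization might run over an agent registry follows; the registry entries and field names are assumptions for illustration:

```python
from datetime import datetime, timedelta

# Hypothetical agent registry: every agent recorded with an owner and activity timestamp.
AGENTS = [
    {"id": "copilot-dev", "owner": "alice", "last_active": datetime(2025, 6, 1)},
    {"id": "old-poc", "owner": None, "last_active": datetime(2024, 1, 15)},
]

def stale_or_orphaned(agents, now, max_idle=timedelta(days=90)):
    """Flag agents with no owner or no recent activity for offboarding review."""
    return [
        a["id"] for a in agents
        if a["owner"] is None or now - a["last_active"] > max_idle
    ]

print(stale_or_orphaned(AGENTS, datetime(2025, 6, 10)))  # ['old-poc']
```

Even a simple sweep like this surfaces the forgotten experiments Itamar described, so access can be scoped down or revoked before it becomes a liability.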
Security’s Balancing Act: Innovation vs Control
One of the strongest themes in the discussion was balance. Security leaders are under pressure to both enable innovation and mitigate new risks.
Ty described his own experience at companies like Vercel and Webflow, where the rush to adopt AI created tension between experimentation and safety. “It was ‘AI or die,’” he said. “But we had to ask: what are the secrets we’re protecting, and where’s our comfort level with data exposure?”
His advice to other CISOs: don’t block AI adoption, but set clear boundaries and expectations. Provide guardrails, not gates. “If you stop your teams from innovating,” he cautioned, “they’ll just do it elsewhere without visibility.”
Compliance, Context, and the Future of Identity
The conversation also touched on compliance and governance frameworks like ISO 42001, the new standard for responsible AI. While both agreed compliance is valuable, Ty noted that “great security doesn’t always mean compliance.”
Instead, organizations should focus on contextual enrichment, integrating data from IAM, SaaS platforms, and monitoring systems to build a complete picture of how agents operate and why.
Itamar added that intent-based permissioning will increasingly rely on these integrations: “It’s a shared responsibility between enterprises and SaaS providers. We need to connect authentication, authorization, and intent signals to make better decisions.”
Key Takeaways
- AI agents are a new class of identity: autonomous, goal-oriented, and unpredictable
- Intent-based permissioning is the next evolution of IAM, blending context and motivation with access control
- Visibility and lifecycle management are urgent priorities. Organizations must know which agents exist, who owns them, and what data they touch
- Accountability and ownership frameworks need to evolve as every employee becomes a manager of AI agents
- Security must enable innovation, not block it, by providing guidance and guardrails for safe experimentation
Watch the Full Discussion
This conversation only scratches the surface of securing the next generation of digital identities.
Watch the full recorded webcast on BrightTalk: Securing Agentic AI: Defining Permissions for Unpredictable AI Agents