May 05, 2026 | 5 min

CISA Releases Guidance to Help Organizations Secure Agentic AI. The Need to Rethink Your Defenses Is Urgent

Last week, CISA, the NSA, and international cybersecurity agencies from Australia, Canada, New Zealand, and the UK published joint guidance on agentic AI security. Buried inside the careful government language is something remarkable: an implicit acknowledgment that the security model most organizations are currently relying on for AI was built for a different problem.

The framework era is over.

The agencies are direct about the limitation. Existing frameworks like OWASP's LLM Top 10 and MITRE ATLAS were built for large language model vulnerabilities and platform misuse. They were not built for agents that autonomously plan tasks, call APIs, modify files, escalate privileges, and take actions across interconnected systems, often without a human in the loop.

This is the gap most organizations don't see until it's too late. They deployed AI security controls designed for chatbots and applied them to agents. Those controls filter prompts, set behavioral guardrails, and flag anomalies after the fact. None of that is sufficient when an agent has already executed twelve API calls, modified access controls, and deleted the audit trail before anyone noticed.

What the guidance actually says.

Strip away the framework language and five risk categories, and the CISA guidance makes one underlying argument. The primary security failure mode for agentic AI is unconstrained identity and access. This leads to critical systemic risk across an organization’s AI infrastructure, including:

  • Privilege risk: Agents with too much access
  • Design and configuration risk: Poor boundaries between what agents can and cannot touch
  • Behavioral risk: Agents pursuing goals in ways designers never intended
  • Structural risk: Cascading failures across interconnected agent networks
  • Accountability risk: Decisions made through processes no one can inspect

At its root, every one of those risks is an identity and intent problem. Who is the agent? What is it allowed to do? Is what it's actually doing consistent with its stated purpose? When did that change?

The agencies recommend that each agent carry a verified, cryptographically secured identity that includes short-lived credentials, encrypted communication, and continuous runtime authentication. That is not a guardrail. That is an identity control plane.

Where we'd go further.

The guidance recommends integrating agentic AI security into existing frameworks: zero trust, defense-in-depth, and least privilege. We agree with the direction. But we'd make the structure explicit.

Zero trust is a security approach. Identity is the mechanism that makes it enforceable. When a human employee requests access to a sensitive system, you verify their identity, check their role, and decide whether the action is authorized. When an AI agent does the same thing thousands of times per hour, the security model should be no different.

The question is not "did the agent behave strangely?" The question is "does this action, from this agent, in this context, reflect scoped intent?" That question can only be answered at the identity layer. Guardrails tell you something went wrong. Identity-based controls prevent it from happening.
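To make the distinction concrete, here is a minimal sketch of an identity-layer check that answers "does this action, from this agent, in this context, reflect scoped intent?" before anything executes. All names and the data model are hypothetical.

```python
# Illustrative only: deny-by-default authorization at the identity layer,
# evaluated before the agent acts, not after an anomaly is flagged.
from dataclasses import dataclass

@dataclass
class AgentIdentity:
    agent_id: str
    allowed_actions: set[str]    # least-privilege action scope
    allowed_resources: set[str]  # boundaries on what the agent may touch

def authorize(identity: AgentIdentity, action: str, resource: str) -> bool:
    """Allow only actions that fall inside the agent's scoped intent."""
    return action in identity.allowed_actions and resource in identity.allowed_resources

support_bot = AgentIdentity(
    agent_id="support-bot",
    allowed_actions={"ticket:read", "ticket:comment"},
    allowed_resources={"ticketing-system"},
)

authorize(support_bot, "ticket:read", "ticketing-system")  # True
authorize(support_bot, "acl:modify", "ticketing-system")   # False: outside scoped intent
```

A guardrail would log the `acl:modify` attempt after the fact; the identity-based control simply never permits it.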

What this means for your organization.

As you're deploying agentic AI, the CISA guidance gives you a practical starting point:

  • Audit what your agents can access
  • Establish verified identities
  • Enforce least-privilege at the agent level, not just the system level
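The first step above, auditing what agents can access, can start as simply as diffing each agent's actual grants against its declared purpose. This is a hypothetical sketch; the data shapes and names are assumptions.

```python
# Hypothetical audit: flag permissions an agent holds beyond its declared purpose.
agent_grants = {
    "report-writer": {"reports:read", "reports:write", "users:admin"},  # over-privileged
    "log-summarizer": {"logs:read"},
}
declared_purpose = {
    "report-writer": {"reports:read", "reports:write"},
    "log-summarizer": {"logs:read"},
}

def audit_excess(grants: dict[str, set], purpose: dict[str, set]) -> dict[str, set]:
    """Return, per agent, the permissions that exceed its declared purpose."""
    return {
        agent: perms - purpose.get(agent, set())
        for agent, perms in grants.items()
        if perms - purpose.get(agent, set())
    }

audit_excess(agent_grants, declared_purpose)  # {'report-writer': {'users:admin'}}
```

Anything the audit surfaces is a candidate for revocation before an agent, or an attacker driving one, ever exercises it.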

But the deeper question is whether the security tools you are using today were designed for agents or retrofitted for them. Prompt filtering and behavioral monitoring were built for a world where AI responded. Agentic AI acts. The security model has to act first.

Six government agencies from around the world just told you that identity management is the most specific, technically actionable recommendation they have for securing AI agents. Token Security is delivering an identity-first AI security solution that solves this challenge for enterprise organizations. The category is real. The urgency is here. 

To learn more about Token Security, request a demo today and we'll show you how to get control of your agentic AI security challenges.
