Autonomous, But Not Controlled: The Illusion of AI Agent Governance

Over the last year, something subtle, but significant, has happened inside enterprise environments. AI agents have quietly moved from experimentation to production.
They are no longer confined to innovation labs or proof-of-concept workflows. They are writing code, moving data, orchestrating processes, and interacting with critical systems. In many organizations, they are already embedded in the day-to-day operations of the business.
And with this shift, a new assumption has taken hold: that these agents are understood, visible, and under control. It’s a reasonable belief. After all, enterprises have spent decades building governance frameworks for users, applications, and infrastructure. It’s natural to assume those same controls extend to AI.
But when we partnered with the Cloud Security Alliance to examine how organizations are actually managing AI agents, we found something very different.
The perception of control is there. The reality is not.
The Confidence Gap No One Is Talking About
The CSA surveyed 418 IT and security professionals for our new report, Autonomous but Not Controlled: AI Agent Incidents Now Common in Enterprises. The results confirmed what we had suspected.
While 68% of organizations believe they have strong visibility into the AI agents running across their environments, the next result tells a different story. In the same survey, 82% admitted they had discovered at least one AI agent or autonomous workflow created entirely without the knowledge of their security, IT, or governance teams. And for 41% of respondents, this wasn't a one-time surprise. It happened multiple times.
That's not visibility. That's a confidence gap with real consequences.
Shadow AI agents are proliferating in exactly the places you'd expect: internal automation and scripting environments (51%), LLM platforms including custom tools and plugins (47%), SaaS tools with built-in automation (40%), and developer-created workflows (40%). These environments are built for speed. They're designed for decentralized experimentation. And they're producing a new category of ungoverned identity that no legacy IAM tool was ever designed to handle.
The uncomfortable truth is that the places where enterprises are most actively deploying AI agents are the same places where shadow agents are quietly multiplying. Legitimate adoption and uncontrolled sprawl are happening in the same infrastructure, in parallel, often invisibly.
AI Agents Don't Retire. They Accumulate.
Even when an enterprise does know about an AI agent and even when it was properly provisioned, documented, and reviewed, there's a second, slower problem building beneath the surface. Call it AI agent retirement debt.
The survey found that while 68% of organizations conduct periodic permission reviews and 52% have defined creation or onboarding processes, only 21% have any formal decommissioning process in place. Just 19% express high confidence that agents are fully retired when they're no longer needed.
Think about what this means in practice. An agent is created for a specific project. The project ends. The team moves on. But the agent doesn't. It continues to exist, somewhere in your environment, holding credentials, permissions, and API keys that no one is monitoring and no one is revoking.
Unlike provisioning risk, which is visible at creation, retirement risk accumulates quietly. It compounds over time. An agent that was low-risk in January may be sitting on production database access in December, long after the human who created it has forgotten it exists or has left the organization. Multiply this across hundreds or thousands of agents and you have a structural exposure that isn't showing up in your dashboards.
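To make that quiet accumulation concrete, here is a minimal sketch of the kind of stale-credential sweep a team might run. It assumes AWS IAM access keys as the credential store and an arbitrary 90-day inactivity threshold; it illustrates the idea of retirement debt, not any particular product's implementation.

```python
# Illustrative sketch: sweep AWS IAM for access keys that have gone unused
# past a threshold. The 90-day window is an arbitrary assumption; adjust it
# to your own retirement policy.
from datetime import datetime, timedelta, timezone

import boto3

STALE_AFTER = timedelta(days=90)

iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            usage = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
            last_used = usage["AccessKeyLastUsed"].get("LastUsedDate")
            if last_used is None or now - last_used > STALE_AFTER:
                # Never used, or idle past the threshold: a retirement candidate.
                print(f"Stale credential: {user['UserName']} / {key['AccessKeyId']}")
```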
Security failures rarely happen at the moment of creation. They happen over time. And AI agents compress that timeline dramatically.
Two-Thirds of Enterprises Have Already Felt the Impact
If the visibility gap and retirement debt feel abstract, here's what brings them into sharp focus: 65% of enterprises experienced a security incident involving an AI agent or autonomous workflow in the past 12 months. Not a near miss. Not a theoretical risk. An actual incident.
And even more critically, not a single respondent reported zero material business impact. Every organization that experienced an incident felt it somewhere real:
- 61% reported data exposure or mishandling of sensitive data
- 43% experienced disruption to business operations
- 35% incurred direct or indirect financial costs
- 41% dealt with incorrect or unintended actions inside business processes
- 31% faced delays to customer-facing or internal services
These are not numbers that belong in a threat advisory buried in the IT organization. These are numbers that belong in a board presentation.
Data exposure at 61% is a regulatory and reputational event. Operational disruption at 43% is a resilience and continuity event. Financial losses at 35% are a direct hit to the business. When autonomous systems operating without proper identity controls go wrong, they don't fail quietly; they cascade into much bigger problems.
And yet, despite this incident rate, most organizations are still monitoring AI agents only periodically rather than continuously. Only 16% have implemented real-time, continuous monitoring. The rest are relying on point-in-time oversight while agents operate at machine speed.
The gap between detection and impact is widening, not closing.
Why This Deserves Board-Level Attention
There's a pattern in enterprise security where a risk is understood at the technical level for years before it reaches the boardroom. By then, the exposure is structural and the remediation is expensive. AI agent security is following that same trajectory, but faster.
AI agents today write code, execute transactions, access sensitive data, call APIs, interact with customers, and make decisions without human review. They are not just a productivity enhancement layered on top of existing systems. They are autonomous actors operating with real identities such as cloud roles, API tokens, OAuth grants, service accounts, and secrets. And as the CSA survey makes clear, most enterprises do not have the controls in place to govern those identities across the full lifecycle.
When one in three enterprises is reporting financial losses tied to AI agent incidents, and when six in ten are experiencing data exposure, this is no longer a question of technical hygiene. It is a question of enterprise risk and regulatory compliance. It belongs alongside ransomware, third-party risk, and cloud security on the board agenda, not because it might become a problem, but because it already is one.
The Path Forward: Identity Is the Only Control Plane That Works
The instinct in much of the industry has been to respond to AI agent risk with guardrails: prompt filtering, output constraints, and behavior monitoring. These approaches operate at the wrong layer. They attempt to constrain what an agent does after access has already been granted. But once an AI agent has credentials and connectivity, a single misstep can trigger cascading failures across every system it touches.
You cannot prompt-engineer your way out of identity risk.
The only control plane that works is the one layer that spans every system an agent touches: identity. Who owns this agent? What is its intent? What can it access? Under what conditions? For how long? With what privileges? These are the questions that matter. And they are exactly the questions that traditional IAM programs were never designed to answer for non-human identities operating at machine speed.
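Those lifecycle questions can be made concrete in a few lines. The sketch below models an agent identity with an owner, a declared intent, scoped permissions, and an expiry; every field name is hypothetical, chosen only to illustrate what intent-scoped, time-bound access looks like, not how any real policy engine represents it.

```python
# Minimal sketch of an intent-scoped, time-bound agent identity record.
# All field names are hypothetical illustrations.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentIdentity:
    agent_id: str
    owner: str                          # the accountable human or team
    intent: str                         # the agent's declared purpose
    allowed_scopes: set[str] = field(default_factory=set)
    expires_at: datetime | None = None  # access is time-bound by default


def is_access_allowed(agent: AgentIdentity, scope: str, now: datetime) -> bool:
    """Allow access only if the identity is unexpired and the scope matches."""
    if agent.expires_at is not None and now >= agent.expires_at:
        return False
    return scope in agent.allowed_scopes


agent = AgentIdentity(
    agent_id="ticket-summarizer-01",
    owner="support-engineering",
    intent="summarize inbound support tickets",
    allowed_scopes={"tickets:read"},
    expires_at=datetime(2026, 6, 30, tzinfo=timezone.utc),
)

now = datetime.now(timezone.utc)
print(is_access_allowed(agent, "tickets:read", now))     # True, until expiry
print(is_access_allowed(agent, "customers:write", now))  # False: outside intent
```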
This is the problem Token Security was built to solve.
Token Security provides enterprises with the platform to secure and govern AI agents the way they were always meant to be governed: as first-class identities. Our approach covers the full lifecycle:
Discover. You cannot secure what you cannot see. Token automatically discovers and inventories every AI agent, custom GPT, coding agent, MCP server, and non-human identity operating across your cloud environments, SaaS platforms, internal tools, and custom frameworks, including the shadow agents your teams don't know about.
Understand. Discovery alone isn't enough. Token analyzes identity, behavioral logs, and telemetry data to give security teams contextual awareness: what is this agent's intent, who owns it, and what can it access? By understanding agent intent, we enable least-privilege enforcement that is scoped to purpose, not inherited from the human who created it.
Enforce. Token enables security and identity teams to move from ad-hoc responses to formal governance based on intent-aware access policies. We enforce ownership and accountability, continuously right-size permissions, and log every agent action for compliance evidence and forensic investigation. When an agent is no longer needed or drifts out of its policy, Token can automatically remediate the issues and mitigate risk.
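At the primitive level, automated remediation often reduces to revoking or disabling a credential as soon as an agent drifts out of policy. As a hedged illustration, again assuming AWS IAM access keys, disabling a key is a single call; the user and key names below are placeholders, and this is not a description of Token's internals.

```python
# Illustrative remediation primitive: disable a drifted agent's access key
# rather than deleting it, which preserves the audit trail.
import boto3

iam = boto3.client("iam")
iam.update_access_key(
    UserName="ticket-summarizer-01",   # hypothetical agent identity
    AccessKeyId="AKIAEXAMPLEKEYID00",  # placeholder key id
    Status="Inactive",
)
```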
The CSA survey found that 79% of organizations see context-aware, intent-based controls as important or very important over the next two years. The direction is clear. The urgency is real. The incidents have already started.
The Moment to Act Is Now
AI agent adoption is not slowing down. It is rapidly accelerating. The same capabilities that make them transformative, such as autonomy, speed, connectivity, and adaptability, are exactly what make them dangerous without proper governance.
The enterprises that win in the years ahead will not be those that slow AI adoption out of fear. They will be those that accelerate innovation with confidence, because they have the controls in place to know what every agent is, what it can do, what it's currently doing, and when it's time to retire it.
Security should enable AI innovation, not constrain it. But enabling innovation requires control. And control at scale requires identity.
AI agents are scaling. Governance must scale with them.
To read the full findings from the CSA survey, download Autonomous but Not Controlled: AI Agent Incidents Now Common in Enterprises at https://cloudsecurityalliance.org/artifacts/autonomous-but-not-controlled-ai-agent-incidents-now-common-in-enterprises