A Year of Protecting Claude Code: The Identity Problem No One Was Ready For

Almost 80% of the Fortune 100 have already deployed enterprise AI. No longer experimental tools, AI agents are now operational infrastructure. In under three years, Anthropic’s run-rate revenue has reached $14 billion, growing more than 10x annually over that period.
A year ago, most enterprise security teams were debating whether to allow AI coding assistants at all. Today, that debate is over. Claude Code is running inside developer terminals across some of the largest companies in the world. Not in a browser. Not in a sandbox. But in production environments with real access.
- It can read .env files.
- It can call pre-authenticated CLIs like aws, gcloud, and kubectl.
- It can use stored SSH keys and cached API tokens.
- It can connect to internal and third-party systems through MCP integrations.
And it does all of that with the full permissions of the developer who launched it.
As we work with enterprises deploying Claude Code, one pattern has become clear. The biggest risk isn’t the model.
It’s identity.
The Access Surface You Didn’t Map
When security teams evaluate AI tools, they often ask: What is the model allowed to do? That’s the wrong starting point. Claude Code doesn’t come with its own tightly scoped service account. It doesn’t operate inside a neatly sandboxed runtime by default. It inherits the developer’s identity. Whatever the developer can access, the agent can access:
- Cloud IAM roles
- Kubernetes clusters
- Production databases
- Git repositories
- SaaS applications
- Secrets stored locally
- Active CLI sessions
There is no built-in privilege separation layer between human and agent. If a developer has admin rights in AWS, the agent effectively does too. From a security perspective, this is a fundamental shift. We are used to managing blast radius by scoping machine identities tightly. With AI agents, we handed over the power to act, but we didn’t give security teams a clean way to separate or control that power.
The access surface isn’t “what Claude supports.” It’s everything the developer already touches.
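To make that inheritance concrete, here is a minimal Python sketch, illustrative only, of what any child process launched from a developer’s shell can see. The specific environment variables, paths, and CLIs will vary by environment; nothing below is unique to Claude Code, which is exactly the point.

```python
import os
import shutil
import subprocess
from pathlib import Path

# Any process launched from the developer's shell -- including an AI agent --
# starts with the developer's environment variables.
cloud_env = [k for k in os.environ if k.startswith(("AWS_", "GOOGLE_", "KUBECONFIG"))]
print("Cloud-related env vars visible to a child process:", cloud_env)

# Local credential material is just files the developer's OS user can read.
for path in ["~/.aws/credentials", "~/.ssh/id_ed25519", "~/.kube/config", ".env"]:
    p = Path(path).expanduser()
    print(f"{p}: {'readable' if p.exists() else 'not present'}")

# Pre-authenticated CLIs answer to the cached session, not to who typed the
# command, so they work the same for the agent as for the human.
if shutil.which("aws"):
    subprocess.run(["aws", "sts", "get-caller-identity"], check=False)
```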
The Prompt Injection Distraction
Much of the public conversation about AI security has focused on prompt injection: the idea that attackers can trick models into revealing data or executing unintended actions. Prompt injection is real. In fact, research consistently shows high success rates, often 85% or more, against state-of-the-art defensive prompting techniques.
But even if prompt injection were solved tomorrow, the core risk would remain. The issue isn’t just what the model thinks. It’s what the agent can do.
If an AI agent is operating inside a terminal with access to cloud credentials, database connections, and internal systems, you cannot rely on “better prompts” as your primary defense strategy. Security teams don’t protect production systems by trusting code to behave. They enforce least privilege, monitor activity, and ensure attribution.
AI agents need to be treated the same way: as untrusted processes running inside trusted identity contexts.
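As one illustration of that principle, a team could issue the agent session its own short-lived, narrowly scoped credentials instead of letting it reuse the developer’s. The sketch below assumes a hypothetical claude-code-readonly role created for the purpose; it illustrates the idea, not a prescribed implementation.

```python
import boto3

# Sketch: hand the agent short-lived credentials for a narrowly scoped,
# hypothetical role instead of the developer's full identity.
sts = boto3.client("sts")
scoped = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/claude-code-readonly",  # hypothetical role
    RoleSessionName="claude-code-agent",  # surfaces in CloudTrail, which helps attribution
    DurationSeconds=900,                  # expire quickly; re-issue per task
)["Credentials"]

# Launch the agent with only the scoped credentials in its environment,
# keeping the developer's own keys and cached sessions out of reach.
agent_env = {
    "AWS_ACCESS_KEY_ID": scoped["AccessKeyId"],
    "AWS_SECRET_ACCESS_KEY": scoped["SecretAccessKey"],
    "AWS_SESSION_TOKEN": scoped["SessionToken"],
}
```

The same idea applies to any credential type: the agent gets an identity of its own, with the minimum access the task needs and a lifetime measured in minutes.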
Platform Visibility Isn’t the Whole Story
Anthropic has built meaningful enterprise controls around Claude Code. The Compliance API gives organizations visibility into usage data and content. The Analytics API tracks sessions, tool usage, generated code, and cost attribution. Managed policies allow governance over tool permissions and MCP configurations. These controls matter: they provide transparency and oversight within the Anthropic platform itself.
AI agents are rapidly becoming embedded in enterprise workflows, but platform-level visibility only shows part of the picture. Anthropic can tell you what happened inside the AI session. Your EDR can tell you which processes ran on the endpoint. Your cloud logs can tell you which IAM role called which API. What none of these systems can easily answer is the critical question:
Which agent used which credentials, on which device, to access which data, on whose behalf?
Was that S3 bucket access triggered by a human typing a command or by an AI agent acting within that session? Was that database query part of an automated agent workflow? Did an MCP integration pull sensitive data because the model reasoned its way there? Without explicitly modeling agents as identities, attribution becomes blurry. And blurred attribution is where risk hides.
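To see what answering that question takes, consider the fields a single joined-up record would need. The schema below is hypothetical and purely illustrative, not an API any of these systems exposes today; every name in it is an assumption.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical attribution record: the fields needed to answer "which agent
# used which credentials, on which device, to access which data, on whose
# behalf?" No single existing log source carries all of them together.
@dataclass
class AgentAccessEvent:
    agent_id: str         # the agent instance, modeled as a first-class identity
    on_behalf_of: str     # the human principal who launched the session
    device_id: str        # the endpoint where the agent process ran
    credential_used: str  # the IAM role, API key, or token that made the call
    resource: str         # e.g. an S3 bucket, database table, or SaaS object
    action: str           # what was done with that resource
    timestamp: datetime

# Example record (all values illustrative).
event = AgentAccessEvent(
    agent_id="claude-code-session-7f3a",
    on_behalf_of="dev.alice@example.com",
    device_id="laptop-4421",
    credential_used="arn:aws:iam::123456789012:role/claude-code-readonly",
    resource="s3://customer-exports",
    action="GetObject",
    timestamp=datetime.now(timezone.utc),
)
```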
The Next Wave Will Be Harder
If today’s deployments feel complex, the next wave will be more so. We’re already seeing early forms of:
- Agent “teams” coordinating tasks
- Multi-step autonomous workflows
- Cross-application orchestration
- Agents invoking other agents
Agentic AI adoption is rapidly gaining momentum. According to the McKinsey report The state of AI in 2025: Agents, innovation, and transformation, 23 percent of respondents are scaling agentic AI in their enterprises, and an additional 39 percent are experimenting with AI agents. Each new agent introduces another layer of delegated execution. Each new integration expands the inherited access graph. This is not just a tooling shift. It’s an architectural one.
AI Agents Are a New Identity Class
For years, security teams have struggled with non-human identities: service accounts, API keys, workload credentials. AI agents introduce a new category. They aren’t static service accounts; they are dynamic actors that run with your privileges and can make decisions at machine speed.
If organizations don’t explicitly govern them as identities, with least-privilege controls, clear attribution, and end-to-end auditability, they end up expanding access in ways they can’t clearly see or audit.
After protecting Claude Code deployments in both our environment and in our customers’ environments, one lesson stands out. AI agents are not just another application to monitor. They are actors in your environment. And every actor, human or non-human, needs an identity you can see, control, and audit.
Because in the age of AI agents, identity isn’t just part of the security strategy. It is the control plane.
Want to see how we provide an identity control plane for AI agents? Let’s set up a demo.