Rewriting the Playbook: How Token Security Built an AI-Native Engineering System with Claude Code

Most startups use AI to move faster. Token Security used Claude to redesign how software gets built.
Over the past year, Token’s engineers delivered more than 1,000 production-grade integrations for its AI Agent Security platform, a level of coverage that would traditionally require a large, specialized engineering organization and years of effort. But the story isn’t about shipping integrations quickly; it’s about what happens when a company treats AI as core infrastructure.
From Engineering Bottleneck to AI-Orchestrated System
Identity security has a fundamental integration problem. Enterprises now operate across hundreds, or even thousands, of SaaS tools, cloud services, developer platforms, AI systems, and more. Each system introduces non-human identities: service accounts, tokens, OAuth grants, API keys, and machine credentials.
To secure them, you need visibility everywhere. Traditionally, expanding coverage meant building and maintaining one connector at a time: researching APIs, writing custom logic, handling edge cases, testing, validating, and maintaining it all over time. It’s meticulous work, and it doesn’t scale easily.
Token Security decided to treat that bottleneck as a systems problem. Instead of asking, “How do we build integrations faster?” its engineers asked, “What would an AI-native integration factory look like?”
The answer became an internal system called Henry. But before Henry could be created, the foundation had to be in place.
The Context Graph: Building for the Age of Infinite Identities
Before Token Security built an AI-native engineering system, it re-architected its core platform. The company made a deliberate shift away from a traditional relational database model to what it calls a Context Graph, a graph-based architecture designed for an era where identities are no longer scarce, centralized objects, but part of sprawling, interconnected systems.
In human identity security, most activity flows through a small set of identity providers. But machine and AI agent identities are different. Tokens, service accounts, OAuth grants, API keys, and AI agents exist across thousands of systems, each with its own authentication model and permission structure. The surface area expands continuously.
In a relational database model, every new integration often requires schema expansion, with new tables, new joins, and new silos. As coverage grows, so does complexity, and relationships across systems become harder to reason about.
The transition to a graph architecture was one of the most difficult engineering shifts Token Security has undertaken. But it unlocked something critical: infinite expandability from any node or edge. In a graph model, every identity, token, permission, owner, and resource becomes a vertex. Every access pathway becomes an edge. When a new integration is added, it doesn’t create another silo. It connects into an existing network.
The result is compounding context. Each new integration strengthens the graph rather than fragmenting it. This architectural decision is what made the AI-native engineering system possible. Henry could move quickly because the backend was designed to absorb complexity, instead of multiplying it.
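To make the idea concrete, here is a minimal sketch of a graph model in this style. The class, vertex kinds, and query are illustrative assumptions for this article, not Token Security’s actual Context Graph schema:

```python
from collections import defaultdict

class ContextGraph:
    """Toy graph: identities, tokens, permissions, and resources are
    vertices; access pathways are directed edges (illustrative only)."""

    def __init__(self):
        self.vertices = {}              # vertex id -> {"kind": ..., **attrs}
        self.edges = defaultdict(set)   # source id -> set of destination ids

    def add_vertex(self, vid, kind, **attrs):
        self.vertices[vid] = {"kind": kind, **attrs}

    def add_edge(self, src, dst):
        self.edges[src].add(dst)

    def reachable_resources(self, identity):
        """Every resource the identity can reach via any access pathway."""
        seen, stack = set(), [identity]
        while stack:
            node = stack.pop()
            for nxt in self.edges[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return {v for v in seen if self.vertices[v]["kind"] == "resource"}

# A new integration attaches vertices and edges to the existing network
# rather than creating a silo, so queries span all integrations at once.
g = ContextGraph()
g.add_vertex("svc-account", "identity")
g.add_vertex("oauth-grant", "token")
g.add_vertex("prod-db", "resource")
g.add_edge("svc-account", "oauth-grant")
g.add_edge("oauth-grant", "prod-db")
print(g.reachable_resources("svc-account"))  # {'prod-db'}
```

The key property the sketch shows: relationships across systems stay queryable from any node, so adding a connector enriches existing queries instead of requiring new tables and joins.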
In what Token calls the “age of infinite identities,” scale doesn’t just demand automation. It demands a data model that becomes more powerful as it grows. The Context Graph was the first rewrite of the playbook. The AI system was the second.
Henry: A Multi-Agent Engineering Manager
Henry isn’t a single model generating code. It’s a structured, multi-agent development system built on top of Claude Code that leverages precision context management to succeed. When a new integration is needed, Henry decomposes the work into stages:
- API research and documentation synthesis
- Requirements structuring
- Secure code generation
- Peer review and error detection
- Validation and QA
- Human approval before release
Claude acts as the reasoning layer across this pipeline. It reads fragmented API documentation, infers patterns, structures implementation plans, reviews outputs from other agents, and preserves architectural consistency across integrations.
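The staged pipeline above can be sketched as a sequence of functions feeding context forward, ending in a human approval gate. The stage names come from the article; the function signatures and gating logic are assumptions, not Henry’s actual implementation:

```python
# Stages from the article, run in order; each stage's output becomes
# context for the next, with the model reviewing prior outputs.
STAGES = [
    "api_research",
    "requirements_structuring",
    "code_generation",
    "peer_review",
    "validation_qa",
]

def run_pipeline(integration, stage_fns, human_approves):
    artifact = {"integration": integration}
    for stage in STAGES:
        artifact[stage] = stage_fns[stage](artifact)
    # Nothing ships without an explicit human approval gate.
    if not human_approves(artifact):
        raise RuntimeError(f"{integration}: human approval withheld")
    return artifact

# Usage with stub stage functions standing in for real agents:
stubs = {s: (lambda art, s=s: f"{s} complete") for s in STAGES}
result = run_pipeline("github", stubs, human_approves=lambda art: True)
print(result["peer_review"])  # peer_review complete
```

The design point is that approval is a hard gate in the control flow, not a convention, which is what makes the outputs auditable.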
The system behaves less like a script and more like an engineering organization. And when tasks become large, the team structure becomes critical. Henry, acting as the top-level orchestrator, enables validation loops. Specifically, Henry:
- Creates agents and agent “team leads”
- Reviews what each agent is doing
- Replaces underperforming team lead agents
Crucially, it operates within governance boundaries: auditable outputs, human approval gates, version control, and rollback safeguards. Incentives and accountability within the AI agent team create internal pressure for quality and validation. Quality and validation are built into the system itself, which continuously reviews its own performance and adapts to improve.
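The review-and-replace behavior described above can be sketched as a small orchestrator loop. The class, scoring function, and quality threshold are invented for illustration; the article does not describe Henry’s internals at this level:

```python
class Orchestrator:
    """Toy orchestrator: spawns team-lead agents, reviews their output,
    and replaces underperformers (a sketch, not Henry's actual code)."""

    def __init__(self, quality_bar=0.7):
        self.quality_bar = quality_bar
        self.team_leads = {}   # task -> lead agent record

    def spawn_lead(self, task):
        self.team_leads[task] = {"task": task, "generation": 1}

    def review_cycle(self, score_fn):
        """Score every lead; replace any whose output falls below the bar."""
        for task, lead in self.team_leads.items():
            if score_fn(lead) < self.quality_bar:
                # Replace the underperforming lead with a fresh agent.
                self.team_leads[task] = {
                    "task": task,
                    "generation": lead["generation"] + 1,
                }

orch = Orchestrator()
orch.spawn_lead("oauth-integration")
orch.review_cycle(lambda lead: 0.4)   # below the bar, so the lead is replaced
print(orch.team_leads["oauth-integration"]["generation"])  # 2
```

The point of the loop is that quality enforcement is a recurring process the orchestrator runs over its own agents, not a one-time check at the end.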
Why Long-Context Reasoning Changed the Architecture
The breakthrough wasn’t simply automation. It was Claude’s ability to reason across large, messy contexts. And this shift isn’t conceptual; it’s operational: context is now managed in a structured, intentional way. That is what allows Henry to generate what Token’s engineers need, without drifting, across long and messy contexts. Instead of micromanaging an agent with explicit instructions at every step, Henry creates structured roles and oversight. Agents observe each other, review output, and are accountable for quality.
Integration work requires synthesizing inconsistent API documentation, authentication models, permission schemas, edge-case behavior, and security implications. Claude’s long-context reasoning enabled Token to directly encode best practices into the pipeline. Over time, each integration didn’t just add coverage but strengthened the system. Instead of building 1,000 disconnected connectors, Token built a compounding integration engine.
Building AI Agents to Secure AI Agents
The milestone arrived as enterprises began deploying AI agents at scale internally. Each agent requires credentials, permissions, secrets, and access pathways. These machine and AI agent identities are rapidly becoming a new attack surface.
By operating its own AI agent ecosystem in production, Token gained firsthand insight into how AI agents authenticate, coordinate, escalate privileges, and interact with enterprise systems.
What an AI-Native Startup Looks Like
Token Security’s story isn’t about replacing engineers with AI. It’s about reorganizing engineering around reasoning systems. Instead of AI as an assistant, Token engineers built AI as coordinator, reviewer, and architectural memory.
Claude became part of the company’s operating system. The result was not just faster shipping. It was a new internal capability: the ability to systematically expand security coverage while encoding governance, auditability, and consistency directly into the pipeline.
For startups building in the age of AI, this shift may be the real inflection point. The companies that win won’t simply use models to write code. They’ll design organizations with agent architectures that think with them.
Want to see the Token Security AI Agent Security platform in action? Request a demo today.