Scaling Managed Agents: Decoupling the brain from the hands
An Anthropic Engineering blog post describing the architecture of Managed Agents, a hosted service in the Claude Platform for long-horizon agent work. The core design principle: virtualize agent components (session, harness, sandbox) into stable interfaces that outlast any particular implementation, just as operating systems virtualized hardware into abstractions like processes and files.
The article traces an evolution from a monolithic container (brain + hands + session in one process) to a decoupled architecture in which each component is independently replaceable. The brain (Claude + harness) is stateless and calls the hands (sandboxes, tools) via execute(name, input) → string. The session is an append-only event log stored externally and queryable via getEvents(). This makes every component cattle rather than pets: failures are detected and recovered from automatically.
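The decoupling above can be sketched as a pair of interfaces. Only `execute(name, input) → string` and `getEvents()` come from the article; the type names, the event shape, and the in-memory stand-ins are illustrative assumptions:

```typescript
// The hands: a tool surface the brain calls by name.
// (Interface name and event shape are illustrative, not the real API.)
interface Hands {
  execute(name: string, input: string): string;
}

// The session: an append-only event log stored outside the brain.
interface Session {
  append(event: { type: string; data: string }): void;
  getEvents(): ReadonlyArray<{ type: string; data: string }>;
}

// In-memory stand-ins for illustration only.
class InMemorySession implements Session {
  private events: { type: string; data: string }[] = [];
  append(event: { type: string; data: string }): void {
    this.events.push(event);
  }
  getEvents(): ReadonlyArray<{ type: string; data: string }> {
    return this.events;
  }
}

class EchoHands implements Hands {
  execute(name: string, input: string): string {
    return `${name}: ${input}`; // a real sandbox would run the tool here
  }
}

// The brain is stateless: each step reads nothing it holds itself, calls the
// hands, and appends the result to the external log, so any brain instance
// can resume a session from getEvents() after a failure.
function step(session: Session, hands: Hands, tool: string, input: string): string {
  const result = hands.execute(tool, input);
  session.append({ type: "tool_result", data: result });
  return result;
}
```

Because all state lives behind `Session`, a crashed brain is replaceable by any other brain pointed at the same log, which is what makes the components cattle rather than pets.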
Key results: p50 time-to-first-token (TTFT) dropped ~60% and p95 dropped >90% after decoupling, because containers are provisioned on demand rather than upfront. Security improves because credentials never enter the sandbox: Git tokens are wired in at clone time, and OAuth tokens live in a vault accessed through an MCP proxy. The architecture supports many brains connected to many hands, and brains can pass hands to one another.
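The credential pattern can be illustrated with a minimal sketch: the sandbox emits requests that carry no secrets, and a proxy resolves the token from a vault before forwarding, so the raw token never enters the sandbox. Every name here is a hypothetical stand-in, not the platform's actual API:

```typescript
// Illustrative vault: service name -> secret, held outside the sandbox.
type Vault = Map<string, string>;

// What the sandbox is allowed to send: a request with no credentials in it.
interface SandboxRequest {
  service: string;
  path: string;
}

// The proxy injects the credential server-side; the sandbox never sees it.
function proxyRequest(
  vault: Vault,
  req: SandboxRequest
): { path: string; authHeader: string } {
  const token = vault.get(req.service);
  if (token === undefined) {
    throw new Error(`no credential for ${req.service}`);
  }
  return { path: req.path, authHeader: `Bearer ${token}` };
}
```

The security property falls out of the types: `SandboxRequest` has no field that could carry a token, so even a fully compromised sandbox can only name a service, not exfiltrate its credential.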
The article frames Managed Agents as a “meta-harness”: opinionated about interfaces but not about what runs behind them. This accommodates Claude Code, task-specific harnesses, and future harnesses as model capabilities evolve. A recurring theme is that harnesses encode assumptions about model limitations that go stale (e.g., “context anxiety” in Sonnet 4.5 disappeared in Opus 4.5), so the system is designed to swap harnesses without disturbing anything else.
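The meta-harness idea can be sketched as a single interface the platform fixes, behind which harnesses are interchangeable. The interface, the two toy harnesses, and the driver below are all illustrative assumptions; only the principle (swap the harness, touch nothing else) comes from the article:

```typescript
// A harness inspects the event log so far and decides the next tool call
// (or null to finish). This shape is a hypothetical stand-in.
interface Harness {
  nextAction(events: ReadonlyArray<string>): { tool: string; input: string } | null;
}

// Two interchangeable harnesses behind the same interface.
const claudeCodeStyle: Harness = {
  nextAction: (events) =>
    events.length === 0 ? { tool: "bash", input: "ls" } : null,
};

const taskSpecific: Harness = {
  nextAction: (events) =>
    events.length === 0 ? { tool: "git", input: "clone" } : null,
};

// The driver (sessions, sandboxes, recovery) never changes when a harness
// that encoded stale model assumptions is swapped out.
function run(harness: Harness, events: string[]): string[] {
  let action = harness.nextAction(events);
  while (action !== null) {
    events.push(`${action.tool}(${action.input})`);
    action = harness.nextAction(events);
  }
  return events;
}
```

Swapping `claudeCodeStyle` for `taskSpecific` requires no change to `run`, mirroring how the platform can retire a harness built around a limitation the model no longer has.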