Agentic Workflow Patterns
A taxonomy of five composable patterns for building agentic systems with LLMs, ordered by increasing complexity. Introduced in Building Effective Agents by Anthropic.
The core principle: start with the simplest pattern that solves your problem. Add complexity only when it demonstrably improves outcomes.
The five patterns
1. Prompt chaining
A task decomposed into sequential steps. Each LLM call processes the output of the previous one. Programmatic gates between steps verify intermediate results.
Tradeoff: higher latency in exchange for higher accuracy, since each call handles an easier subtask.
Use when: the task cleanly decomposes into fixed subtasks.
Examples: generate then translate; outline then validate then write.
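A minimal sketch of the outline-validate-write chain. The `call_llm` function and the gate criterion are illustrative stubs, not a real API; in practice the first call would hit a model and the gate would run whatever programmatic check fits your task.

```python
def call_llm(prompt: str) -> str:
    # Stub: a real system would call a model API here.
    if prompt.startswith("Outline:"):
        return "1. Intro\n2. Body\n3. Conclusion"
    return f"Draft based on: {prompt}"

def gate_outline(outline: str) -> bool:
    # Programmatic gate between steps: require at least three numbered sections.
    numbered = sum(line.strip().startswith(tuple("123456789"))
                   for line in outline.splitlines())
    return numbered >= 3

def outline_then_write(topic: str) -> str:
    outline = call_llm(f"Outline: {topic}")
    if not gate_outline(outline):
        raise ValueError("Outline failed validation gate")
    # Second call processes the verified output of the first.
    return call_llm(f"Write a post following this outline:\n{outline}")
```

The gate is what distinguishes chaining from simply concatenating prompts: a bad intermediate result stops the pipeline instead of propagating.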
2. Routing
An input is classified and directed to a specialized handler. Enables separation of concerns — each handler has a focused prompt optimized for its category.
Use when: distinct input categories benefit from different treatment, and classification is reliable.
Examples: customer service queries routed by type; easy questions to small models, hard ones to capable models.
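A sketch of the routing pattern. The keyword classifier and the handler prompts are hypothetical placeholders; a real router would typically use a small, cheap model for classification and a focused system prompt per handler.

```python
def classify(query: str) -> str:
    # Stub classifier: a real system would use a small model here.
    if "refund" in query.lower():
        return "billing"
    if "crash" in query.lower():
        return "technical"
    return "general"

# Each handler stands in for an LLM call with a category-specific prompt.
HANDLERS = {
    "billing": lambda q: f"[billing prompt] {q}",
    "technical": lambda q: f"[technical prompt] {q}",
    "general": lambda q: f"[general prompt] {q}",
}

def route(query: str) -> str:
    # Fall back to the general handler on unknown categories.
    handler = HANDLERS.get(classify(query), HANDLERS["general"])
    return handler(query)
```

The fallback matters: routing is only safe when misclassification degrades gracefully rather than sending an input into a handler that cannot serve it.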
3. Parallelization
Multiple LLM calls run simultaneously, outputs aggregated programmatically. Two variants:
- Sectioning — independent subtasks in parallel (e.g., guardrails model + response model)
- Voting — same task run multiple times for confidence (e.g., multiple code vulnerability reviewers)
Use when: subtasks are independent, or multiple perspectives improve confidence.
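A sketch of the voting variant, assuming a stub reviewer in place of real model calls. Each reviewer runs concurrently and the verdicts are aggregated programmatically by majority.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def review_for_vulnerabilities(code: str, seed: int) -> str:
    # Stub reviewer: a real one would be an LLM call, with `seed`
    # varying the prompt or sampling to get independent perspectives.
    return "unsafe" if "eval(" in code else "safe"

def vote(code: str, n: int = 3) -> str:
    # Run n reviewers in parallel, then take the majority verdict.
    with ThreadPoolExecutor() as pool:
        verdicts = list(pool.map(
            lambda i: review_for_vulnerabilities(code, i), range(n)))
    return Counter(verdicts).most_common(1)[0][0]
```

Sectioning looks the same structurally, except each parallel call gets a different subtask (e.g. one guardrails prompt, one response prompt) and the aggregation step merges rather than votes.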
4. Orchestrator-workers
A central LLM dynamically breaks down the task, delegates to worker LLMs, synthesizes results. Unlike parallelization, subtasks are not predefined — the orchestrator determines them based on the input. See Orchestrator-Workers Pattern.
Use when: subtask structure is unpredictable (e.g., which files to edit depends on the coding task).
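A sketch of the orchestrator-workers shape, with stubs for both roles. The point of the structure is that `orchestrate` decides the subtask list at runtime; here a hypothetical keyword rule stands in for the LLM that would make that decision.

```python
def orchestrate(task: str) -> list[str]:
    # Stub orchestrator: a real one would ask an LLM which subtasks
    # (e.g. which files to edit) this particular task requires.
    if "login" in task:
        return ["edit auth.py", "edit routes.py"]
    return [f"answer: {task}"]

def worker(subtask: str) -> str:
    # Stub worker LLM call.
    return f"done({subtask})"

def run(task: str) -> str:
    subtasks = orchestrate(task)                 # dynamic decomposition
    results = [worker(s) for s in subtasks]      # delegation (could be parallel)
    return "; ".join(results)                    # synthesis step
```

Contrast with parallelization: there the fan-out is fixed in code; here the orchestrator's output determines how many workers run and what each one does.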
5. Evaluator-optimizer
A generator LLM produces output; an evaluator LLM provides feedback. They iterate in a loop until quality criteria are met. See Evaluator-Optimizer Pattern.
Use when: clear evaluation criteria exist and human-like feedback demonstrably improves output.
Examples: literary translation with evaluator critiques; multi-round search with an evaluator deciding when to stop.
Combining patterns
These patterns are composable, not prescriptive. A production system might chain a routing step into parallel workers, each running an evaluator-optimizer loop. The key is measuring performance and iterating.
Relationship to agents
Workflows use predefined code paths. True agents dynamically direct their own processes. The five patterns above are all workflows. An agent is an LLM in a tool-use loop with no predefined path — it decides what to do next based on environmental feedback.
The progression: single LLM call -> workflow patterns -> autonomous agent. Most applications should stop at the first level that works.
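The agent loop can be sketched in a few lines. The `decide` policy is a stub standing in for the model's action choice; the essential structure is that there is no predefined path, only tool results feeding the next decision.

```python
def decide(observation):
    # Stub policy: a real agent would ask the model for the next action,
    # given the tool definitions and the observation so far.
    if isinstance(observation, int):
        return {"name": "finish", "result": observation}
    verb, a, b = observation.split()
    return {"name": verb, "args": {"a": int(a), "b": int(b)}}

def agent_loop(task, tools, max_steps=5):
    observation = task
    for _ in range(max_steps):
        action = decide(observation)
        if action["name"] == "finish":
            return action["result"]
        # Environmental feedback: the tool result becomes the next observation.
        observation = tools[action["name"]](**action["args"])
    return observation  # step budget exhausted
```

The `max_steps` cap is the one piece of predefined structure an agent still needs: autonomy over the path, not over the budget.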
Connections
- Augmented LLM is the building block that all five patterns compose
- The orchestrator-workers pattern parallels the Brain-Hands Decoupling in Managed Agents — one brain, many hands
- The evaluator-optimizer loop is structurally the same as the Agent Learning Loop — generate, evaluate, refine
- CORAL implements a sophisticated version of orchestrator-workers with shared memory and asynchronous co-evolution
- Hermes Agent’s skill creation after complex tasks resembles the evaluator-optimizer: generate a solution, evaluate it, refine into a reusable skill