A comprehensive guide to building self-orchestrating, parallel-executing, consensus-building agent workflows that actually work
Picture this: You're building a complex multi-agent workflow in Claude Code. You need agents to coordinate, pass information, and execute in sequence. You reach for the obvious solution - have one agent call another using a Task tool.
Except... there is no Task tool. Subagents can't call other subagents. Each agent is isolated in its own context bubble.
This apparent limitation led to a breakthrough that fundamentally changes how we think about agent orchestration. Instead of centralized control, we discovered emergent orchestration through message passing - a pattern so powerful it enables everything from simple pipelines to complex parallel workflows with consensus building and continuous learning.
- The Core Discovery: Middleware Chaining
- Building Your First Chain
- Advanced Pattern: Parallel Execution
- The Synthesis Pattern (MapReduce for Agents)
- Consensus Building & Conflict Resolution
- Knowledge Extraction & Learning Loops
- Pattern Selection Guide
- Real-World Implementation
- Troubleshooting & Edge Cases
- The Future of Agent Orchestration
Claude Code's architecture has a fundamental constraint:
- ❌ Subagents cannot invoke other subagents
- ❌ No built-in orchestration tools
- ❌ Each agent runs in isolation
- ❌ No direct inter-agent communication
Instead of trying to control agents from above, we let each agent tell Claude what should happen next:
### NEXT_ACTION ###
Use the [next-agent] subagent with this prompt:
"[Instructions with full context]"
###
The main Claude Code agent sees this instruction and automatically executes it, creating a chain reaction that flows through your entire workflow.
- Main Claude maintains control - it's still the conductor, just following a score
- Agents become stateless functions - Input → Process → Output + Next Step
- Context travels through the chain - Like a baton in a relay race
- Workflows emerge from simple rules - Complex behavior from simple patterns
If you've worked with Express.js, this pattern will feel familiar:
// Express middleware
app.use((req, res, next) => {
req.processedData = processStep1(req.data);
next(); // Pass control to next middleware
});
// Claude Code agent "middleware"
Agent outputs:
processedData: [results]
### NEXT_ACTION ###
Use next-agent with context: [processedData]
###
Each agent is a middleware function that:
- Receives context (request)
- Processes its specific task
- Passes control to the next agent (next())
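To make the mechanics concrete, here is a minimal simulation of the pattern in plain JavaScript - a hypothetical dispatch loop standing in for the main Claude agent. The `agents` map, the `NEXT_ACTION` regex, and the context shape are all illustrative, not Claude Code internals:

```javascript
// Hypothetical simulation of the main-agent dispatch loop.
// Each "agent" is a function: (context) => output text that may
// contain a NEXT_ACTION block naming the next agent to run.
const agents = {
  "code-analyzer": (ctx) =>
    `Analysis: ${ctx.input} looks fine.\n### NEXT_ACTION ###\nUse the review-synthesizer subagent\n###`,
  "review-synthesizer": (ctx) =>
    `### Review Complete ###\nHistory: ${ctx.history.join(" -> ")}`,
};

function parseNextAction(output) {
  // Look for "Use [the] <agent-name>" inside a NEXT_ACTION block.
  const match = output.match(/### NEXT_ACTION ###[\s\S]*?Use (?:the )?([\w-]+)/);
  return match ? match[1] : null;
}

function runChain(startAgent, input) {
  const context = { input, history: [] };
  let current = startAgent;
  while (current) {
    context.history.push(current);
    const output = agents[current](context);
    current = parseNextAction(output); // null ends the chain
  }
  return context.history;
}
```

Running `runChain("code-analyzer", "./src")` walks the chain until an agent returns no `NEXT_ACTION` block - exactly the "chain reaction" described above.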
Let's build a simple 3-agent chain for code review:
---
name: code-analyzer
description: Analyzes code structure and complexity. First step in review pipeline.
tools: Read, Grep, Glob
---
You analyze code structure and identify areas of concern.
When analysis is complete, return:
### Analysis Results ###
[Your findings here]
### NEXT_ACTION ###
Use the security-scanner subagent with this prompt:
"Scan for security vulnerabilities in [files].
Context: [Include your analysis results]
Previous analyzer found: [key concerns]
After scanning, return NEXT_ACTION for performance-analyzer."
###
---
name: security-scanner
description: Scans for security vulnerabilities. Part of review pipeline.
tools: Read, Grep
---
You scan for security issues.
After scanning, return results and:
### NEXT_ACTION ###
Use the performance-analyzer subagent with this prompt:
"Analyze performance characteristics.
Context: [Combined context from previous agents]
After analysis, return NEXT_ACTION for review-synthesizer."
###
---
name: review-synthesizer
description: Synthesizes all review findings into actionable report.
tools: Write
---
You synthesize all findings into a comprehensive review.
Generate report and return:
### Review Complete ###
Report saved to: code-review-[timestamp].md
[No NEXT_ACTION - chain complete]
User input:
Use code-analyzer to review the changes in ./src
The chain automatically flows:
code-analyzer → security-scanner → performance-analyzer → review-synthesizer
Each agent receives accumulated context and adds its findings before passing control.
Sequential chains work well for simple workflows, but what if you need to analyze 100 files? Sequential processing would take forever.
Agents can spawn multiple parallel executions:
### NEXT_ACTIONS (PARALLEL) ###
Execute these simultaneously:
1. Use analyzer-alpha for ./src/components
2. Use analyzer-beta for ./src/services
3. Use analyzer-gamma for ./src/utils
After ALL complete:
Use synthesis-agent to merge all findings
###
---
name: parallel-orchestrator
description: Orchestrates parallel analysis workflows
tools: Glob, TodoWrite
---
When you need to analyze a large codebase:
1. Inventory the structure
2. Divide into logical chunks
3. Spawn parallel analyzers
### For Large Codebases (>50 files) ###
Return this pattern:
### NEXT_ACTIONS (PARALLEL) ###
Spawning parallel analyzers for faster processing:
1. Use code-explorer-1 for frontend components ([X] files)
2. Use code-explorer-2 for backend services ([Y] files)
3. Use code-explorer-3 for database layer ([Z] files)
4. Use code-explorer-4 for test suites ([W] files)
These will execute simultaneously.
After ALL complete:
Use technical-writer to synthesize findings into unified analysis.
Include all explorer results in context.
###
- Main Claude spawns all agents simultaneously - True parallel execution
- Each agent works independently - No inter-dependencies
- Synthesis point waits for all - Barrier synchronization
- Context merging at synthesis - All results combined
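The fan-out/barrier behavior maps directly onto `Promise.all` in ordinary async code. Here is a small sketch - the `analyze` and `synthesize` functions are stand-ins for subagent runs, not real tooling:

```javascript
// Sketch of the parallel fan-out / barrier pattern using Promise.all.
async function analyze(chunk) {
  // Simulate one analyzer working independently on its chunk.
  return { chunk, findings: `${chunk}: ok` };
}

function synthesize(results) {
  // Stand-in for the synthesis agent merging all results.
  return results.map((r) => r.findings).join("; ");
}

async function runParallel(chunks) {
  // Fan out: all analyzers start at once.
  const results = await Promise.all(chunks.map(analyze));
  // Barrier: this line runs only after ALL analyzers finish.
  return synthesize(results);
}
```

`Promise.all` also models the failure mode: if any one analyzer rejects, the barrier rejects, which is why the synthesis instruction must insist on "after ALL complete".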
Illustrative timings for a five-agent workflow:
- Sequential (5 agents): ~25 minutes
- Parallel (5 agents): ~5 minutes
- Speedup: ~5x
Your numbers will vary with task size and agent overhead, but the gains are real and measurable.
Borrowed from distributed computing, this pattern uses:
- Map Phase: Parallel agents process chunks
- Reduce Phase: Synthesis agent combines results
[Input Data]
↓
[Chunk 1] [Chunk 2] [Chunk 3] ← Map
↓ ↓ ↓
[Agent 1] [Agent 2] [Agent 3] ← Parallel Process
↓ ↓ ↓
[Result 1][Result 2][Result 3] ← Individual Results
↘ ↓ ↙
[Synthesizer] ← Reduce
↓
[Final Output]
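The flow in the diagram above can be sketched in a few lines. Here `chunkInput`, `processChunk`, and `reduceResults` are illustrative stand-ins for the orchestrator, the parallel agents, and the synthesizer:

```javascript
// Sketch of the map/reduce flow: split, process in parallel, combine.
function chunkInput(items, size) {
  const chunks = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

async function processChunk(chunk) {
  // Stand-in for one parallel agent summarizing its chunk.
  return chunk.length;
}

function reduceResults(partials) {
  // Stand-in for the synthesizer combining partial results.
  return partials.reduce((a, b) => a + b, 0);
}

async function mapReduce(items, chunkSize) {
  const chunks = chunkInput(items, chunkSize);                 // split input
  const mapped = await Promise.all(chunks.map(processChunk));  // map phase (parallel)
  return reduceResults(mapped);                                // reduce phase
}
```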
---
name: doc-processor-orchestrator
description: Orchestrates document processing with MapReduce pattern
tools: Glob, TodoWrite
---
For documentation consolidation:
### Phase 1: Map ###
Divide documents by type:
### NEXT_ACTIONS (PARALLEL MAP) ###
1. Use api-doc-processor for ./docs/api/*.md
2. Use guide-processor for ./docs/guides/*.md
3. Use tutorial-processor for ./docs/tutorials/*.md
4. Use reference-processor for ./docs/reference/*.md
After all complete:
Use doc-synthesizer for reduction phase.
###
The doc-synthesizer then:
### Phase 2: Reduce ###
- Merges all processed content
- Resolves cross-references
- Creates unified structure
- Generates master index
You can chain MapReduce operations:
Level 1: Process individual files (100 agents parallel)
↓
Level 1 Reduce: Create section summaries (10 synthesizers)
↓
Level 2: Process section summaries (10 agents parallel)
↓
Level 2 Reduce: Create final document (1 synthesizer)
This hierarchical approach scales to very large workloads.
What happens when parallel agents disagree? Or when you need approval before destructive actions?
---
name: consensus-facilitator
description: Reviews proposals and builds consensus before execution
tools: Read, TodoWrite
---
You review proposals from multiple agents and identify conflicts.
Process:
1. Analyze all proposals
2. Identify conflicts or concerns
3. Determine resolution path
### For Minor Conflicts ###
Spawn resolver agents:
### NEXT_ACTION ###
Use conflict-resolver with these conflicts:
- Agent A proposes X, Agent B proposes Y
- Resolution needed: [specific issue]
After resolution, return to consensus-facilitator for re-review.
###
### For Major Conflicts ###
Escalate to human:
### HUMAN_INTERVENTION_REQUIRED ###
Cannot reach consensus on:
- [Critical decision point]
- [Conflicting recommendations]
Options:
1. [Agent A's proposal]
2. [Agent B's proposal]
3. [Compromise option]
Waiting for human decision...
###
### For Consensus Achieved ###
Proceed with execution:
### NEXT_ACTION ###
Use execution-coordinator with approved plan:
[Consensus plan details]
All agents agreed: ✅
###
Before deleting files in a consolidation workflow:
### NEXT_ACTION ###
Use deletion-validator with this prompt:
"Validate deletion safety for these files: [list]
Check:
1. Backups exist and are verified
2. Consolidated files contain all information
3. No active processes using these files
4. User permissions confirmed
If ANY safety check fails:
- DO NOT proceed with deletion
- Return specific concerns
- Suggest remediation
If all checks pass:
- Return approval with verification details
- Include NEXT_ACTION for file-deleter
"
###
- Unanimous Required: All agents must agree
- Majority Rules: >50% agreement proceeds
- Weighted Voting: Specialist agents have more weight
- Veto Power: Certain agents can block
- Human Escalation: When automated consensus fails
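Three of these models - unanimous, majority, and weighted voting - can be captured in one small decision function. This is a sketch, not a prescribed protocol; the `ESCALATE_TO_HUMAN` sentinel is an assumed convention:

```javascript
// Sketch of consensus models: unanimous, majority, and weighted voting.
// votes: { agentName: choice }, weights: { agentName: voteWeight }.
function decide(votes, { mode = "majority", weights = {} } = {}) {
  const tally = {};
  for (const [agent, choice] of Object.entries(votes)) {
    const w = weights[agent] ?? 1; // specialist agents can carry extra weight
    tally[choice] = (tally[choice] ?? 0) + w;
  }
  const total = Object.values(tally).reduce((a, b) => a + b, 0);
  const [top, topVotes] = Object.entries(tally).sort((a, b) => b[1] - a[1])[0];
  if (mode === "unanimous") {
    // Every vote must land on the same choice.
    return topVotes === total ? top : "ESCALATE_TO_HUMAN";
  }
  // Majority (and weighted majority): winner needs > 50% of total weight.
  return topVotes > total / 2 ? top : "ESCALATE_TO_HUMAN";
}
```

Veto power would be one extra check before tallying; the escalation path corresponds to the `HUMAN_INTERVENTION_REQUIRED` block shown earlier.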
Workflows shouldn't just execute - they should learn and improve:
---
name: knowledge-extractor
description: Extracts patterns and learnings from completed workflows
tools: Read, Write, Glob
---
After workflow completion, you extract reusable knowledge.
Analyze:
1. What patterns emerged?
2. What worked well?
3. What failed and why?
4. What could be optimized?
Create/Update: .claude/knowledge/[workflow-type]-patterns.md
Include:
- Success patterns with metrics
- Failure patterns with causes
- Optimization opportunities
- Time/resource usage
- Recommended improvements
### For Future Workflows ###
This knowledge will be loaded by future orchestrators:
"Before starting, check .claude/knowledge/ for relevant patterns"
### NEXT_ACTION ###
Use knowledge-extractor with this prompt:
"Extract learnings from documentation consolidation:
Analyze:
- File patterns that consolidated well (>90% similarity)
- Patterns that required human intervention
- Average time per consolidation phase
- Memory/token usage patterns
- Error patterns and recovery success
Update: .claude/knowledge/consolidation-patterns.md
Structure as:
## Successful Patterns
- Pattern: README-v*.md files
Success Rate: 95%
Approach: [specific technique]
## Optimization Opportunities
- Finding: Parallel processing >20 files saves 70% time
- Implementation: [specific code]
This will help future consolidations run better."
###
Future workflows start with:
---
name: smart-consolidator
description: Consolidation orchestrator with learned optimizations
tools: Read, Glob, TodoWrite
---
STARTUP PROTOCOL:
1. Load knowledge base:
Read .claude/knowledge/consolidation-patterns.md
2. Apply learned optimizations:
- Use parallel for >20 files (learned threshold)
- Pre-identify README patterns (95% success rate)
- Allocate resources based on historical usage
3. Execute with improvements
Track and learn from:
- Execution time per phase
- Success/failure rates by pattern
- Resource usage (tokens, memory)
- Human intervention frequency
- Error recovery success rates
- Quality scores from validations
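As a sketch of how such a learning loop might work, the snippet below records run metrics and lets them move a strategy threshold. The 20- and 50-file thresholds echo the examples in this guide but are otherwise arbitrary, and `recordRun`/`chooseStrategy` are hypothetical names:

```javascript
// Sketch of a learning loop: each run records metrics, and future
// runs read them to pick an execution strategy.
function recordRun(knowledge, run) {
  knowledge.runs.push(run);
  return knowledge;
}

function chooseStrategy(knowledge, fileCount) {
  const parallel = knowledge.runs.filter((r) => r.mode === "parallel");
  const sequential = knowledge.runs.filter((r) => r.mode === "sequential");
  const avg = (rs) => rs.reduce((a, r) => a + r.minutes, 0) / rs.length;
  // Learned rule: if parallel runs historically beat sequential ones,
  // lower the threshold at which we go parallel.
  const threshold =
    parallel.length && sequential.length && avg(parallel) < avg(sequential)
      ? 20  // learned: parallel pays off sooner than the default assumed
      : 50; // default threshold before any evidence exists
  return fileCount > threshold ? "parallel" : "sequential";
}
```

In the workflow itself, `knowledge` would be the parsed contents of `.claude/knowledge/consolidation-patterns.md` rather than an in-memory object.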
Start: How many operations needed?
│
├─ Single operation → Direct agent call
│
├─ 2-5 Sequential → Simple middleware chain
│
├─ 5+ Sequential → Chain with progress tracking
│
├─ Parallel possible?
│ ├─ Yes, simple → NEXT_ACTIONS (parallel)
│ └─ Yes, complex → MapReduce pattern
│
├─ Needs consensus?
│ ├─ Yes → Add consensus-facilitator
│ └─ No → Direct execution
│
└─ Repeated workflow?
├─ Yes → Add knowledge extraction
└─ No → One-time execution
| Scenario | Patterns to Use |
|---|---|
| Simple pipeline | Middleware chain |
| Large dataset processing | MapReduce + Parallel |
| Critical operations | Consensus + Validation |
| Repeated workflows | Knowledge extraction + Learning |
| Complex orchestration | All patterns combined |
Simple Chain (3 agents):
- Setup time: ~30 seconds
- Execution: Sequential
- Complexity: Low
- Use when: Order matters, small dataset
Parallel Execution (10 agents):
- Setup time: ~1 minute
- Execution: Parallel (10x speedup)
- Complexity: Medium
- Use when: Independent operations, large dataset
MapReduce (50+ agents):
- Setup time: ~2 minutes
- Execution: Massive parallel
- Complexity: High
- Use when: Big data processing, complex aggregation
Full Orchestra (all patterns):
- Setup time: ~3 minutes
- Execution: Adaptive
- Complexity: Very high
- Use when: Mission-critical, learning system needed
Let's build a production-ready documentation consolidation system using all patterns:
---
name: doc-consolidation-orchestrator
description: Advanced orchestrator for documentation consolidation with parallel execution, consensus, and learning
tools: Glob, TodoWrite, Read
color: purple
---
# Documentation Consolidation Orchestrator
## Startup Protocol
### Step 1: Load Knowledge
Read .claude/knowledge/doc-patterns.md if exists
Apply learned optimizations
### Step 2: Assess Scope
Use Glob to inventory target directory
Determine optimal execution strategy:
- <10 files: Sequential
- 10-50 files: Parallel chunks
- >50 files: MapReduce pattern
### Step 3: Initialize Tracking
Create TodoWrite with phases:
1. ☐ Inventory & Planning
2. ☐ Backup Creation
3. ☐ Parallel Analysis
4. ☐ Consensus Building
5. ☐ Parallel Consolidation
6. ☐ Validation
7. ☐ Cleanup
8. ☐ Knowledge Extraction
### Step 4: Return Execution Plan
For large directories (>20 files):
### NEXT_ACTIONS (PARALLEL EXPLORATION) ###
Based on learned patterns, spawning optimized explorers:
1. Use readme-specialist for README* files (95% success pattern)
2. Use api-doc-explorer for */api/* paths
3. Use guide-explorer for */guides/* paths
4. Use general-explorer for remaining files
After ALL complete:
Use synthesis-coordinator to merge findings
Context to preserve:
{
"workflow_id": "doc-[timestamp]",
"file_count": N,
"learned_patterns_applied": ["readme-consolidation", "api-grouping"],
"optimization_mode": "parallel"
}
###
---
name: synthesis-coordinator
description: Merges parallel exploration results and coordinates consolidation
tools: Read, TodoWrite
---
You receive results from parallel explorers and create consolidation plan.
Process:
1. Merge all exploration results
2. Identify consolidation clusters
3. Calculate confidence scores
4. Plan execution strategy
### For High-Confidence Clusters (>90%) ###
### NEXT_ACTIONS (PARALLEL CONSOLIDATION) ###
High-confidence clusters for immediate processing:
1. Use consolidator-1 for cluster "README-variants" (5 files, 95% similarity)
2. Use consolidator-2 for cluster "installation-guides" (3 files, 92% similarity)
3. Use consolidator-3 for cluster "api-endpoints" (8 files, 91% similarity)
After ALL complete:
Use validation-coordinator to verify outputs
###
### For Medium-Confidence Clusters (70-90%) ###
### NEXT_ACTION ###
Use consensus-facilitator with these proposals:
[List of medium-confidence clusters]
Review and determine:
- Safe to consolidate automatically?
- Need human review?
- Need additional analysis?
###
### For Low-Confidence (<70%) ###
Flag for human review - do not consolidate automatically
---
name: consensus-facilitator
description: Reviews consolidation proposals and builds consensus
tools: TodoWrite
---
Review proposals and determine safe execution path.
### For Documentation Consolidation ###
Safety criteria:
1. No information loss risk
2. Clear topic alignment
3. No conflicting technical specifications
4. Version compatibility
If ALL criteria met:
→ Approve for automatic consolidation
If concerns exist:
→ Spawn specialist validators
If unresolvable:
→ Escalate to human with clear options
### NEXT_ACTION ###
[Based on consensus outcome]
---
name: knowledge-extractor
description: Extracts patterns for future improvement
tools: Read, Write
---
After workflow completion, extract learnings:
### Patterns to Capture ###
1. File naming patterns that indicate duplicates
2. Directory structures that suggest consolidation
3. Similarity thresholds that proved accurate
4. Time taken for each phase
5. Error patterns and recovery methods
### Update Knowledge Base ###
Append to .claude/knowledge/doc-patterns.md:
## Consolidation Run [timestamp]
- Files processed: N
- Reduction achieved: X%
- Patterns identified:
- Pattern: "README-*.md variants"
Success: 95%
Action: Auto-consolidate with high confidence
- Pattern: "version-specific docs"
Success: 60%
Action: Require human review
- Optimization discovered:
- Parallel threshold: 20 files (was 50)
- New clustering algorithm 30% more accurate
### NEXT_ACTION ###
Workflow complete. Knowledge preserved for future runs.
###
Symptom: Agent doesn't have information from previous steps
Solution: Always pass full context forward
### NEXT_ACTION ###
Use next-agent with this prompt:
"[Task description]
Complete context from previous agents:
[Include ALL accumulated context here]
"
###
Symptom: Synthesis agent runs before all parallel agents complete
Solution: Explicit barrier synchronization
### NEXT_ACTIONS (PARALLEL) ###
1. Agent-1 for task-1
2. Agent-2 for task-2
3. Agent-3 for task-3
CRITICAL: After ALL complete:
Use synthesis-agent to merge ALL results
Do not proceed until all three provide output
###
Symptom: Agents keep calling each other
Solution: Loop detection and limits
Context includes:
{
"chain_depth": 5,
"max_depth": 10,
"visited_agents": ["agent-1", "agent-2"]
}
If chain_depth >= max_depth:
ABORT: Maximum chain depth reached
Return final results without NEXT_ACTION
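The guard logic above is simple enough to sketch directly. `nextStep` is a hypothetical helper the orchestrator could apply before honoring any proposed `NEXT_ACTION`:

```javascript
// Sketch of the loop guard: track chain depth and visited agents
// in the context that travels down the chain.
function nextStep(context, proposedAgent) {
  const depth = context.chain_depth ?? 0;
  const visited = context.visited_agents ?? [];
  if (depth >= (context.max_depth ?? 10)) {
    return { abort: true, reason: "Maximum chain depth reached" };
  }
  if (visited.includes(proposedAgent)) {
    return { abort: true, reason: `Loop detected: ${proposedAgent} already ran` };
  }
  // Safe to proceed: pass an updated context to the next agent.
  return {
    abort: false,
    context: {
      ...context,
      chain_depth: depth + 1,
      visited_agents: [...visited, proposedAgent],
    },
  };
}
```

Note the visited-agents check is stricter than depth limiting alone: it catches two agents ping-ponging long before the depth cap triggers. Relax it if your workflow legitimately revisits an agent (e.g. a re-review loop).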
Symptom: "Agent not found" error
Solution: Graceful degradation
### NEXT_ACTION ###
Primary: Use specialized-agent for [task]
If specialized-agent not available:
Fallback: Use general-agent with additional instructions:
"Perform [task] with these specialized requirements: [details]"
###
Symptom: Agents can't agree, human unavailable
Solution: Timeout and safe defaults
After 3 consensus attempts:
- Document disagreement
- Choose safest option (usually: do nothing)
- Flag for later human review
- Continue workflow with non-controversial items
- Batch Size for Parallel
  - Optimal: 5-10 agents in parallel
  - Too many (>20) can cause coordination overhead
  - Too few (<3) isn't worth the setup cost
- Context Size Management
  - If context > 10KB:
    - Summarize non-critical parts
    - Use references instead of full content
    - Store details in temp files
- Caching Patterns
  - Check before expensive operations: if a result exists in .cache/ and its timestamp is less than 1 hour old, use the cached result
- Resource Monitoring
  - Track in context: token_usage (current/limit), execution_time, parallel_agents_active
  - If approaching limits: switch to sequential, reduce batch sizes, simplify operations
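The caching rule can be sketched as a small helper - in-memory here for illustration; a real implementation might check `.cache/` files and their modification times instead:

```javascript
// Sketch of the caching pattern: reuse a result only if it is
// less than one hour old; otherwise recompute and cache it.
const ONE_HOUR_MS = 60 * 60 * 1000;

function getOrCompute(cache, key, compute, now = Date.now()) {
  const entry = cache.get(key);
  if (entry && now - entry.at < ONE_HOUR_MS) {
    return entry.value; // fresh enough: skip the expensive operation
  }
  const value = compute(); // expensive operation (e.g. a full analysis run)
  cache.set(key, { value, at: now });
  return value;
}
```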
On file_change in ./src:
Trigger: code-review-chain
On push to main:
Trigger: documentation-update-chain
On error in production:
Trigger: debugging-chain with context
Global agent registry:
- organization/security-scanner
- organization/performance-analyzer
- team-x/specialized-processor
Import and use:
### NEXT_ACTION ###
Use organization/security-scanner from global registry
###
---
name: composite-agent
extends: [base-analyzer, security-scanner, performance-checker]
tools: [inherited]
---
Combines capabilities of multiple agents into one
ML-powered routing:
- Learn optimal agent selection
- Predict execution times
- Auto-tune parameters
- Anomaly detection in workflows
Drag-and-drop interface → Generates agent chain code
Visual debugging → See context flow between agents
Real-time monitoring → Watch parallel execution
Performance analytics → Identify bottlenecks
- Start Simple: Build a 2-3 agent chain
- Add Complexity Gradually: Introduce parallel execution
- Measure Everything: Track what works
- Extract Knowledge: Build your pattern library
- Share & Collaborate: The community is discovering new patterns daily
- Example Repositories:
  - Basic chains: github.com/examples/simple-chains
  - Parallel patterns: github.com/examples/parallel-agents
  - Full orchestration: github.com/examples/orchestration-suite
- Community:
  - Discord: Claude Code Orchestration
  - Reddit: r/ClaudeCode
  - Stack Overflow: [claude-code-orchestration]
The lack of a Task tool in Claude Code seemed like a limitation. Instead, it forced us to discover something more powerful: emergent orchestration through message passing.
This pattern is:
- More flexible than centralized orchestration
- More scalable through parallel execution
- More intelligent through consensus and learning
- More maintainable through clear separation of concerns
We're not just building agent chains - we're creating self-organizing systems that improve themselves over time.
The middleware pattern is just the beginning. As a community, we're discovering new patterns weekly. Some will become standard practice. Others will inspire the next generation of agent frameworks.
The question isn't "what can we build?" but "what can't we build?"
Welcome to the age of emergent agent orchestration. Let's build something amazing.
# Basic Chain
### NEXT_ACTION ###
Use next-agent with context
# Parallel Execution
### NEXT_ACTIONS (PARALLEL) ###
1. Use agent-1
2. Use agent-2
After ALL complete: Use synthesizer
# Conditional Routing
If condition:
NEXT_ACTION: Use agent-a
Else:
NEXT_ACTION: Use agent-b
# Human Escalation
### HUMAN_INTERVENTION_REQUIRED ###
[Clear options and context]
# Knowledge Extraction
### NEXT_ACTION ###
Use knowledge-extractor to capture patterns
# Context Passing
Context: {
workflow_id: "unique-id",
phase: "current-phase",
accumulated_data: {...},
chain_depth: N
}
# Error Handling
If error:
NEXT_ACTION: Use error-handler
Fallback: Use general-agent
Abort: Return results without NEXT_ACTION
Ready to orchestrate? Start with a simple chain, then add parallel execution, then consensus, then knowledge extraction. Before you know it, you'll have built a self-improving system that seemed impossible just yesterday.
The tools are ready. The patterns are proven. What will you orchestrate?