
@garyblankenship
Created August 20, 2025 14:03
Advanced Claude Subagent Strategies #ai #claude

Advanced Agent Orchestration in Claude Code: From Middleware Chains to Parallel Synthesis

A comprehensive guide to building self-orchestrating, parallel-executing, consensus-building agent workflows that actually work


The Breakthrough Moment

Picture this: You're building a complex multi-agent workflow in Claude Code. You need agents to coordinate, pass information, and execute in sequence. You reach for the obvious solution - have one agent call another using a Task tool.

Except... subagents have no Task tool of their own. A subagent can't call another subagent; each agent is isolated in its own context bubble.

This apparent limitation led to a breakthrough that fundamentally changes how we think about agent orchestration. Instead of centralized control, we discovered emergent orchestration through message passing - a pattern so powerful it enables everything from simple pipelines to complex parallel workflows with consensus building and continuous learning.

Table of Contents

  1. The Core Discovery: Middleware Chaining
  2. Building Your First Chain
  3. Advanced Pattern: Parallel Execution
  4. The Synthesis Pattern (MapReduce for Agents)
  5. Consensus Building & Conflict Resolution
  6. Knowledge Extraction & Learning Loops
  7. Pattern Selection Guide
  8. Real-World Implementation
  9. Troubleshooting & Edge Cases
  10. The Future of Agent Orchestration

The Core Discovery: Middleware Chaining

The Problem

Claude Code's architecture has a fundamental constraint:

  • ❌ Subagents cannot invoke other subagents
  • ❌ No built-in orchestration tools
  • ❌ Each agent runs in isolation
  • ❌ No direct inter-agent communication

The Solution: NEXT_ACTION Instructions

Instead of trying to control agents from above, we let each agent tell Claude what should happen next:

### NEXT_ACTION ###
Use the [next-agent] subagent with this prompt:
"[Instructions with full context]"
###

The main Claude Code agent sees this instruction and automatically executes it, creating a chain reaction that flows through your entire workflow.

Why This Works

  1. Main Claude maintains control - It's still the conductor, just following a score
  2. Agents become stateless functions - Input → Process → Output + Next Step
  3. Context travels through the chain - Like a baton in a relay race
  4. Workflows emerge from simple rules - Complex behavior from simple patterns

The Middleware Analogy

If you've worked with Express.js, this pattern will feel familiar:

// Express middleware
app.use((req, res, next) => {
  req.processedData = processStep1(req.data);
  next(); // Pass control to next middleware
});

// Claude Code agent "middleware"
Agent outputs:
  processedData: [results]
  ### NEXT_ACTION ###
  Use next-agent with context: [processedData]
  ###

Each agent is a middleware function that:

  1. Receives context (request)
  2. Processes its specific task
  3. Passes control to the next agent (next())
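To make the relay concrete, here is a minimal Python sketch of what the conductor loop amounts to (all names are hypothetical; in reality the dispatch happens inside Claude Code itself, not in user code): parse the NEXT_ACTION block out of an agent's output, then invoke the named agent with the extracted prompt, repeating until no block remains.

```python
import re

# Matches the NEXT_ACTION convention used throughout this article.
NEXT_ACTION = re.compile(
    r"### NEXT_ACTION ###\s*\n"
    r"Use the (?P<agent>[\w-]+) subagent with this prompt:\s*\n"
    r'"(?P<prompt>.*?)"\s*\n###',
    re.DOTALL,
)

def run_chain(start_agent, start_prompt, agents, max_depth=10):
    """Dispatch agents until one returns no NEXT_ACTION block.

    `agents` maps agent names to callables (prompt -> output text),
    standing in for real subagent invocations.
    """
    agent, prompt, transcript = start_agent, start_prompt, []
    for _ in range(max_depth):
        output = agents[agent](prompt)
        transcript.append((agent, output))
        match = NEXT_ACTION.search(output)
        if match is None:          # no NEXT_ACTION: chain complete
            return transcript
        agent, prompt = match["agent"], match["prompt"]
    raise RuntimeError("max chain depth reached")
```

The key property is that `run_chain` knows nothing about the workflow; each agent's output alone determines the next hop.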

Building Your First Chain

Let's build a simple 3-agent chain for code review:

Agent 1: Code Analyzer

---
name: code-analyzer
description: Analyzes code structure and complexity. First step in review pipeline.
tools: Read, Grep, Glob
---

You analyze code structure and identify areas of concern.

When analysis is complete, return:

### Analysis Results ###
[Your findings here]

### NEXT_ACTION ###
Use the security-scanner subagent with this prompt:
"Scan for security vulnerabilities in [files].
Context: [Include your analysis results]
Previous analyzer found: [key concerns]

After scanning, return NEXT_ACTION for performance-analyzer."
###

Agent 2: Security Scanner

---
name: security-scanner
description: Scans for security vulnerabilities. Part of review pipeline.
tools: Read, Grep
---

You scan for security issues.

After scanning, return results and:

### NEXT_ACTION ###
Use the performance-analyzer subagent with this prompt:
"Analyze performance characteristics.
Context: [Combined context from previous agents]

After analysis, return NEXT_ACTION for review-synthesizer."
###

Agent 3: Review Synthesizer

---
name: review-synthesizer
description: Synthesizes all review findings into actionable report.
tools: Write
---

You synthesize all findings into a comprehensive review.

Generate report and return:

### Review Complete ###
Report saved to: code-review-[timestamp].md
[No NEXT_ACTION - chain complete]

Starting the Chain

User input:

Use code-analyzer to review the changes in ./src

The chain automatically flows:

code-analyzer → security-scanner → performance-analyzer → review-synthesizer

Each agent receives accumulated context and adds its findings before passing control.


Advanced Pattern: Parallel Execution

The Challenge

Sequential chains work well for simple workflows, but what if you need to analyze 100 files? Sequential processing would take forever.

The Solution: NEXT_ACTIONS (Plural)

Agents can spawn multiple parallel executions:

### NEXT_ACTIONS (PARALLEL) ###
Execute these simultaneously:
1. Use analyzer-alpha for ./src/components
2. Use analyzer-beta for ./src/services  
3. Use analyzer-gamma for ./src/utils

After ALL complete:
Use synthesis-agent to merge all findings
###

Implementation Example

---
name: parallel-orchestrator
description: Orchestrates parallel analysis workflows
tools: Glob, TodoWrite
---

When you need to analyze a large codebase:

1. Inventory the structure
2. Divide into logical chunks
3. Spawn parallel analyzers

### For Large Codebases (>50 files) ###

Return this pattern:

### NEXT_ACTIONS (PARALLEL) ###
Spawning parallel analyzers for faster processing:
1. Use code-explorer-1 for frontend components ([X] files)
2. Use code-explorer-2 for backend services ([Y] files)
3. Use code-explorer-3 for database layer ([Z] files)
4. Use code-explorer-4 for test suites ([W] files)

These will execute simultaneously.

After ALL complete:
Use technical-writer to synthesize findings into unified analysis.
Include all explorer results in context.
###

Parallel Execution Rules

  1. Main Claude spawns all agents simultaneously - True parallel execution
  2. Each agent works independently - No inter-dependencies
  3. Synthesis point waits for all - Barrier synchronization
  4. Context merging at synthesis - All results combined
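The fan-out/barrier behavior can be sketched in Python (illustrative only; Claude Code itself performs the actual spawning): each analyzer handles its chunk concurrently, and the synthesizer runs only after every result is in.

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(chunks, analyzer, synthesizer, max_workers=5):
    """Run one analyzer per chunk in parallel, then synthesize.

    Collecting `pool.map` results blocks until every analyzer has
    finished, which is exactly the "After ALL complete" barrier.
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = list(pool.map(analyzer, chunks))  # barrier here
    return synthesizer(results)
```

`pool.map` also preserves chunk order, so the synthesizer can rely on result position matching input position.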

Performance Gains

  • Sequential (5 agents): ~25 minutes
  • Parallel (5 agents): ~5 minutes
  • Speedup: ~5x

These numbers are illustrative (five independent, five-minute tasks). Real gains depend on how independent the tasks are and how many subagents Claude Code will actually run concurrently.


The Synthesis Pattern (MapReduce for Agents)

The Concept

Borrowed from distributed computing, this pattern uses:

  • Map Phase: Parallel agents process chunks
  • Reduce Phase: Synthesis agent combines results
        [Input Data]
             ↓
    [Chunk 1] [Chunk 2] [Chunk 3]  ← Map
         ↓        ↓         ↓
    [Agent 1] [Agent 2] [Agent 3]   ← Parallel Process
         ↓        ↓         ↓
    [Result 1][Result 2][Result 3]  ← Individual Results
         ↘       ↓        ↙
           [Synthesizer]             ← Reduce
                ↓
          [Final Output]

Real Implementation: Documentation Processor

---
name: doc-processor-orchestrator
description: Orchestrates document processing with MapReduce pattern
tools: Glob, TodoWrite
---

For documentation consolidation:

### Phase 1: Map ###
Divide documents by type:

### NEXT_ACTIONS (PARALLEL MAP) ###
1. Use api-doc-processor for ./docs/api/*.md
2. Use guide-processor for ./docs/guides/*.md
3. Use tutorial-processor for ./docs/tutorials/*.md
4. Use reference-processor for ./docs/reference/*.md

After all complete:
Use doc-synthesizer for reduction phase.
###

The doc-synthesizer then:

### Phase 2: Reduce ###
- Merges all processed content
- Resolves cross-references
- Creates unified structure
- Generates master index

Multi-Level MapReduce

You can chain MapReduce operations:

Level 1: Process individual files (100 agents parallel)
    ↓
Level 1 Reduce: Create section summaries (10 synthesizers)
    ↓
Level 2: Process section summaries (10 agents parallel)
    ↓
Level 2 Reduce: Create final document (1 synthesizer)

This scales to massive workloads!
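The multi-level pattern reduces to a hierarchical fold. A sketch, assuming a mapper per item and a reducer that merges up to `fan_in` results at a time (here sequential for clarity; in the real pattern each level's map phase runs as parallel NEXT_ACTIONS):

```python
def map_reduce(items, mapper, reducer, fan_in=10):
    """Map every item, then reduce in groups of `fan_in` per level
    until a single result remains (Level 1, Level 2, ... as above)."""
    results = [mapper(item) for item in items]   # map phase
    while len(results) > 1:                      # one loop pass = one reduce level
        results = [
            reducer(results[i:i + fan_in])
            for i in range(0, len(results), fan_in)
        ]
    return results[0]
```

With 100 items and `fan_in=10`, this produces exactly the two reduce levels diagrammed above: 100 results → 10 section summaries → 1 final document.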


Consensus Building & Conflict Resolution

The Problem

What happens when parallel agents disagree? Or when you need approval before destructive actions?

The Consensus Pattern

---
name: consensus-facilitator
description: Reviews proposals and builds consensus before execution
tools: Read, TodoWrite
---

You review proposals from multiple agents and identify conflicts.

Process:
1. Analyze all proposals
2. Identify conflicts or concerns
3. Determine resolution path

### For Minor Conflicts ###
Spawn resolver agents:

### NEXT_ACTION ###
Use conflict-resolver with these conflicts:
- Agent A proposes X, Agent B proposes Y
- Resolution needed: [specific issue]

After resolution, return to consensus-facilitator for re-review.
###

### For Major Conflicts ###
Escalate to human:

### HUMAN_INTERVENTION_REQUIRED ###
Cannot reach consensus on:
- [Critical decision point]
- [Conflicting recommendations]

Options:
1. [Agent A's proposal]
2. [Agent B's proposal]
3. [Compromise option]

Waiting for human decision...
###

### For Consensus Achieved ###
Proceed with execution:

### NEXT_ACTION ###
Use execution-coordinator with approved plan:
[Consensus plan details]
All agents agreed.
###

Practical Example: Deletion Safety

Before deleting files in a consolidation workflow:

### NEXT_ACTION ###
Use deletion-validator with this prompt:
"Validate deletion safety for these files: [list]

Check:
1. Backups exist and are verified
2. Consolidated files contain all information
3. No active processes using these files
4. User permissions confirmed

If ANY safety check fails:
- DO NOT proceed with deletion
- Return specific concerns
- Suggest remediation

If all checks pass:
- Return approval with verification details
- Include NEXT_ACTION for file-deleter
"
###

Consensus Strategies

  1. Unanimous Required: All agents must agree
  2. Majority Rules: >50% agreement proceeds
  3. Weighted Voting: Specialist agents have more weight
  4. Veto Power: Certain agents can block
  5. Human Escalation: When automated consensus fails
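These strategies compose. A hedged Python sketch of weighted voting with veto power (a hypothetical helper, not part of Claude Code; strategies 1 and 2 are the special cases `threshold=0.99...` and defaults):

```python
def weighted_consensus(votes, weights=None, threshold=0.5, veto_agents=()):
    """Combine agent votes: weighted majority with optional veto.

    votes: {agent_name: bool}; missing weights default to 1.
    Returns True only if no veto agent objects AND the weighted
    "yes" share strictly exceeds `threshold`.
    """
    weights = weights or {}
    if any(not votes[a] for a in veto_agents if a in votes):
        return False  # a veto blocks regardless of the majority
    total = sum(weights.get(a, 1) for a in votes)
    yes = sum(weights.get(a, 1) for a, v in votes.items() if v)
    return yes / total > threshold
```

Human escalation (strategy 5) is the fallback when this returns False on a decision the workflow cannot safely skip.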

Knowledge Extraction & Learning Loops

The Concept

Workflows shouldn't just execute - they should learn and improve:

---
name: knowledge-extractor
description: Extracts patterns and learnings from completed workflows
tools: Read, Write, Glob
---

After workflow completion, you extract reusable knowledge.

Analyze:
1. What patterns emerged?
2. What worked well?
3. What failed and why?
4. What could be optimized?

Create/Update: .claude/knowledge/[workflow-type]-patterns.md

Include:
- Success patterns with metrics
- Failure patterns with causes
- Optimization opportunities
- Time/resource usage
- Recommended improvements

### For Future Workflows ###
This knowledge will be loaded by future orchestrators:

"Before starting, check .claude/knowledge/ for relevant patterns"

Implementation: Self-Improving Consolidation

### NEXT_ACTION ###
Use knowledge-extractor with this prompt:
"Extract learnings from documentation consolidation:

Analyze:
- File patterns that consolidated well (>90% similarity)
- Patterns that required human intervention
- Average time per consolidation phase
- Memory/token usage patterns
- Error patterns and recovery success

Update: .claude/knowledge/consolidation-patterns.md

Structure as:
## Successful Patterns
- Pattern: README-v*.md files
  Success Rate: 95%
  Approach: [specific technique]
  
## Optimization Opportunities
- Finding: Parallel processing >20 files saves 70% time
- Implementation: [specific code]

This will help future consolidations run better."
###

Knowledge Application

Future workflows start with:

---
name: smart-consolidator
description: Consolidation orchestrator with learned optimizations
tools: Read, Glob, TodoWrite
---

STARTUP PROTOCOL:
1. Load knowledge base:
   Read .claude/knowledge/consolidation-patterns.md
   
2. Apply learned optimizations:
   - Use parallel for >20 files (learned threshold)
   - Pre-identify README patterns (95% success rate)
   - Allocate resources based on historical usage
   
3. Execute with improvements

Metrics That Matter

Track and learn from:

  • Execution time per phase
  • Success/failure rates by pattern
  • Resource usage (tokens, memory)
  • Human intervention frequency
  • Error recovery success rates
  • Quality scores from validations
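A minimal shape for persisting these metrics (field names are illustrative; adapt them to whatever your knowledge-extractor actually records):

```python
from dataclasses import dataclass, field

@dataclass
class WorkflowMetrics:
    """Per-run metrics a knowledge-extractor might append to
    .claude/knowledge/ for future orchestrators to load."""
    workflow_id: str
    phase_seconds: dict = field(default_factory=dict)  # phase -> duration
    successes: int = 0
    failures: int = 0
    human_interventions: int = 0

    @property
    def success_rate(self) -> float:
        total = self.successes + self.failures
        return self.successes / total if total else 0.0
```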

Pattern Selection Guide

Decision Tree for Orchestration Patterns

Start: How many operations needed?
    │
    ├─ Single operation → Direct agent call
    │
    ├─ 2-5 Sequential → Simple middleware chain
    │
    ├─ 5+ Sequential → Chain with progress tracking
    │
    ├─ Parallel possible?
    │   ├─ Yes, simple → NEXT_ACTIONS (parallel)
    │   └─ Yes, complex → MapReduce pattern
    │
    ├─ Needs consensus?
    │   ├─ Yes → Add consensus-facilitator
    │   └─ No → Direct execution
    │
    └─ Repeated workflow?
        ├─ Yes → Add knowledge extraction
        └─ No → One-time execution

Pattern Combinations

Scenario                      Patterns to Use
Simple pipeline               Middleware chain
Large dataset processing      MapReduce + Parallel
Critical operations           Consensus + Validation
Repeated workflows            Knowledge extraction + Learning
Complex orchestration         All patterns combined

Complexity vs. Performance Trade-offs

Simple Chain (3 agents):
- Setup time: ~30 seconds
- Execution: Sequential
- Complexity: Low
- Use when: Order matters, small dataset

Parallel Execution (10 agents):
- Setup time: ~1 minute
- Execution: Parallel (10x speedup)
- Complexity: Medium
- Use when: Independent operations, large dataset

MapReduce (50+ agents):
- Setup time: ~2 minutes
- Execution: Massive parallel
- Complexity: High
- Use when: Big data processing, complex aggregation

Full Orchestra (all patterns):
- Setup time: ~3 minutes
- Execution: Adaptive
- Complexity: Very high
- Use when: Mission-critical, learning system needed

Real-World Implementation

Complete Example: Documentation System

Let's build a production-ready documentation consolidation system using all patterns:

1. The Orchestrator

---
name: doc-consolidation-orchestrator
description: Advanced orchestrator for documentation consolidation with parallel execution, consensus, and learning
tools: Glob, TodoWrite, Read
color: purple
---

# Documentation Consolidation Orchestrator

## Startup Protocol

### Step 1: Load Knowledge
Read .claude/knowledge/doc-patterns.md if exists
Apply learned optimizations

### Step 2: Assess Scope
Use Glob to inventory target directory
Determine optimal execution strategy:
- <10 files: Sequential
- 10-50 files: Parallel chunks
- >50 files: MapReduce pattern

### Step 3: Initialize Tracking
Create TodoWrite with phases:
1. ☐ Inventory & Planning
2. ☐ Backup Creation
3. ☐ Parallel Analysis
4. ☐ Consensus Building
5. ☐ Parallel Consolidation
6. ☐ Validation
7. ☐ Cleanup
8. ☐ Knowledge Extraction

### Step 4: Return Execution Plan

For large directories (>20 files):

### NEXT_ACTIONS (PARALLEL EXPLORATION) ###
Based on learned patterns, spawning optimized explorers:
1. Use readme-specialist for README* files (95% success pattern)
2. Use api-doc-explorer for */api/* paths
3. Use guide-explorer for */guides/* paths
4. Use general-explorer for remaining files

After ALL complete:
Use synthesis-coordinator to merge findings

Context to preserve:
{
  "workflow_id": "doc-[timestamp]",
  "file_count": N,
  "learned_patterns_applied": ["readme-consolidation", "api-grouping"],
  "optimization_mode": "parallel"
}
###

2. The Synthesis Coordinator

---
name: synthesis-coordinator
description: Merges parallel exploration results and coordinates consolidation
tools: Read, TodoWrite
---

You receive results from parallel explorers and create consolidation plan.

Process:
1. Merge all exploration results
2. Identify consolidation clusters
3. Calculate confidence scores
4. Plan execution strategy

### For High-Confidence Clusters (>90%) ###

### NEXT_ACTIONS (PARALLEL CONSOLIDATION) ###
High-confidence clusters for immediate processing:
1. Use consolidator-1 for cluster "README-variants" (5 files, 95% similarity)
2. Use consolidator-2 for cluster "installation-guides" (3 files, 92% similarity)
3. Use consolidator-3 for cluster "api-endpoints" (8 files, 91% similarity)

After ALL complete:
Use validation-coordinator to verify outputs
###

### For Medium-Confidence Clusters (70-90%) ###

### NEXT_ACTION ###
Use consensus-facilitator with these proposals:
[List of medium-confidence clusters]

Review and determine:
- Safe to consolidate automatically?
- Need human review?
- Need additional analysis?
###

### For Low-Confidence (<70%) ###
Flag for human review - do not consolidate automatically
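The three confidence bands above reduce to a small routing function (thresholds taken from this section; the target names are hypothetical):

```python
def route_cluster(similarity: float) -> str:
    """Route a consolidation cluster by its confidence score.

    >90%: auto-consolidate; 70-90%: consensus review;
    <70%: never consolidate automatically.
    """
    if similarity > 0.90:
        return "parallel-consolidation"
    if similarity >= 0.70:
        return "consensus-facilitator"
    return "human-review"
```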

3. The Consensus Facilitator

---
name: consensus-facilitator
description: Reviews consolidation proposals and builds consensus
tools: TodoWrite
---

Review proposals and determine safe execution path.

### For Documentation Consolidation ###

Safety criteria:
1. No information loss risk
2. Clear topic alignment
3. No conflicting technical specifications
4. Version compatibility

If ALL criteria met:
→ Approve for automatic consolidation

If concerns exist:
→ Spawn specialist validators

If unresolvable:
→ Escalate to human with clear options

### NEXT_ACTION ###
[Based on consensus outcome]

4. The Knowledge Extractor

---
name: knowledge-extractor
description: Extracts patterns for future improvement
tools: Read, Write
---

After workflow completion, extract learnings:

### Patterns to Capture ###
1. File naming patterns that indicate duplicates
2. Directory structures that suggest consolidation
3. Similarity thresholds that proved accurate
4. Time taken for each phase
5. Error patterns and recovery methods

### Update Knowledge Base ###

Append to .claude/knowledge/doc-patterns.md:

## Consolidation Run [timestamp]
- Files processed: N
- Reduction achieved: X%
- Patterns identified:
  - Pattern: "README-*.md variants"
    Success: 95%
    Action: Auto-consolidate with high confidence
  - Pattern: "version-specific docs"
    Success: 60%
    Action: Require human review
- Optimization discovered:
  - Parallel threshold: 20 files (was 50)
  - New clustering algorithm 30% more accurate

### NEXT_ACTION ###
Workflow complete. Knowledge preserved for future runs.
###

Troubleshooting & Edge Cases

Common Issues and Solutions

Issue: Lost Context in Chain

Symptom: Agent doesn't have information from previous steps

Solution: Always pass full context forward

### NEXT_ACTION ###
Use next-agent with this prompt:
"[Task description]
Complete context from previous agents:
[Include ALL accumulated context here]
"
###

Issue: Parallel Agents Not Synchronized

Symptom: Synthesis agent runs before all parallel agents complete

Solution: Explicit barrier synchronization

### NEXT_ACTIONS (PARALLEL) ###
1. Agent-1 for task-1
2. Agent-2 for task-2
3. Agent-3 for task-3

CRITICAL: After ALL complete:
Use synthesis-agent to merge ALL results
Do not proceed until all three provide output
###

Issue: Infinite Loops

Symptom: Agents keep calling each other

Solution: Loop detection and limits

Context includes:
{
  "chain_depth": 5,
  "max_depth": 10,
  "visited_agents": ["agent-1", "agent-2"]
}

If chain_depth >= max_depth:
  ABORT: Maximum chain depth reached
  Return final results without NEXT_ACTION
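The depth guard can be expressed directly over the travelling context (a sketch; in practice these checks live in the orchestrator's prompt rather than in code):

```python
def guard_chain(context, next_agent, max_depth=10):
    """Validate the travelling context before dispatching the next agent.

    Raises on depth overflow or a revisited agent; otherwise returns
    the updated context to pass forward in the NEXT_ACTION prompt.
    """
    if context["chain_depth"] >= max_depth:
        raise RuntimeError("ABORT: maximum chain depth reached")
    if next_agent in context["visited_agents"]:
        raise RuntimeError(f"ABORT: loop detected at {next_agent}")
    return {
        **context,
        "chain_depth": context["chain_depth"] + 1,
        "visited_agents": context["visited_agents"] + [next_agent],
    }
```

Note that `visited_agents` forbids any revisit; workflows that legitimately return to an agent (like the consensus re-review above) would track visit counts instead.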

Issue: Specialist Agent Not Found

Symptom: "Agent not found" error

Solution: Graceful degradation

### NEXT_ACTION ###
Primary: Use specialized-agent for [task]

If specialized-agent not available:
Fallback: Use general-agent with additional instructions:
"Perform [task] with these specialized requirements: [details]"
###

Issue: Consensus Deadlock

Symptom: Agents can't agree, human unavailable

Solution: Timeout and safe defaults

After 3 consensus attempts:
- Document disagreement
- Choose safest option (usually: do nothing)
- Flag for later human review
- Continue workflow with non-controversial items

Performance Optimization Tips

  1. Batch Size for Parallel

    • Optimal: 5-10 agents in parallel
    • Too many (>20) causes coordination overhead
    • Too few (<3) isn't worth the setup cost
  2. Context Size Management

    If context > 10KB:
      - Summarize non-critical parts
      - Use references instead of full content
      - Store details in temp files
  3. Caching Patterns

    Check before expensive operations:
    - If result exists in .cache/
    - If timestamp < 1 hour
    - Use cached result
  4. Resource Monitoring

    Track in context:
    - token_usage: current/limit
    - execution_time: current
    - parallel_agents_active: count
    
    If approaching limits:
    - Switch to sequential
    - Reduce batch sizes
    - Simplify operations
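The caching pattern from tip 3 can be sketched as a small file-backed memoizer (a hypothetical helper; the one-hour TTL matches the rule above):

```python
import json
import time
from pathlib import Path

def cached(key, compute, cache_dir=".cache", ttl_seconds=3600):
    """Return a cached result if fresher than `ttl_seconds`, else
    recompute and store. `compute` must return JSON-serializable data."""
    path = Path(cache_dir) / f"{key}.json"
    if path.exists() and time.time() - path.stat().st_mtime < ttl_seconds:
        return json.loads(path.read_text())   # cache hit
    result = compute()                        # cache miss: do the work
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(result))
    return result
```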

The Future of Agent Orchestration

What's Next?

Event-Driven Agents

On file_change in ./src:
  Trigger: code-review-chain
  
On push to main:
  Trigger: documentation-update-chain
  
On error in production:
  Trigger: debugging-chain with context

Cross-Project Agent Networks

Global agent registry:
- organization/security-scanner
- organization/performance-analyzer
- team-x/specialized-processor

Import and use:
### NEXT_ACTION ###
Use organization/security-scanner from global registry
###

Agent Composition Patterns

---
name: composite-agent
extends: [base-analyzer, security-scanner, performance-checker]
tools: [inherited]
---

Combines capabilities of multiple agents into one

Machine Learning Integration

ML-powered routing:
- Learn optimal agent selection
- Predict execution times
- Auto-tune parameters
- Anomaly detection in workflows

Visual Workflow Builders

Drag-and-drop interface → Generates agent chain code
Visual debugging → See context flow between agents
Real-time monitoring → Watch parallel execution
Performance analytics → Identify bottlenecks

Getting Started Today

  1. Start Simple: Build a 2-3 agent chain
  2. Add Complexity Gradually: Introduce parallel execution
  3. Measure Everything: Track what works
  4. Extract Knowledge: Build your pattern library
  5. Share & Collaborate: The community is discovering new patterns daily

Resources

  • Example Repositories:

    • Basic chains: github.com/examples/simple-chains
    • Parallel patterns: github.com/examples/parallel-agents
    • Full orchestration: github.com/examples/orchestration-suite
  • Community:

    • Discord: Claude Code Orchestration
    • Reddit: r/ClaudeCode
    • Stack Overflow: [claude-code-orchestration]

Conclusion: The Power of Constraints

The inability of subagents to invoke one another seemed like a limitation. Instead, it forced us to discover something more powerful: emergent orchestration through message passing.

This pattern is:

  • More flexible than centralized orchestration
  • More scalable through parallel execution
  • More intelligent through consensus and learning
  • More maintainable through clear separation of concerns

We're not just building agent chains - we're creating self-organizing systems that improve themselves over time.

The middleware pattern is just the beginning. As a community, we're discovering new patterns weekly. Some will become standard practice. Others will inspire the next generation of agent frameworks.

The question isn't "what can we build?" but "what can't we build?"

Welcome to the age of emergent agent orchestration. Let's build something amazing.


Quick Reference Card

# Basic Chain
### NEXT_ACTION ###
Use next-agent with context

# Parallel Execution  
### NEXT_ACTIONS (PARALLEL) ###
1. Use agent-1
2. Use agent-2
After ALL complete: Use synthesizer

# Conditional Routing
If condition:
  NEXT_ACTION: Use agent-a
Else:
  NEXT_ACTION: Use agent-b

# Human Escalation
### HUMAN_INTERVENTION_REQUIRED ###
[Clear options and context]

# Knowledge Extraction
### NEXT_ACTION ###
Use knowledge-extractor to capture patterns

# Context Passing
Context: {
  workflow_id: "unique-id",
  phase: "current-phase",
  accumulated_data: {...},
  chain_depth: N
}

# Error Handling
If error:
  NEXT_ACTION: Use error-handler
  Fallback: Use general-agent
  Abort: Return results without NEXT_ACTION

Ready to orchestrate? Start with a simple chain, then add parallel execution, then consensus, then knowledge extraction. Before you know it, you'll have built a self-improving system that seemed impossible just yesterday.

The tools are ready. The patterns are proven. What will you orchestrate?
