Think deeply about this infinite research task. You are about to orchestrate a sophisticated parallel multi-agent research system where each agent explores unique, non-overlapping angles of any topic.
Parse the following arguments from "$ARGUMENTS":
- topic: The research topic to investigate
- num_agents: Number of parallel research agents (3-10, default 5)
- max_iterations: Number of iterations (1-N or "infinite", default "infinite")
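One way the three arguments could be split out of "$ARGUMENTS" — a minimal sketch, assuming a positional convention (topic first, then the two optional numbers) and `shlex`-style quoting for multi-word topics; none of these parsing details are mandated by the spec:

```python
import shlex

def parse_arguments(raw: str):
    """Parse "$ARGUMENTS" into (topic, num_agents, max_iterations).

    Assumed convention: topic first (quoted if multi-word), then
    optional num_agents and max_iterations; defaults per the spec.
    """
    tokens = shlex.split(raw)
    topic = tokens[0] if tokens else ""
    num_agents = int(tokens[1]) if len(tokens) > 1 else 5
    num_agents = max(3, min(10, num_agents))  # clamp to the documented 3-10 range
    max_iterations = tokens[2] if len(tokens) > 2 else "infinite"
    if max_iterations != "infinite":
        max_iterations = int(max_iterations)
    return topic, num_agents, max_iterations
```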
Deep Understanding of Research Requirements:
- Topic: Extract and understand the core research topic
- Scope: Determine breadth and depth requirements
- Angles: Identify N distinct research perspectives where N = num_agents
- Evolution: Plan how research should deepen with each iteration
Agent Specialization Matrix: Based on num_agents, assign UNIQUE, NON-OVERLAPPING research angles:
For num_agents = 5 (default):
- ALPHA: Mainstream/Academic/Authoritative sources
- BETA: Contrarian/Critical/Alternative perspectives
- GAMMA: Technical/Implementation/Practical details
- DELTA: Historical/Evolutionary/Origins context
- EPSILON: Future/Predictions/Emerging trends
For num_agents = 3 (minimum):
- ALPHA: Current state & mainstream understanding
- BETA: Problems/Criticisms/Alternatives
- GAMMA: Future implications & applications
For num_agents = 10 (maximum):
Add: ZETA (Cross-domain), ETA (Ethics/Risk), THETA (Global), IOTA (Economic), KAPPA (Social)
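The specialization matrix above can be held as plain data. A minimal sketch, with two stated assumptions: agent counts between the documented presets (e.g. 4 or 7) take a prefix of the ten-role ordering, and the three-agent case reuses the five-agent focus wordings rather than the slightly different ones listed above:

```python
# Ordered role matrix; role names and focuses come from the spec above.
ROLE_MATRIX = [
    ("alpha",   "Mainstream/Academic/Authoritative sources"),
    ("beta",    "Contrarian/Critical/Alternative perspectives"),
    ("gamma",   "Technical/Implementation/Practical details"),
    ("delta",   "Historical/Evolutionary/Origins context"),
    ("epsilon", "Future/Predictions/Emerging trends"),
    ("zeta",    "Cross-domain"),
    ("eta",     "Ethics/Risk"),
    ("theta",   "Global"),
    ("iota",    "Economic"),
    ("kappa",   "Social"),
]

def assign_roles(num_agents: int) -> dict:
    """Return {agent_id: focus} for the requested agent count.

    Assumption: intermediate counts simply take the first
    num_agents entries of the ordered matrix.
    """
    if not 3 <= num_agents <= 10:
        raise ValueError("num_agents must be between 3 and 10")
    return dict(ROLE_MATRIX[:num_agents])
```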
Create Research Structure:
/deep-research/{sanitized_topic}/
├── .session # Agent coordination & search tracking
├── overview.md # Master research document
├── synthesis-{timestamp}.md # Periodic synthesis (every 3 iterations)
├── agent-alpha-research.md # Individual agent findings
├── agent-beta-research.md
├── agent-gamma-research.md
├── agent-delta-research.md
├── agent-epsilon-research.md
└── convergence-map.json # Tracks emerging themes across agents
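Scaffolding the tree above could look like this — a sketch, assuming a lowercase-and-hyphens sanitization rule for the topic slug (the spec does not define one) and the ordered agent ids from the specialization matrix:

```python
import re
from pathlib import Path

AGENT_ORDER = ["alpha", "beta", "gamma", "delta", "epsilon",
               "zeta", "eta", "theta", "iota", "kappa"]

def create_research_structure(topic: str, num_agents: int,
                              root: str = "deep-research") -> Path:
    """Create the /deep-research/{sanitized_topic}/ layout shown above."""
    # Assumed sanitization: lowercase, non-alphanumerics collapsed to hyphens.
    sanitized = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    base = Path(root) / sanitized
    base.mkdir(parents=True, exist_ok=True)
    (base / "overview.md").touch()
    for agent_id in AGENT_ORDER[:num_agents]:
        (base / f"agent-{agent_id}-research.md").touch()
    (base / "convergence-map.json").write_text("{}")
    return base
```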
Initialize .session file:
{
  "topic": "{topic}",
  "created": "{timestamp}",
  "iteration": 0,
  "agents": {
    "alpha": {
      "role": "Mainstream Explorer",
      "searched_queries": [],
      "fetched_urls": [],
      "key_findings": [],
      "avoided_terms": []
    },
    // ... other agents
  },
  "global_search_history": [],
  "unique_domains_explored": [],
  "convergence_topics": []
}
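Building and persisting that initial document is mechanical; a sketch that mirrors the JSON above field for field (the role strings would come from the specialization matrix):

```python
import json
from datetime import datetime, timezone

def init_session(topic: str, agent_ids: list) -> dict:
    """Build the initial .session document sketched above."""
    return {
        "topic": topic,
        "created": datetime.now(timezone.utc).isoformat(),
        "iteration": 0,
        "agents": {
            agent_id: {
                "role": "",  # filled in from the specialization matrix
                "searched_queries": [],
                "fetched_urls": [],
                "key_findings": [],
                "avoided_terms": [],
            }
            for agent_id in agent_ids
        },
        "global_search_history": [],
        "unique_domains_explored": [],
        "convergence_topics": [],
    }

def write_session(path, session: dict) -> None:
    with open(path, "w") as fh:
        json.dump(session, fh, indent=2)
```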
CRITICAL COORDINATION RULES:
- Zero Overlap Principle: Each agent MUST search completely different queries
- Domain Diversity: Agents should explore different types of sources
- Perspective Isolation: Each agent maintains a unique analytical lens
- Cross-Pollination: Agents read others' findings but explore orthogonal angles
Collision Avoidance Protocol:
def ensure_unique_search(agent_id, proposed_query):
    # Check against ALL previous searches by ALL agents
    # Use semantic similarity < 30% threshold
    # If collision detected, modify query with agent-specific terms
    # Maintain global_search_history in .session
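The stub above could be fleshed out as follows. A true semantic-similarity check would need an embedding model; this sketch substitutes token-level Jaccard overlap as a cheap stand-in, which is an explicit deviation from (and simplification of) the semantic check the protocol names:

```python
def jaccard_similarity(a: str, b: str) -> float:
    """Token-overlap similarity in [0, 1]; a cheap stand-in for the
    semantic-similarity check the protocol describes."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / len(ta | tb)

def ensure_unique_search(proposed_query: str, global_search_history: list,
                         agent_terms: list, threshold: float = 0.30) -> str:
    """Return proposed_query unchanged, or a variant salted with
    agent-specific terms when it collides with any previous search."""
    for past in global_search_history:
        if jaccard_similarity(proposed_query, past) >= threshold:
            # Collision: bias the query toward this agent's unique lens.
            return proposed_query + " " + " ".join(agent_terms)
    return proposed_query
```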
For EACH iteration, deploy ALL agents SIMULTANEOUSLY:
=== ITERATION {N}: LAUNCHING {num_agents} PARALLEL RESEARCH AGENTS ===
# PREPARE CONTEXTS AND DEPLOY ALL AGENTS IN PARALLEL
parallel_tasks = []
for agent_id in active_agents:
    context = {
        "iteration": N,
        "session_snapshot": read_session(),
        "other_agents_searches": get_all_other_searches(agent_id),
        "unique_angle": AGENT_STRATEGIES[agent_id],
        "avoidance_list": compile_global_search_list()
    }
    task = create_research_task(agent_id, context)
    parallel_tasks.append(deploy_sub_agent(task))

# PARALLEL EXECUTION
execute_all_tasks_simultaneously(parallel_tasks)
wait_for_all_completions()
update_global_session()
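The deploy-then-barrier pattern above can be made concrete with `concurrent.futures` — a sketch in which `run_agent` is a hypothetical callable standing in for whatever sub-agent mechanism the host system actually provides:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_wave(active_agents, run_agent):
    """Launch one research task per agent simultaneously and block at a
    coordination barrier until every agent finishes.

    run_agent(agent_id) is a placeholder for the host's sub-agent
    deployment; it returns that agent's findings.
    """
    results = {}
    with ThreadPoolExecutor(max_workers=len(active_agents)) as pool:
        futures = {pool.submit(run_agent, a): a for a in active_agents}
        for fut in as_completed(futures):  # coordination barrier
            results[futures[fut]] = fut.result()
    return results
```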
INDIVIDUAL AGENT TASK TEMPLATE:
TASK: Deep Research Agent {AGENT_ID} - {SPECIALIZATION} (Iteration {N})
You are Research Agent {AGENT_ID}, specialized in: {UNIQUE_FOCUS}
CRITICAL MISSION: Find information that NO OTHER AGENT will find.
COORDINATION DATA:
- Session file: /deep-research/{topic}/.session
- Other agents' searches: [COMPLETE LIST - YOU MUST AVOID ALL]
- Your previous searches: [LIST]
- Forbidden overlap threshold: 30% similarity
YOUR UNIQUE RESEARCH PROTOCOL:
1. DIVERGENT SEARCH STRATEGY:
- Read .session to see ALL previous searches
- Generate 5 COMPLETELY UNIQUE search queries
- Each query MUST be < 30% similar to any existing
- Use your specialization lens: {SPECIALIZATION_KEYWORDS}
- Avoid domains already heavily explored
2. SEARCH EXECUTION:
For each unique query:
a) web_search with your specialized query
b) Analyze results for YOUR SPECIFIC ANGLE
c) web_fetch 2-3 most promising URLs
d) Extract findings relevant to YOUR PERSPECTIVE
3. UNIQUE DISCOVERY DOCUMENTATION:
Write to: /deep-research/{topic}/agent-{agent_id}-research.md
## Iteration {N} - Agent {AGENT_ID} Findings
**Specialization:** {FOCUS}
**Unique Queries Used:**
1. "{query1}" - Why unique: {reasoning}
2. "{query2}" - Why unique: {reasoning}
### Exclusive Discoveries
[Findings that ONLY your specialization would uncover]
### Contrasting Perspectives
[How your findings challenge/complement other agents]
### Unexplored Territories
[Areas within your domain still requiring investigation]
4. SESSION UPDATE:
- Add all queries to .session
- Mark explored domains
- Flag breakthrough findings
- Update convergence topics if patterns emerge
REMEMBER: Your value is in finding what others DON'T find!
Wave-Based Parallel Execution:
iteration = 1
while should_continue(iteration, max_iterations):
    # WAVE PLANNING
    print(f"\n{'='*60}")
    print(f"ITERATION {iteration}: Deploying {num_agents} Parallel Research Agents")
    print(f"{'='*60}\n")

    # PARALLEL DEPLOYMENT
    agent_tasks = []
    for agent_id in active_agents:
        # Each agent gets fresh context with collision avoidance
        task = create_unique_research_task(agent_id, iteration)
        agent_tasks.append((agent_id, task))

    # SIMULTANEOUS EXECUTION
    print("🚀 LAUNCHING ALL AGENTS SIMULTANEOUSLY...")
    for agent_id, task in agent_tasks:
        print(f"   ⚡ Agent {agent_id.upper()} - {STRATEGIES[agent_id]['name']}")
        deploy_task(task)

    # COORDINATION BARRIER
    print("\n⏳ Waiting for all agents to complete...")
    wait_for_all_agents_completion()

    # CONVERGENCE ANALYSIS
    analyze_convergence_patterns()
    update_global_session()

    # SYNTHESIS (every 3 iterations)
    if iteration % 3 == 0:
        deploy_synthesis_agent(iteration)

    # CONTEXT MONITORING
    if get_context_usage() > 0.8:
        print("\n📊 Approaching context limit - Final synthesis...")
        deploy_final_synthesis()
        break

    # STAGNATION DETECTION
    if detect_research_stagnation():
        print("\n🔄 Injecting creativity boost...")
        inject_wildcard_perspectives()

    iteration += 1
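The `detect_research_stagnation()` call in the loop above is left undefined by the spec. One possible heuristic — flag stagnation when recent iterations stop surfacing new domains — is sketched below; the `domains_by_iteration` field, the two-iteration window, and the threshold are all assumptions, not part of the original `.session` design:

```python
def detect_research_stagnation(session: dict, window: int = 2,
                               min_new_domains: int = 1) -> bool:
    """Flag stagnation when the last `window` iterations each surfaced
    fewer than `min_new_domains` newly explored domains.

    Assumes a hypothetical session field, "domains_by_iteration": a list
    with one entry per iteration, holding the domains first seen then.
    """
    history = session.get("domains_by_iteration", [])
    if len(history) < window:
        return False  # not enough history to judge
    new_per_iteration = [len(domains) for domains in history[-window:]]
    return all(n < min_new_domains for n in new_per_iteration)
```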
Progressive Sophistication Strategy:
- Iterations 1-3: Broad exploration, establishing baselines
- Iterations 4-6: Deep dives into promising areas
- Iterations 7-9: Cross-domain synthesis and pattern recognition
- Iterations 10+: Revolutionary connections and breakthrough insights
Quality Through Diversity:
- Each parallel agent MUST maintain unique perspective
- Collision detection ensures zero overlap in searches
- Domain diversity prevents echo chamber effects
- Cross-pollination happens through reading, not searching
Coordination Excellence:
- All agents launch truly simultaneously
- Session file provides real-time coordination
- Global search history prevents any duplication
- Convergence tracking identifies emerging themes
Infinite Scalability:
- Wave-based execution manages context efficiently
- Progressive summarization maintains continuity
- Fresh agent instances prevent context accumulation
- Graceful conclusion when approaching limits
Before beginning execution, engage in extended analysis of:
Research Topology:
- How to partition the topic into N non-overlapping domains
- Optimal search query generation for maximum coverage
- Semantic distance calculations for collision avoidance
- Cross-domain insight synthesis strategies
Parallel Orchestration:
- Simultaneous agent deployment mechanisms
- Real-time coordination through session management
- Conflict resolution for file access
- Quality assurance across parallel streams
Infinite Optimization:
- Context usage prediction models
- Stagnation detection algorithms
- Creativity injection techniques
- Graceful termination strategies
Begin with deep contemplation of the research topology, then execute the parallel multi-agent research system with perfect coordination and zero overlap.