
In a LangGraph-based multi-agent setup, when a researcher agent produces an output, this output is passed into a supervisor agent. The supervisor uses the output to determine which edge of the graph to traverse next. This often involves wrapping the researcher's output in a structured message or passing it as part of a system prompt.


πŸ”„ How Context Propagation Works

  • Step 1: The researcher agent queries tools (e.g., Tavily) and returns a summary (a minimal researcher sketch follows this list).
  • Step 2: That summary becomes the input context for the supervisor.
  • Step 3: The supervisor LLM is prompted with this context plus a system prompt asking it to choose the next route.
  • Step 4: Based on its reasoning, the supervisor picks the next step in the graph.
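
A minimal researcher sketch for Step 1, assuming the same openai_client configured later in this gist and a hypothetical search_tavily helper wrapping the Tavily search API:

def researcher_agent(question):
    # search_tavily is a hypothetical wrapper around the Tavily search API.
    raw_results = search_tavily(question)
    prompt = f"Summarize the following search results for the question: {question}\n\n{raw_results}"
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": prompt}]
    )
    return response.choices[0].message.content.strip()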

🌐 Graph Flow Example

graph TD
  A[User Question] --> B[Researcher Agent]
  B --> C[Tool: Tavily]
  C --> D[Research Summary]
  D --> E[Supervisor Agent]
  E --> F{Decision Node}
  F -- Next: Analysis --> G[Analyst Agent]
  F -- Next: Finalize --> H[Response Formatter]
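
The diagram above can be wired up in LangGraph roughly as follows. This is a minimal sketch: researcher_agent and supervisor_router are the functions shown in this gist (assuming supervisor_router returns the next node name, as in the routing helper sketched later), while analyst_node and formatter_node are assumed to be defined elsewhere.

from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class FlowState(TypedDict):
    question: str
    summary: str
    decision: str

def researcher_node(state):
    # Researcher output is written into shared state for downstream agents.
    return {"summary": researcher_agent(state["question"])}

def supervisor_node(state):
    # The supervisor reads the researcher's summary and records its routing choice.
    return {"decision": supervisor_router(state["summary"])}

builder = StateGraph(FlowState)
builder.add_node("researcher", researcher_node)
builder.add_node("supervisor", supervisor_node)
builder.add_node("analyst", analyst_node)      # assumed to exist elsewhere
builder.add_node("formatter", formatter_node)  # assumed to exist elsewhere
builder.add_edge(START, "researcher")
builder.add_edge("researcher", "supervisor")
builder.add_conditional_edges(
    "supervisor",
    lambda state: state["decision"],
    {"analyst": "analyst", "formatter": "formatter"},
)
builder.add_edge("analyst", END)
builder.add_edge("formatter", END)

graph = builder.compile()
# result = graph.invoke({"question": "What's new in LangGraph?"})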

🧠 Supervisor Prompt Construction

def build_supervisor_prompt(summary):
    return f"""
You are a task supervisor.
Given the following research result, decide the next agent to invoke:

---
{summary}
---

Choose between: ["analyst", "formatter"]
Respond with your choice only.
"""

πŸ€– Supervisor Agent

from langsmith import traceable
from openai import OpenAI

openai_client = OpenAI()

@traceable(name="supervisor")
def supervisor_router(research_summary):
    # Build the routing prompt from the researcher's summary.
    prompt = build_supervisor_prompt(research_summary)
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": prompt}]
    )
    # Normalize the model's answer ("analyst" or "formatter") and route on it.
    decision = response.choices[0].message.content.strip().lower()
    return route_to_next(decision)
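
route_to_next is not defined in this gist; one hedged interpretation is a small lookup that maps the model's answer to the next node name, with a fallback for unexpected output:

# Hypothetical routing helper: map the supervisor's answer to a node name.
VALID_ROUTES = {"analyst", "formatter"}

def route_to_next(decision):
    # Fall back to the formatter if the model replies with something unexpected.
    return decision if decision in VALID_ROUTES else "formatter"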

🧠 Using Graph Memory

Graph memory can be used to persist:

  • Prior agent outputs (e.g., multiple research passes)
  • Supervisor decisions
  • Tool call results

This enables long-term planning and context chaining.

class Memory:
    """Simple in-memory history store for chaining context across agent turns."""

    def __init__(self):
        self.history = []

    def update(self, state):
        self.history.append(state)

    def get_context(self):
        return "\n".join(self.history)

memory = Memory()

Inject memory into agents:

# new_input is whatever the next agent receives (a user message or a prior output).
context = memory.get_context()
full_prompt = f"Context so far:\n{context}\nNew input:\n{new_input}"
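
For example, each hop can append its output before the next prompt is built. This sketch reuses supervisor_router from above and assumes research_summary holds the researcher's latest output:

# Record each agent's output so later prompts can see the full history.
memory.update(f"researcher: {research_summary}")
next_node = supervisor_router(research_summary)
memory.update(f"supervisor: {next_node}")
# memory.get_context() now includes both the summary and the routing decision.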

πŸ“‹ Multi-Step Plan Execution

You can construct multi-step execution plans based on the supervisor's strategy or user intent.

graph TD
  A[Initial Input] --> B[Planner Agent]
  B --> C[Step List: Research, Analyze, Format]
  C --> D[Supervisor]
  D --> E1[Researcher]
  E1 --> E2[Analyst]
  E2 --> E3[Formatter]
  E3 --> F[Final Output]

The planner can be an LLM that generates an action plan. A hardcoded plan is shown here; an LLM-generated version is sketched below:

plan = ["research", "analyze", "format"]
for step in plan:
    # run_agent is assumed to dispatch `step` to the matching agent.
    result = run_agent(step, input_data)
    memory.update(f"{step}: {result}")
    input_data = result
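
To have an LLM produce that plan instead of hardcoding it, a hedged sketch (reusing openai_client) can ask for a comma-separated list of steps and parse the reply:

def plan_steps(user_request):
    prompt = (
        "You are a planner. Reply with a comma-separated list of steps, "
        "chosen only from: research, analyze, format.\n\n"
        f"Request: {user_request}"
    )
    response = openai_client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": prompt}]
    )
    # Parse "research, analyze, format" into ["research", "analyze", "format"].
    return [step.strip() for step in response.choices[0].message.content.split(",")]

plan = plan_steps("Compare the top three open-source vector databases")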

🧭 Key Takeaway

The output from one agent is injected into the context of the next agent via structured system prompts. This lets the LLM (such as an OpenAI model) reason about and control routing within the graph without hardcoded transitions.

With memory and multi-step plans, you gain dynamic planning and long-horizon reasoning capabilities in LangGraph.
