
@decagondev
Created June 30, 2025 15:56

Human-in-the-Loop in LangGraph

A Human-in-the-Loop Node is Possible

LangGraph supports the inclusion of a human-in-the-loop node to evaluate intermediate results or guide the workflow. This is a powerful feature for scenarios where human judgment is needed, such as reviewing outputs, providing feedback, or making decisions in a multi-agent system. Below, I explain how this can be implemented in the context of the provided notebook and LangGraph’s capabilities, addressing its alignment with the class materials.

How Human-in-the-Loop Fits in LangGraph

  • LangGraph’s Flexibility: LangGraph’s nodes are functions that process inputs and update the state (slides: “Node: a function that… processes [inputs] and returns results”). These functions can be LLMs, tools, or custom logic, including human interactions. A human-in-the-loop node can prompt a user for input, evaluate results, or approve/reject outputs, integrating human feedback into the graph’s state.
  • State Management: The state (e.g., AgentState in the notebook, with messages and sender) can store human inputs (e.g., as HumanMessage objects) and pass them to subsequent nodes, ensuring the workflow remains dynamic and context-aware (slides: “State is a dictionary of relative information… updated as the graph is executed”).
  • Conditional Edges: The notebook’s router function uses conditional edges to direct workflow based on state (e.g., tool calls or “FINAL ANSWER”). A human-in-the-loop node can update the state with human feedback, and conditional edges can route to different nodes based on that feedback (e.g., revise or finalize).
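The state schema described above can be sketched as a `TypedDict` with a reducer. Here is a minimal version, using plain dicts in place of the LangChain `BaseMessage` objects the notebook actually uses, so the sketch stays self-contained:

```python
# A minimal sketch of the notebook-style AgentState. Plain dicts stand in
# for the LangChain message objects used in the notebook.
import operator
from typing import Annotated, TypedDict

class AgentState(TypedDict):
    # Annotating with operator.add tells LangGraph to append new messages
    # to the list instead of overwriting it on each state update
    messages: Annotated[list, operator.add]
    sender: str
```

Because the reducer is `operator.add`, each node only needs to return the messages it produced; LangGraph merges them into the accumulated history.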

Implementation in the Notebook

The notebook demonstrates a multi-agent graph with a Researcher and Chart Generator, but it can be extended to include a human-in-the-loop node. Here’s how:

  • Node Definition:
    • Add a node (e.g., human_review_node) that prompts the user to evaluate the current state (e.g., review the Researcher’s data or the Chart Generator’s output).
    • The node can use a function to collect human input (e.g., via input() in Python or an external interface) and update the state with the feedback.
  • Example Code:
    from langchain_core.messages import HumanMessage

    def human_review_node(state):
        # Show the latest output (e.g., research data or chart code) to the reviewer
        last_message = state["messages"][-1]
        print(f"Current output: {last_message.content}")
        human_feedback = input("Please review and provide feedback ('approve', 'revise', or 'add more data'): ")
        # Record the feedback in the state so the router can act on it
        return {
            "messages": [HumanMessage(content=human_feedback, name="Human")],
            "sender": "Human",
        }
  • Graph Integration:
    • Add the node to the graph:
      workflow.add_node("human_review", human_review_node)
    • Add conditional edges to route based on human feedback:
      from typing import Literal

      def human_router(state) -> Literal["Researcher", "chart_generator", "__end__"]:
          feedback = state["messages"][-1].content.strip().lower()
          if feedback == "approve":
              return "__end__"
          elif feedback == "revise":
              return "chart_generator"
          elif feedback == "add more data":
              return "Researcher"
          # Unrecognized feedback: end the run rather than routing to an
          # unmapped node (every return value must appear in the edge mapping)
          return "__end__"
      
      workflow.add_conditional_edges(
          "human_review",
          human_router,
          {"Researcher": "Researcher", "chart_generator": "chart_generator", "__end__": END}
      )
    • Example Workflow: After the Researcher fetches GDP data or the chart_generator produces a chart, the human_review node prompts the user to approve, request revisions, or ask for more data, updating the state and routing accordingly.

Alignment with Class Materials

  • Multi-Agent Collaboration: The notebook and slides emphasize multi-agent graphs (slides: “Multi-agent graph concept”). A human-in-the-loop node acts as an additional “agent” providing feedback, fitting the collaborative model inspired by the AutoGen paper.
  • Supervisor Concept: The slides’ “supervisor concept” can be extended to include human oversight. The human node can act as a supervisor, evaluating outputs and directing the workflow (e.g., looping back to the Researcher for more data).
  • Iterative Workflows: The class objective (“Code an AI graph that writes a report, provides feedback, and rewrites the report n number of times”) aligns with human-in-the-loop, as human feedback can replace or complement automated feedback in iterative processes.
  • State Persistence: The notebook’s AgentState (a list of messages and sender) supports human inputs as HumanMessage objects, ensuring seamless integration with the graph’s state (slides: “State is passed to the next node”).

Practical Considerations

  • Interactivity: Human-in-the-loop nodes require a mechanism to collect input (e.g., command-line input(), a web interface, or API). For the notebook, a simple input() works in a Jupyter environment, but production systems might need a more robust UI.
  • State Updates: Ensure the human node updates the state consistently (e.g., adding a HumanMessage with clear feedback) to avoid breaking downstream nodes.
  • Termination: The notebook uses “FINAL ANSWER” to end the workflow. Human feedback can trigger this (e.g., “approve” routes to END) or request further iterations, aligning with the class’s iterative objective.
  • Debugging: Use LangGraph’s tracing (README: LANGCHAIN_TRACING_V2) to monitor how human inputs affect the workflow, ensuring the graph behaves as expected.

Example Use Case

In the notebook’s context (fetching U.S. GDP data and charting it), a human-in-the-loop node could:

  • Review the Researcher’s data (e.g., “Is this GDP data complete?”).
  • Evaluate the Chart Generator’s output (e.g., “Does the bar graph look correct?”).
  • Provide feedback like “add more years” (routes to Researcher) or “fix the chart colors” (routes to Chart Generator).

This enhances the workflow by incorporating human judgment, especially for subjective tasks like chart aesthetics or data relevance.
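Free-form feedback like the phrases above rarely matches the router's exact strings, so a small normalization helper (hypothetical, not in the notebook) can map it onto the known routes:

```python
def normalize_feedback(text: str) -> str:
    """Map free-form human feedback onto one of the router's known routes.

    Illustrative only: the route names match the graph sketched in this
    gist, and the keyword rules are deliberately simple.
    """
    text = text.strip().lower()
    if "approve" in text:
        return "__end__"
    if "revise" in text or "fix" in text:
        return "chart_generator"   # e.g. "fix the chart colors"
    if "more" in text:
        return "Researcher"        # e.g. "add more years"
    # Unrecognized feedback: route back to the reviewer to ask again
    return "human_review"
```

The router can then branch on `normalize_feedback(...)` instead of comparing raw input against exact strings.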

Why It Works

  • Flexibility of Nodes: As noted in prior answers, LangGraph nodes can be non-LLM functions (e.g., human input), making human-in-the-loop a natural fit.
  • Dynamic Routing: Conditional edges (notebook’s router) allow the graph to adapt based on human feedback, similar to how it handles tool calls or “FINAL ANSWER.”
  • Class Relevance: Human-in-the-loop aligns with the slides’ emphasis on controllability and complex workflows, enhancing the multi-agent system with human oversight.

Conclusion

A LangGraph node can include a human-in-the-loop to evaluate results, and this is easily implemented by adding a node that collects human feedback and updates the state, with conditional edges to route the workflow accordingly. This approach fits the notebook’s multi-agent structure, the class’s focus on iterative workflows, and LangGraph’s flexibility for custom nodes. It’s particularly valuable for tasks requiring human judgment, such as validating data or approving outputs, as in the notebook’s GDP charting task or the class’s report-writing objective.
