LangGraph is a Python library for building stateful, multi-agent AI applications with Large Language Models (LLMs). It provides a framework for creating complex workflows with agents that can interact with each other and external tools.
LangGraph uses a state machine approach where:
- Nodes represent computational steps or agent actions
- Edges control flow between nodes
- State persists across interactions and is passed between nodes
- `StateGraph`: Defines the structure of the application
- `START` and `END` nodes: Special nodes marking entry and exit points
- State schema: Defined using Pydantic's `BaseModel` or `TypedDict`
- Node functions: Transform state and return updates
- Edges: Connect nodes and can be conditional
```python
# Using TypedDict
from typing import TypedDict, Annotated

class State(TypedDict):
    messages: list
    assessment: dict

# Using Pydantic
from pydantic import BaseModel

class State(BaseModel):
    messages: list
    assessment: dict
```
Reducers control how state updates are processed when nodes modify the graph's state:
```python
from typing import Annotated, TypedDict
from langchain_core.messages import AnyMessage

def add(left, right):
    return left + right

class State(TypedDict):
    messages: Annotated[list[AnyMessage], add]  # Messages will be added, not replaced
    assessment: dict  # This will be replaced when updated
```
LangGraph provides a specialized `add_messages` reducer for handling message lists.
```python
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict):
    foo: str

def my_node(state: State):
    return {"foo": state["foo"] + "bar"}

builder = StateGraph(State)
builder.add_node("my_node", my_node)
builder.add_edge(START, "my_node")
builder.add_edge("my_node", END)
graph = builder.compile()
```
Conditional edges route execution based on the current state; the routing function returns the name of the next node (or `END`):

```python
def routing_function(state: State):
    return "node_b" if state["foo"] else END

builder = StateGraph(State)
builder.add_node("node_a", node_a)
builder.add_node("node_b", lambda state: state)
builder.add_edge(START, "node_a")
builder.add_conditional_edges("node_a", routing_function)
```
The prebuilt `create_react_agent` helper wires an LLM and a tool list into a ready-made agent graph:

```python
from langgraph.prebuilt import create_react_agent

def create_agent(llm, tools):
    # Create a ReAct-style agent
    agent = create_react_agent(
        llm,    # Language model
        tools,  # List of available tools
    )
    return agent
```
LangGraph allows checkpointing application state, enabling:
- Pausing and resuming execution
- Human-in-the-loop interactions
- Persistence across multiple interactions
Subgraphs provide ways to manage complex workflow states:
```python
# Define a subgraph with interrupt
subgraph = StateGraph(SubGraphState)
subgraph = subgraph.compile(interrupt_before=["specific_node"])

# Get subgraph state
state = graph.get_state(config, subgraphs=True)

# Update subgraph state
graph.update_state(subgraph_config, {"property": "value"})
LangGraph supports streaming of:
- Events (like tool call feedback)
- Tokens from LLM calls
- Intermediate state changes
```python
# Linear flow
builder.add_edge(START, "node_1")
builder.add_edge("node_1", "node_2")
builder.add_edge("node_2", END)

# Conditional flow
builder.add_conditional_edges(
    "node_1",
    lambda state: "node_2" if condition(state) else "node_3",
)
```
```python
# Define tools
tools = [tool1, tool2, tool3]

# Create agent with tools
agent = create_react_agent(llm, tools)

# Add agent to graph
builder.add_node("agent", agent)
```
- Node functions should be pure (no side effects)
- State should be immutable (return new state, don't modify existing state)
- Use reducers for complex state updates
- Consider checkpointing for long-running or interactive workflows
- Use subgraphs for complex nested workflows
This guide is based on:
- LangGraph 0.3.28
- langchain-core 0.3.51
- langchain-openai 0.3.12