This document provides visual representations of the key flows implemented in the LangGraph RAG System using Mermaid diagrams.
```mermaid
graph TD
    A[User Query] --> B[Agent/LLM]
    B -->|Tool needed| C[Tool Executor]
    B -->|No tools needed| J[Final Response]
    C --> D[Execute Tool]
    D --> E[Tool Result]
    E --> B
    %% Human Intervention Branch
    B -->|Needs human input| F[Human Interface]
    F --> G[Human Review]
    G -->|Continue| B
    G -->|Override| H[Human Override]
    H --> I[Modified Response]
    I --> J
```
This flow demonstrates how an agent can use tools to answer queries, with a provision for human intervention when needed. The agent can decide to use tools, complete the response directly, or request human input at critical decision points.
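The loop above can be sketched in plain Python. This is a minimal, hypothetical stand-in: `call_llm`, `run_tool`, and `ask_human` are stubs for the real LLM call, tool executor, and human interface, and the stub LLM never actually requests human input.

```python
from dataclasses import dataclass, field

@dataclass
class AgentState:
    query: str
    messages: list = field(default_factory=list)

def call_llm(state):
    # Stub: use a tool on the first step, then answer directly.
    if not state.messages:
        return {"action": "tool", "tool": "search", "args": state.query}
    return {"action": "final", "text": f"Answer to: {state.query}"}

def run_tool(name, args):
    return f"[{name} result for {args!r}]"

def ask_human(state):
    # Stub human interface: a reviewer continues or overrides.
    return {"decision": "continue"}

def run_agent(query, max_steps=5):
    state = AgentState(query=query)
    for _ in range(max_steps):
        step = call_llm(state)
        if step["action"] == "tool":
            result = run_tool(step["tool"], step["args"])
            state.messages.append(result)        # tool result loops back to the LLM
        elif step["action"] == "human":
            review = ask_human(state)
            if review["decision"] == "override":
                return review["response"]        # human override short-circuits the loop
        else:
            return step["text"]                  # no tools needed: final response
    return "Step limit reached"
```

In a real implementation the three stubs would be replaced by model and tool bindings, but the control flow (tool loop, human branch, final response) mirrors the diagram.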
```mermaid
graph TD
    A[User Query] --> B[Query Router]
    B -->|Vectorstore route| C[Vector Store Retrieval]
    B -->|Web search route| D[Web Search Tool]
    C --> E[Relevance Grader]
    D --> E
    E -->|Irrelevant docs| F[Modify Retrieval Strategy]
    F -->|Try alternative| B
    E -->|Relevant docs| G[Generate Response]
    G --> H[Hallucination Checker]
    H -->|Hallucinations detected| I[Regenerate with Constraints]
    I --> G
    H -->|No hallucinations| J[Final Response]
```
The adaptive RAG flow demonstrates how the system can intelligently route queries to the appropriate knowledge source, evaluate the relevance of retrieved documents, and check for hallucinations in generated responses, making adjustments when necessary.
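An illustrative sketch of that loop follows. The router, grader, generator, and hallucination check are all hypothetical token-overlap stubs, not the system's real components; only the control flow (route, grade, switch source, generate, verify) is the point.

```python
import re

VECTOR_DOCS = ["LangGraph builds stateful agent graphs.",
               "RAG grounds answers in documents."]
WEB_DOCS = ["Web result: recent LangGraph release notes."]

def tokens(text):
    return set(re.findall(r"\w+", text.lower()))

def retrieve(route, query):
    return WEB_DOCS if route == "web" else VECTOR_DOCS

def is_relevant(doc, query):
    # Stub grader: any shared word counts as relevant.
    return bool(tokens(doc) & tokens(query))

def generate(query, docs):
    return "Based on sources: " + docs[0]

def is_grounded(answer, docs):
    # Stub hallucination check: the answer must quote a retrieved document.
    return any(d in answer for d in docs)

def adaptive_rag(query, max_attempts=3):
    route = "web" if "latest" in query.lower() else "vector"    # stub router
    for _ in range(max_attempts):
        docs = [d for d in retrieve(route, query) if is_relevant(d, query)]
        if not docs:
            route = "web" if route == "vector" else "vector"    # modify retrieval strategy
            continue
        answer = generate(query, docs)
        if is_grounded(answer, docs):
            return answer                                       # passes hallucination check
        # otherwise loop and regenerate with constraints
    return "Unable to produce a grounded answer."
```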
```mermaid
graph TD
    A[User Query] --> B[Question Rewriter]
    B --> C[Rewritten Query]
    C --> D[Document Retrieval]
    D --> E[Document Filter]
    E -->|Insufficient docs| F[Web Search Tool]
    F --> G[Merged Documents]
    E -->|Sufficient docs| G
    G --> H[Response Generator]
    H --> I[Final Response]
```
The CRAG flow shows how queries are rewritten to optimize retrieval, documents are filtered for relevance, and web search can be incorporated when necessary to provide a more comprehensive context for response generation.
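A compact sketch of the CRAG path, with hypothetical stubs for the rewriter, corpus, filter, and web fallback:

```python
import re

CORPUS = ["CRAG filters retrieved documents before generation.",
          "Unrelated note about cooking."]

def rewrite(query):
    # Stub rewriter: normalize whitespace and casing.
    return " ".join(query.lower().split())

def retrieve(query):
    return CORPUS

def is_relevant(doc, query):
    # Stub filter: keep documents sharing at least one word with the query.
    return bool(set(re.findall(r"\w+", doc.lower())) &
                set(re.findall(r"\w+", query)))

def web_search(query):
    return [f"Web result for: {query}"]

def crag(query, min_docs=2):
    q = rewrite(query)                                    # question rewriter
    docs = [d for d in retrieve(q) if is_relevant(d, q)]  # document filter
    if len(docs) < min_docs:
        docs += web_search(q)                             # supplement via web search
    return "Answer drawing on %d document(s)." % len(docs)
```

The `min_docs` threshold stands in for whatever sufficiency test a real system would apply before deciding to merge in web results.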
```mermaid
graph TD
    A[User Query] --> B[Query Analyzer]
    B -->|Simple query| C[Direct RAG]
    B -->|Complex query| D[Multi-step Reasoning]
    B -->|Ambiguous query| E[Clarification Node]
    C --> F[Generate Response]
    D --> G[Sub-question Generator]
    G --> H[Parallel Retrievals]
    H --> I[Answer Synthesizer]
    I --> F
    E --> J[Generate Clarifying Questions]
    J --> K[User Clarification]
    K --> B
    F --> L[Final Response]
```
This advanced flow demonstrates how more complex orchestration can handle different query types through different paths, including breaking down complex questions, seeking clarification, and synthesizing information from multiple retrievals.
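The three paths can be sketched as a small dispatcher. The classification heuristics and helpers below are illustrative stubs; a real analyzer would use an LLM, and the sub-question answers would be retrieved in parallel rather than sequentially.

```python
def classify(query):
    if not query.endswith("?"):
        return "ambiguous"            # no question mark: ask for clarification
    if " and " in query:
        return "complex"              # multi-part question: decompose it
    return "simple"

def direct_rag(query):
    return f"answer({query})"

def decompose(query):
    # Stub sub-question generator: split on " and ".
    return [p.strip().rstrip("?") + "?" for p in query.split(" and ")]

def clarify(query):
    # Stub: pretend the user restated the query as a proper question.
    return query + "?"

def orchestrate(query):
    kind = classify(query)
    if kind == "simple":
        return direct_rag(query)
    if kind == "complex":
        partials = [direct_rag(s) for s in decompose(query)]  # parallel in practice
        return " | ".join(partials)                           # answer synthesizer
    return orchestrate(clarify(query))                        # re-route after clarification
```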
```mermaid
graph TD
    A[System Response] --> B[Evaluation Module]
    B --> C[Accuracy Assessment]
    B --> D[Relevance Assessment]
    B --> E[Completeness Assessment]
    C --> F[Aggregate Metrics]
    D --> F
    E --> F
    F --> G[Feedback Loop]
    G -->|Adjust retrieval parameters| H[Retrieval Module]
    G -->|Adjust generation parameters| I[Generation Module]
    G -->|Log for training data| J[Training Dataset]
```
The evaluation and feedback flow shows how system responses can be evaluated across multiple dimensions, with feedback mechanisms to improve retrieval and generation over time.
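A toy sketch of multi-dimensional scoring feeding a feedback decision. The token-overlap metrics and 0.5 thresholds are hypothetical placeholders for real evaluators (e.g. LLM-as-judge or labeled references).

```python
import re

def _tokens(text):
    return set(re.findall(r"\w+", text.lower()))

def evaluate(response, reference, query):
    res, ref, q = _tokens(response), _tokens(reference), _tokens(query)
    scores = {
        "accuracy":     len(res & ref) / max(len(res), 1),  # precision vs. reference
        "relevance":    len(res & q)   / max(len(q), 1),    # overlap with the query
        "completeness": len(res & ref) / max(len(ref), 1),  # recall vs. reference
    }
    aggregate = sum(scores.values()) / len(scores)          # aggregate metrics
    feedback = []
    if scores["relevance"] < 0.5:
        feedback.append("adjust_retrieval")                 # tune the retrieval module
    if scores["accuracy"] < 0.5:
        feedback.append("adjust_generation")                # tune the generation module
    return aggregate, scores, feedback
```

Each `(response, scores)` pair could also be appended to a training dataset, matching the third feedback edge in the diagram.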
```mermaid
graph TD
    A[User Interface] --> B[Query Processing]
    B --> C[Workflow Orchestrator]
    C --> D[Human-in-the-Loop]
    C --> E[Adaptive RAG]
    C --> F[Contextual RAG]
    D --> G[Response Generator]
    E --> G
    F --> G
    G --> H[Response to User]
    I[External Tools] --> D
    I --> E
    I --> F
    J[Vector Stores] --> E
    J --> F
    K[Human Operators] --> D
```
This overview diagram shows how the different components and flows interconnect in the complete system architecture.
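One way to sketch the orchestrator's dispatch over the three workflows; the workflow names, stubs, and routing heuristic here are hypothetical.

```python
def human_in_the_loop(query): return f"hitl:{query}"
def adaptive_rag(query):      return f"adaptive:{query}"
def contextual_rag(query):    return f"contextual:{query}"

WORKFLOWS = {
    "hitl": human_in_the_loop,
    "adaptive": adaptive_rag,
    "contextual": contextual_rag,
}

def pick_workflow(query):
    # Stub router: sensitive actions go to a human, everything else to adaptive RAG.
    return "hitl" if "delete" in query.lower() else "adaptive"

def handle(query):
    return WORKFLOWS[pick_workflow(query)](query)
```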