LangChain JS Docs llms.txt
# LangChain JavaScript
This guide provides explanations of the key concepts behind the LangChain framework and AI applications more broadly.
[Tutorials](https://js.langchain.com/docs/tutorials/): LLM should read this page when seeking an overview of LangChain's tutorials, looking for guidance on building LLM applications, or wanting to learn about the LangSmith tool. This page provides an introduction to various tutorials for building applications with LangChain components like chat models, vector stores, and agents. It also highlights tutorials for more complex orchestration with LangGraph and covers the LangSmith tool for tracing, monitoring, and evaluating LLM applications.
[How-to guides](https://js.langchain.com/docs/how_to/): LLM should read this page when building an application with LangChain, troubleshooting issues, or understanding key LangChain components and concepts. This page provides how-to guides covering installation, key features like structured output and tool calling, LangChain components like prompt templates and document loaders, usage scenarios like Q&A and chatbots, and integrations with LangGraph.js and LangSmith.
## Concepts
[Why LangChain?](https://js.langchain.com/docs/concepts/why_langchain/): LLM should read this page when building applications with LLMs, considering different LLM providers, or evaluating and testing LLM applications. Covers standardized interfaces for LLM components, orchestration of complex applications with LangGraph, and observability/evaluation with LangSmith.
[Architecture](https://js.langchain.com/docs/concepts/architecture/): LLM should read this page when wanting an overview of LangChain's architecture, wanting to understand the different components and packages, or needing to decide which packages are relevant for their use case. Provides an overview of the core LangChain packages (@langchain/core, langchain, integration packages, @langchain/community, @langchain/langgraph) and LangSmith, describing their purposes and interdependencies.
[Chat models](https://js.langchain.com/docs/concepts/chat_models/): LLM should read this page when developing applications or systems that use chat models, integrating chat models with external tools or data sources, or structuring chat model outputs in a specific format. This page covers key concepts related to chat models, including their interfaces, features like tool calling and structured outputs, integrations available, and advanced topics like caching and context windows.
[Messages](https://js.langchain.com/docs/concepts/messages/): LLM should read this page when needing to understand chat model messaging, wanting to send or receive messages to/from chat models, or looking to work with different chat model providers. Explains the structure and components of messages used with chat models, including roles, content, and metadata. Covers LangChain's unified message format and different message types like HumanMessage, AIMessage, and ToolMessage.
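A minimal sketch of the unified message format in practice (the model name and prompt are illustrative assumptions, not taken from the page above):
```typescript
// Sketch of LangChain's standardized message classes.
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage, SystemMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// The same message classes work across chat model providers.
const response = await model.invoke([
  new SystemMessage("You are a terse assistant."),
  new HumanMessage("What is 2 + 2?"),
]);
console.log(response.content); // the response is an AIMessage
```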
[Chat history](https://js.langchain.com/docs/concepts/chat_history/): LLM should read this page when learning about chat history in LangChain, managing chat history for context windows, and understanding conversation patterns. This page covers the concept of chat history, which is a record of the conversation between a user and a chat model. It explains different conversation patterns, provides guidelines for managing chat history, and lists related resources.
[Tools](https://js.langchain.com/docs/concepts/tools/): LLM should read this page when seeking to understand LangChain's tool abstractions, creating custom tools, or integrating tools into applications. This page covers the concept of tools in LangChain, how to create and configure tools, use them directly or with models supporting tool calling, handle tool artifacts, and access related resources like toolkits.
[Tool calling](https://js.langchain.com/docs/concepts/tool_calling/): LLM should read this page when developing applications that need to call external tools or APIs, integrating tool calling capabilities into language models, or understanding the concept of tool calling. Overview of tool calling in LangChain, including key concepts, recommended usage, tool creation, tool binding, tool calling by the model, tool execution, and best practices.
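The create/bind/call flow can be sketched roughly as follows; the tool, schema, and model name are illustrative assumptions:
```typescript
// Hedged sketch of tool creation, binding, and model-generated tool calls.
import { tool } from "@langchain/core/tools";
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

// Define a tool: a function plus a schema the model can target.
const multiply = tool(async ({ a, b }) => String(a * b), {
  name: "multiply",
  description: "Multiply two numbers",
  schema: z.object({ a: z.number(), b: z.number() }),
});

// Bind the tool so the model knows its input schema.
const model = new ChatOpenAI({ model: "gpt-4o-mini" }).bindTools([multiply]);

// The model responds with structured tool calls instead of plain text.
const aiMessage = await model.invoke("What is 6 times 7?");
console.log(aiMessage.tool_calls); // e.g. [{ name: "multiply", args: { a: 6, b: 7 }, ... }]
```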
[Structured outputs](https://js.langchain.com/docs/concepts/structured_outputs/): LLM should read this page when needing to return structured output, needing to parse JSON output, or wanting to use LangChain's structured output helper. Covers defining output schemas, instructing models to conform to schemas, tools for structured output including tool calling and JSON mode, and LangChain's .withStructuredOutput() helper.
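A short sketch of the `.withStructuredOutput()` helper with a Zod schema; the schema and model name are illustrative assumptions:
```typescript
// Hedged sketch of structured output via a Zod schema.
import { ChatOpenAI } from "@langchain/openai";
import { z } from "zod";

const jokeSchema = z.object({
  setup: z.string().describe("The setup of the joke"),
  punchline: z.string().describe("The punchline of the joke"),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const structuredModel = model.withStructuredOutput(jokeSchema);

// The result is a parsed object matching the schema, not a raw message.
const joke = await structuredModel.invoke("Tell me a joke about cats");
console.log(joke.setup, joke.punchline);
```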
[Memory](https://langchain-ai.github.io/langgraphjs/concepts/memory/): LLM should read this page when building a memory component for an AI agent, implementing short-term or long-term memory, or managing conversation history. The page covers memory concepts including short-term conversational memory and long-term persistent memory, techniques for managing long message histories, writing and updating memory profiles or collections, and representing memories as instructions or examples.
[Multimodality](https://js.langchain.com/docs/concepts/multimodality/): LLM should read this page when working with multimodal data like text, audio, images, or video, or building multimodal applications or components. Explains multimodality and how it can appear in various LangChain components like chat models, embedding models, and vector stores. Discusses current capabilities and limitations around multimodal inputs, outputs, and processing.
[Runnables](https://js.langchain.com/docs/concepts/runnables/): LLM should read this page when trying to understand the Runnable interface in LangChain.js, invoking Runnables in various ways, or composing Runnables together. Provides an overview of the Runnable interface, its methods for invoking and streaming, input/output types, and RunnableConfig for setting runtime options. Covers creating custom Runnables from functions.
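The three core calling conventions of the Runnable interface, as a minimal sketch (model name and prompts are illustrative):
```typescript
// Hedged sketch of invoke, batch, and stream on any Runnable.
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// invoke: one input, one output.
const single = await model.invoke("Hello!");

// batch: several inputs processed in parallel.
const several = await model.batch(["Hi", "Bonjour"]);

// stream: async-iterate over output chunks as they are produced.
for await (const chunk of await model.stream("Tell me a short story")) {
  process.stdout.write(String(chunk.content));
}
```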
[Streaming](https://js.langchain.com/docs/concepts/streaming/): LLM should read this page when building applications with streaming data, integrating streaming into LLM workflows, or understanding LangChain's streaming capabilities. This page covers the importance of streaming for improving UX in LLM applications, what data to stream (LLM outputs, pipeline progress, custom data), LangChain's streaming APIs (stream, streamEvents), writing custom data to the stream, and auto-streaming chat models.
[LangChain Expression Language (LCEL)](https://js.langchain.com/docs/concepts/lcel/): LLM should read this page when explaining the LangChain Expression Language (LCEL), composing custom chains using LCEL, or migrating from legacy LangChain chains. The page covers the benefits of LCEL, how to decide whether to use LCEL or LangGraph, LCEL's composition primitives like RunnableSequence and RunnableParallel, the composition syntax with the pipe method, and a note on migrating from legacy chains.
[Document loaders](https://js.langchain.com/docs/concepts/document_loaders/): LLM should read this page when needing to load data from various sources, needing to understand LangChain's document loader interface, or looking for integrations with specific data sources. Document loaders are designed to load data from different sources like files, web pages, databases, etc. The page covers the available integrations, the document loader interface, and related resources.
[Retrieval](https://js.langchain.com/docs/concepts/retrieval/): LLM should read this page when developing a retrieval system for unstructured or structured data, building an application that needs to interface with vector stores or databases, or integrating natural language queries with specialized retrieval systems. Covers retrieval concepts like query analysis (rewriting and construction), retrieval systems (vector indexes, relational/graph databases), and LangChain's retriever abstraction for unified querying across different data sources.
[Text splitters](https://js.langchain.com/docs/concepts/text_splitters/): LLM should read this page when developing applications involving long documents, retrieving relevant information, or summarizing text. Introduces text splitters, why text splitting is useful, and different approaches including length-based, text-structure based, document-structure based, and semantic meaning based splitting.
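A minimal sketch of length-based splitting; the chunk sizes are illustrative assumptions:
```typescript
// Hedged sketch of RecursiveCharacterTextSplitter.
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 500,   // maximum characters per chunk
  chunkOverlap: 50, // overlap preserves context across chunk boundaries
});

const docs = await splitter.createDocuments([
  "A long document to be split into smaller, retrievable chunks...",
]);
console.log(docs.map((d) => d.pageContent));
```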
[Embedding models](https://js.langchain.com/docs/concepts/embedding_models/): LLM should read this page when determining how to embed text for semantic similarity tasks, selecting an embedding model for a retrieval system, or measuring document similarity through embeddings. Covers embedding text into vector representations that capture meaning, selecting and using embedding models from different providers, and measuring vector similarity through mathematical operations like cosine similarity.
[Vector stores](https://js.langchain.com/docs/concepts/vectorstores/): LLM should read this page when needing to index and retrieve information based on semantic similarity, needing to work with vector embeddings of data, or needing to understand how vector stores fit into the LangChain architecture. Vector stores are specialized data stores that enable indexing and retrieving information based on vector representations called embeddings. The page covers the vector store interface, initialization, adding/deleting documents, similarity search, metadata filtering, and advanced retrieval techniques.
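A minimal sketch of the vector store interface using the in-memory store; the embeddings provider and texts are illustrative assumptions:
```typescript
// Hedged sketch: index texts, then retrieve by semantic similarity.
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const store = await MemoryVectorStore.fromTexts(
  ["LangChain helps build LLM apps", "Cats sleep most of the day"],
  [{ source: "docs" }, { source: "trivia" }],
  new OpenAIEmbeddings()
);

// Return the single most semantically similar document.
const results = await store.similaritySearch("frameworks for LLM applications", 1);
console.log(results[0].pageContent);
```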
[Retrievers](https://js.langchain.com/docs/concepts/retrievers/): LLM should read this page when: 1) developing a retrieval component, 2) integrating with different retrieval systems, or 3) understanding advanced retrieval patterns. The page covers the concept of retrievers in LangChain, their interface, common types (search APIs, databases, lexical search, vector stores), and advanced patterns like ensembling and source document retention.
[Retrieval augmented generation (RAG)](https://js.langchain.com/docs/concepts/rag/): LLM should read this page when building a question answering system with external data, incorporating up-to-date or domain-specific knowledge, or addressing hallucination issues. Explains retrieval augmented generation (RAG), a technique that combines language models with external knowledge bases to enhance model capabilities, covering key concepts of retrieval systems and adding retrieved data to model prompts.
[Agents](https://js.langchain.com/docs/concepts/agents/): LLM should read this page when creating an agent system, transitioning from AgentExecutor to LangGraph, or understanding LangChain's approach to agents. This page introduces the concept of agents in LangChain, which enable language models to take actions beyond text output. It recommends using LangGraph for building agents and provides resources on LangGraph agent architectures and pre-built agents. The page also discusses the legacy AgentExecutor concept and guides for migrating from it to LangGraph.
[Prompt templates](https://js.langchain.com/docs/concepts/prompt_templates/): LLM should read this page when building a language model application that needs to construct prompts, utilize conditional prompting, or insert message history into prompts. This page covers prompt templates, which help translate user input and parameters into instructions for language models by formatting strings, messages, and placeholders.
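A short sketch of a chat prompt template with a history placeholder; the variable names are illustrative assumptions:
```typescript
// Hedged sketch of ChatPromptTemplate with a message placeholder.
import { ChatPromptTemplate } from "@langchain/core/prompts";

const prompt = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant named {name}."],
  ["placeholder", "{chat_history}"], // expands to prior messages, if any
  ["human", "{input}"],
]);

const messages = await prompt.formatMessages({
  name: "Polly",
  chat_history: [],
  input: "What is your name?",
});
console.log(messages); // [SystemMessage, HumanMessage]
```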
[Output parsers](https://js.langchain.com/docs/concepts/output_parsers/): LLM should read this page when learning how to parse unstructured LLM outputs into structured data formats, wanting to fix/correct errors in LLM outputs, or needing to handle structured data like JSON/XML/CSV. This page covers different types of output parsers in LangChain for parsing model outputs into structured formats like JSON, XML, and CSV, as well as parsers for datetime values, fixing outputs, and returning generic structured data.
[Few-shot prompting](https://js.langchain.com/docs/concepts/few_shot_prompting/): LLM should read this page when it needs to understand few-shot prompting, determine how to generate and format examples, or decide how many examples to include. Covers how to generate examples, the number of examples to include, how to select examples, and how to format examples for few-shot prompting with chat models.
[Example selectors](https://js.langchain.com/docs/concepts/example_selectors/): LLM should read this page when it needs to use few-shot prompting examples with chat models, or needs to dynamically select and format examples for prompts. Example selectors are classes responsible for selecting and formatting examples into prompts for few-shot prompting with language models.
[Callbacks](https://js.langchain.com/docs/concepts/callbacks/): LLM should read this page when building applications that require logging, monitoring, or streaming data generated during the application's execution. The page explains how LangChain's callback system allows hooking into various stages of an application by subscribing to events triggered during execution, and provides details on available events, callback handlers, and how to pass callbacks to modules.
[Tracing](https://js.langchain.com/docs/concepts/tracing/): LLM should read this page when seeking information on tracing in LangChain, understanding how traces work, or diagnosing issues in chains and agents. Tracing provides observability into chains and agents by capturing a series of runs or steps from input to output, including calls to models, retrievers, tools, and sub-chains.
[Evaluation](https://js.langchain.com/docs/concepts/evaluation/): LLM should read this page when evaluating the performance of an LLM-powered application, tracking results over time, or defining metrics for evaluation. This page discusses the importance of evaluating LLM applications and how LangSmith can help with creating datasets, defining metrics, running evaluations, and tracking results over time.
## Glossary
[#aimessagechunk](https://js.langchain.com/docs/concepts/messages/#aimessagechunk): LLM should read this page when understanding the structure of messages in chat models, working with chat history, or handling different types of message content. This page covers the concept of messages in chat models, detailing the different types of messages (e.g., user, assistant, system), their structure, and how they are used to manage chat history and handle various types of content like text and multimodal data.
[#aimessage](https://js.langchain.com/docs/concepts/messages/#aimessage): LLM should read this page when needing an overview of messages in LangChain, when working with chat models, and when implementing conversation flows. Messages are the communication units used by chat models; they represent input, output, context, and metadata. The page covers message roles, content types, conversation structure, LangChain message classes, and OpenAI message formats.
[StructuredTool | LangChain.js](https://api.js.langchain.com/classes/_langchain_core.tools.StructuredTool.html/): LLM should read this page when trying to understand how to use structured tools in the LangChain.js library or when integrating structured tools into an application. This page describes the StructuredTool class, which is an abstract base class for tools that accept input of any shape defined by a Zod schema. It provides information on the class properties, methods, and how to create and use structured tools.
[Runnables](https://js.langchain.com/docs/concepts/runnables/): LLM should read this page when trying to understand the Runnable interface in LangChain.js, invoking Runnables in various ways, or composing Runnables together. Provides an overview of the Runnable interface, its methods for invoking and streaming, input/output types, and RunnableConfig for setting runtime options. Covers creating custom Runnables from functions.
[#tool-binding](https://js.langchain.com/docs/concepts/tool_calling/#tool-binding): LLM should read this page when building applications that require calling external tools or APIs, integrating language models with other systems, or leveraging structured data inputs/outputs. Summarizes the concept of tool calling, which allows language models to directly interact with systems by adhering to their input schemas. Covers creating tools, binding them to models, invoking tool calls, and executing tools.
[#caching](https://js.langchain.com/docs/concepts/chat_models/#caching): LLM should read this page when developing applications using large language models, integrating with chat models, or working with structured model outputs. Covers chat models in LangChain: overview, key features, model integrations, interface, tool calling, structured outputs, multimodality, context windows, caching, and related resources.
[#context-window](https://js.langchain.com/docs/concepts/chat_models/#context-window): LLM should read this page when building applications using chat models, incorporating tool calling or structured outputs, or working with multimodal data. Overview of chat models, key features, integrations, interface, parameters, tool calling, structured outputs, multimodality, context window, caching, and related resources.
[#conversation-patterns](https://js.langchain.com/docs/concepts/chat_history/#conversation-patterns): LLM should read this page when needing an overview of chat history in LangChain, understanding how to manage chat history effectively, and learning about conversation patterns involving users, assistants, and tools. This page discusses the concept of chat history - a record of the conversation between the user and chat model, its importance in maintaining context, managing chat history to avoid exceeding the context window, guidelines for preserving correct conversation structure, and related resources for implementing memory using chat history.
[Document | LangChain.js](https://api.js.langchain.com/classes/_langchain_core.documents.Document.html/): LLM should read this page when needing to interact with documents in LangChain.js, needing to understand the Document class, or needing to work with document metadata. This page defines the Document class, which represents a document with page content, metadata, and an optional ID. It implements the DocumentInterface and DocumentInput interfaces.
[Embedding models](https://js.langchain.com/docs/concepts/embedding_models/): LLM should read this page when determining how to embed text for semantic similarity tasks, selecting an embedding model for a retrieval system, or measuring document similarity through embeddings. Covers embedding text into vector representations that capture meaning, selecting and using embedding models from different providers, and measuring vector similarity through mathematical operations like cosine similarity.
[#humanmessage](https://js.langchain.com/docs/concepts/messages/#humanmessage): LLM should read this page when learning about chat model interactions, understanding the structure of messages, or working with LangChain's message types. This page covers the concept of messages in LangChain, detailing the components of messages, conversation structure, LangChain's standardized message types (SystemMessage, HumanMessage, AIMessage, etc.), and how they relate to OpenAI's message format.
[#input-and-output-types](https://js.langchain.com/docs/concepts/runnables/#input-and-output-types): LLM should read this page when needing to invoke, batch, or stream LangChain components, and when working with the RunnableConfig. The page explains the Runnable interface foundational to LangChain components, its input/output types, RunnableConfig properties, streaming APIs, creating custom Runnables, and optimized parallel execution.
[#integration-packages](https://js.langchain.com/docs/concepts/architecture/#integration-packages): LLM should read this page when needing an overview of the LangChain architecture, understanding the different packages and their purposes, or evaluating if LangChain is a good fit for their use case. Provides an overview of the core LangChain packages (@langchain/core, langchain, integration packages, @langchain/community, @langchain/langgraph) as well as LangSmith, explaining the role and purpose of each package in the LangChain ecosystem.
[#json-mode](https://js.langchain.com/docs/concepts/structured_outputs/#json-mode): LLM should read this page when needing to return structured output from a model, requiring models to conform to a specified schema, or using LangChain's built-in structured output functionality. Covers defining output schemas, instructing models to produce structured output conforming to schemas via tool calling or JSON mode, and using LangChain's withStructuredOutput method to streamline the process.
[#langchaincommunity](https://js.langchain.com/docs/concepts/architecture/#langchaincommunity): LLM should read this page when needing an overview of LangChain's architecture, understanding the different component packages, or identifying where specific integrations are located. The page outlines LangChain's core packages, integration packages, community packages, LangGraph orchestration framework, and LangSmith developer platform.
[#langchaincore](https://js.langchain.com/docs/concepts/architecture/#langchaincore): LLM should read this page when looking for an overview of LangChain's architecture, when needing to understand the different packages that make up LangChain, or when seeking information about the core packages and components. This page provides an overview of the hierarchical organization of LangChain, outlining its core packages like @langchain/core, langchain, integration packages, @langchain/community, @langchain/langgraph, and LangSmith, and describing their roles within the framework.
[#langchain](https://js.langchain.com/docs/concepts/architecture/#langchain): LLM should read this page when learning about the LangChain architecture, exploring different components and packages, or investigating integrations and ecosystem tools. The page provides an overview of the LangChain framework's hierarchical organization, explaining the roles of @langchain/core, langchain, integration packages, @langchain/community, @langchain/langgraph, and LangSmith.
[#langchainlanggraph](https://js.langchain.com/docs/concepts/architecture/#langchainlanggraph): LLM should read this page when seeking an overview of LangChain's architecture, evaluating whether to use LangChain, or planning to contribute to LangChain. Outlines the different packages that make up the LangChain framework, including @langchain/core, langchain, integration packages, @langchain/community, @langchain/langgraph, and LangSmith, along with their respective roles and responsibilities.
[#managing-chat-history](https://js.langchain.com/docs/concepts/chat_history/#managing-chat-history): LLM should read this page when managing the chat history in a conversational AI system, implementing memory for chat models, and understanding the structure of chat conversations. This page explains the concept of chat history, which is a record of the conversation between the user and a chat model. It covers conversation patterns, guidelines for managing chat history, and related resources for trimming messages and implementing memory.
[#openai-format](https://js.langchain.com/docs/concepts/messages/#openai-format): LLM should read this page when understanding message formats, working with chat models, or managing chat history. Explains different message types (SystemMessage, HumanMessage, AIMessage, ToolMessage) and their properties. Covers conversation structure, OpenAI formats, and how LangChain handles messages across providers.
[#propagation-of-runnableconfig](https://js.langchain.com/docs/concepts/runnables/#propagation-of-runnableconfig): LLM should read this page when needing to understand the Runnable interface, invoking and streaming Runnables, and configuring Runnables. The Runnable interface defines a standard way to invoke, batch, stream, inspect, and compose components like language models, retrievers, and tools. It covers input/output types, RunnableConfig for setting options like callbacks, creating Runnables from functions, and related concepts.
[#removemessage](https://js.langchain.com/docs/concepts/messages/#removemessage): LLM should read this page when introducing chat models into an application, designing chat flows, or integrating with chat APIs. This page covers the structure of messages exchanged with chat models, including roles, content, and metadata. It explains LangChain's standardized message types and how they map to common chat model APIs.
[#role](https://js.langchain.com/docs/concepts/messages/#role): LLM should read this page when: 1) developing chat applications with LangChain, 2) understanding the structure and components of chat messages, or 3) needing to work with different chat model providers. Messages are the unit of communication in chat models. This page covers the structure of messages, different message types (HumanMessage, AIMessage, ToolMessage, etc.), the role and content within messages, and how LangChain standardizes messages across providers.
[#runnableconfig](https://js.langchain.com/docs/concepts/runnables/#runnableconfig): LLM should read this page when learning about the Runnable interface in LangChain, creating custom Runnables, or configuring Runnables at runtime. Explains the Runnable interface, batching/parallelization, streaming APIs, input/output types, RunnableConfig options (runName, tags, metadata, callbacks, etc.), and creating Runnables from functions.
[#standard-parameters](https://js.langchain.com/docs/concepts/chat_models/#standard-parameters): LLM should read this page when developing applications using chat models, integrating chat models with other systems, or understanding the capabilities and limitations of chat models. Explains what chat models are, their features, integrations, interfaces, tool calling, structured outputs, multimodality, context windows, caching, and related concepts.
[Streaming](https://js.langchain.com/docs/concepts/streaming/): LLM should read this page when building applications with streaming data, integrating streaming into LLM workflows, or understanding LangChain's streaming capabilities. This page covers the importance of streaming for improving UX in LLM applications, what data to stream (LLM outputs, pipeline progress, custom data), LangChain's streaming APIs (stream, streamEvents), writing custom data to the stream, and auto-streaming chat models.
[#how-tokens-work-in-language-models](https://js.langchain.com/docs/concepts/tokens/#how-tokens-work-in-language-models): LLM should read this page when needing to understand what tokens are, how tokens work in language models, and details about token count correspondence. This page explains that tokens are the basic units processed by language models, how models tokenize input and generate token output, that tokens can represent multimodal data, why tokens are used instead of characters, and provides guidance on how tokens correspond to text length.
[Tokens](https://js.langchain.com/docs/concepts/tokens/): LLM should read this page when trying to understand how language models process and generate text, when working with token counts or quotas, and when handling multimodal inputs like images or audio. The page explains what tokens are, how language models use them to process text and other sequential data, why tokens are used instead of characters, how tokens correspond to text, and that tokens can represent multimodal data beyond just text.
[#tool-artifacts](https://js.langchain.com/docs/concepts/tools/#tool-artifacts): LLM should read this page when building applications that use tools with language models, adding tool-calling capabilities to models, or integrating external tools/functions with language models. Explains the concept of tools in LangChain, how to create and configure tools, pass tool outputs to models, best practices for designing tools, and toolkits for grouping related tools.
[Tools](https://js.langchain.com/docs/concepts/tools/): LLM should read this page when seeking to understand LangChain's tool abstractions, creating custom tools, or integrating tools into applications. This page covers the concept of tools in LangChain, how to create and configure tools, use them directly or with models supporting tool calling, handle tool artifacts, and access related resources like toolkits.
[#toolkits](https://js.langchain.com/docs/concepts/tools/#toolkits): LLM should read this page when building applications that involve calling external tools/APIs, integrating with external systems or services, or using chat models to delegate tasks to external utilities. This page covers the 'Tool' abstraction in LangChain, which allows associating functions with schemas to enable chat models to invoke external capabilities. It explains key concepts, how to create custom tools, handle tool outputs/artifacts, and best practices.
[#toolmessage](https://js.langchain.com/docs/concepts/messages/#toolmessage): LLM should read this page when needing to understand how messages work in chat models, when needing to work with messages in a unified format across different chat model providers, and when needing to understand the structure and content of conversations with chat models. Messages are the unit of communication in chat models, representing input/output and metadata; LangChain provides a unified message format and classes like HumanMessage, AIMessage, and ToolMessage to work with messages in a standardized way across different chat model providers.
[Vector stores](https://js.langchain.com/docs/concepts/vectorstores/): LLM should read this page when needing to index and retrieve information based on semantic similarity, needing to work with vector embeddings of data, or needing to understand how vector stores fit into the LangChain architecture. Vector stores are specialized data stores that enable indexing and retrieving information based on vector representations called embeddings. The page covers the vector store interface, initialization, adding/deleting documents, similarity search, metadata filtering, and advanced retrieval techniques.
[#structured-output-method](https://js.langchain.com/docs/concepts/structured_outputs/#structured-output-method): LLM should read this page when needing to return structured data from a model, needing to define a schema for model output, or needing to parse model output into a structured format. Explains the concept of structured output, providing schema definitions, methods for instructing models to return structured output (tool calling, JSON mode), and LangChain's streamlined `withStructuredOutput` method.
## Integrations
### Chat Models
Key chat model integrations include:
[BedrockChat](https://js.langchain.com/docs/integrations/chat/bedrock/): LLM should read this page when needing an overview of the Bedrock chat model integration, setting up credentials, or using tool calling with the model. Covers setup steps like installing packages and configuring AWS credentials, instantiating BedrockChat with different models, invoking the model, chaining with prompt templates, and enabling tool calling functionality.
[ChatBedrockConverse](https://js.langchain.com/docs/integrations/chat/bedrock_converse/): LLM should read this page when configuring Amazon Bedrock Converse models for use with LangChain, integrating chat-based language models with tool calling capabilities, or checking model features and setup requirements. Provides an overview of Amazon Bedrock Converse chat models, covering integration details, supported features, setup instructions with installation steps, instantiation and invocation examples, chaining with prompts, tool calling support, and links to the API reference.
[ChatAnthropic](https://js.langchain.com/docs/integrations/chat/anthropic/): LLM should read this page when integrating with Anthropic's ChatAnthropic model, using features like citations or prompt caching, or configuring custom clients. Covers setup, instantiation, chaining, content blocks, caching, custom headers/clients, and citation handling for the ChatAnthropic model.
[ChatCloudflareWorkersAI](https://js.langchain.com/docs/integrations/chat/cloudflare_workersai/): LLM should read this page when integrating with Cloudflare Workers AI, using Cloudflare Workers AI for chat applications, or working with streaming chat models. This page provides instructions for setting up and using the ChatCloudflareWorkersAI integration, including obtaining credentials, installation, instantiation, invocation examples, and information on supported features like multimodal inputs and token-level streaming.
[ChatCohere](https://js.langchain.com/docs/integrations/chat/cohere/): LLM should read this page when needing to use Cohere's chat models for natural language tasks, needing to perform web search as part of the model's response generation, or needing to leverage Cohere's retrieval-augmented generation (RAG) capabilities. This page covers setting up and using the ChatCohere integration in LangChain.js, including installation, instantiation, invoking the model, chaining with prompts, using RAG, and connecting to web search connectors. It also provides an overview of the model's features and links to the API reference.
[ChatFireworks](https://js.langchain.com/docs/integrations/chat/fireworks/): LLM should read this page when looking to integrate with Fireworks AI models, exploring chat model options, or implementing a language translation task. This page provides instructions for setting up and using the ChatFireworks chat model integration, including obtaining credentials, installation, instantiation, invocation examples, chaining with prompts, and an overview of supported features.
[ChatGoogleGenerativeAI](https://js.langchain.com/docs/integrations/chat/google_generativeai/): LLM should read this page when working with Google AI's generative models, needing to set up and use the ChatGoogleGenerativeAI model, or implementing features like tool calling or context caching. Provides setup instructions, code examples, and documentation for using Google AI's chat models (like Gemini) through the ChatGoogleGenerativeAI class in LangChain's @langchain/google-genai integration, covering features such as tool calling, code execution, context caching, and safety settings.
[ChatVertexAI](https://js.langchain.com/docs/integrations/chat/google_vertex_ai/): LLM should read this page when needing to use Google's Vertex AI LLM models, integrate search and data retrieval with Vertex AI models, or leverage context caching functionality with Vertex AI models. Provides an overview, setup instructions, and code examples for instantiating ChatVertexAI, making invocations, integrating search/retrieval tools, enabling context caching, and chaining with prompts.
[ChatGroq](https://js.langchain.com/docs/integrations/chat/groq/): LLM should read this page when learning how to use the ChatGroq chat model in LangChain, looking for details on ChatGroq's capabilities, or needing instructions on setting up and using ChatGroq. This page provides an overview of the LangChain integration with Groq's chat models, including setup instructions, code examples for instantiating and invoking the model, and details on ChatGroq's features like structured JSON output and token streaming.
[ChatMistralAI](https://js.langchain.com/docs/integrations/chat/mistral/): LLM should read this page when needing to integrate with MistralAI chat models, use tool calling with MistralAI, or customize MistralAI with hooks. Provides an overview of the MistralAI chat model integration, covering setup/installation, instantiation examples, using tool calling, and customizing with hooks like beforeRequest/requestError/response.
[ChatOllama](https://js.langchain.com/docs/integrations/chat/ollama/): LLM should read this page when 1) using Ollama LLMs, 2) performing multimodal tasks, or 3) working with structured outputs. This page provides an overview of integrating LangChain with Ollama, an open-source platform for running large language models locally. It covers setup, instantiation, invoking the model, chaining, using tools, structured outputs, JSON mode, and multimodal capabilities.
[ChatOpenAI](https://js.langchain.com/docs/integrations/chat/openai/): LLM should read this page when integrating with OpenAI's language models, using OpenAI tools and models, or enabling specific features like structured output, tool calling, or audio input/output. Provides setup instructions, code examples, and guides for using OpenAI models in LangChain, covering key features like tool calling, structured output, built-in tools (web/file search), predicted outputs, audio input/output, and model-specific configurations.
[ChatTogetherAI](https://js.langchain.com/docs/integrations/chat/togetherai/): LLM should read this page when seeking to use the Together AI LLM service, wanting to use a wide range of open source models, or wanting to leverage advanced LLM features like streaming. This page covers setup and usage of the ChatTogetherAI model class, which allows accessing over 50 open-source models from Together AI with advanced capabilities like streaming, tool calling, and structured output.
[ChatXAI](https://js.langchain.com/docs/integrations/chat/xai/): LLM should read this page when integrating with xAI language models, looking for chat models that support specific features like tool calling and streaming, or learning how to set up authentication for xAI models. This page covers integrating and using the ChatXAI language model from xAI, including setup/installation steps, model instantiation, making chat completions, chaining with prompts, and an overview of supported features like tool calling and streaming.
[Chat models](https://js.langchain.com/docs/integrations/chat/): LLM should read this page when needing to integrate chat or conversational models, or when building applications requiring multimodal support. Covers integrations and usage details for various chat/conversational AI models from providers like OpenAI, Anthropic, Google, Microsoft, and more.
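Because every integration implements the same chat model interface, switching providers is mostly a change of import and constructor. A minimal sketch (the model names are illustrative assumptions):
```typescript
// Hedged sketch: two providers behind the same interface.
import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";

const openaiModel = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const anthropicModel = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620",
  temperature: 0,
});

// Identical invocation regardless of provider.
console.log((await openaiModel.invoke("Say hi")).content);
console.log((await anthropicModel.invoke("Say hi")).content);
```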
## How-to guides
Here you'll find answers to “How do I…?” types of questions.
These guides are _goal-oriented_ and _concrete_; they're meant to help you complete a specific task.
[Concepts](https://js.langchain.com/docs/concepts/): LLM should read this page when developing applications using LangChain, when trying to understand key LangChain concepts, or when troubleshooting issues related to LangChain components. This page provides an overview and explanation of key concepts in LangChain, including chat models, retrieval, agents, tools, structured outputs, memory, multimodality, evaluation, and more.
[Tutorials](https://js.langchain.com/docs/tutorials/): LLM should read this page when seeking an overview of LangChain's tutorials, looking for guidance on building LLM applications, or wanting to learn about the LangSmith tool. This page provides an introduction to various tutorials for building applications with LangChain components like chat models, vector stores, and agents. It also highlights tutorials for more complex orchestration with LangGraph and covers the LangSmith tool for tracing, monitoring and evaluating LLM applications.
[LangChain.js](https://api.js.langchain.com/): LLM should read this page when looking for an open-source framework for building applications with LLMs, seeking documentation or guidance on contributing to LangChain.js, or understanding the relationship between LangChain.js and the Python LangChain package. Provides an overview of LangChain.js, a framework for developing applications powered by language models, including its key components, supported environments, documentation, contributing guidelines, and its relationship with the Python LangChain package.
### Installation
[Installation](https://js.langchain.com/docs/how_to/installation/): LLM should read this page when configuring LangChain for various environments, installing LangChain packages, or troubleshooting installation issues. This page covers supported environments, installation instructions for LangChain and its ecosystem packages, guidance on avoiding multiple versions of core dependencies, and instructions for loading the library in different environments.
### Key features
This highlights functionality that is core to using LangChain.
[Structured output](https://js.langchain.com/docs/how_to/structured_output/): LLM should read this page when: 1) Wanting a model to return structured output matching a schema, 2) Extracting data from text to insert into a database or downstream system, or 3) Learning different strategies for prompting models to output structured data. The page covers using the `.withStructuredOutput()` method with JSON Schema or Zod to have models return structured output, prompting techniques with output parsers like JsonOutputParser or custom parsing, and specifying output methods for models supporting multiple options.
[Tool calling](https://js.langchain.com/docs/how_to/tool_calling/): LLM should read this page when learning how to pass tools to chat models for the models to call, understanding what tool calls are, or learning how to bind model-specific tool formats. This page explains how to pass tools to chat models that support tool calling, describes the tool call data structure, and demonstrates binding model-specific tool formats.
[Streaming](https://js.langchain.com/docs/how_to/streaming/): LLM should read this page when building applications that need to stream responses from LLMs, parsers, or other components, and when implementing responsive user interfaces for language model applications. The page covers how to use the .stream() and .streamEvents() methods to stream output from LLMs, chains, prompts, parsers, and other components in LangChain, including filtering events and working with non-streaming components.
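A short sketch of `.streamEvents()` with event filtering over a small chain; the model name and event filter are illustrative assumptions:
```typescript
// Hedged sketch of streamEvents: keep only token chunks from the chat model.
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const chain = ChatPromptTemplate.fromTemplate("Write a haiku about {topic}")
  .pipe(new ChatOpenAI({ model: "gpt-4o-mini" }))
  .pipe(new StringOutputParser());

// streamEvents emits granular events from every step of the chain.
for await (const event of chain.streamEvents({ topic: "autumn" }, { version: "v2" })) {
  if (event.event === "on_chat_model_stream") {
    process.stdout.write(String(event.data.chunk.content ?? ""));
  }
}
```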
[Debugging](https://js.langchain.com/docs/how_to/debugging/): LLM should read this page when debugging LLM applications, when investigating the reasoning behind an LLM's response, or when trying to understand the internal steps of an LLM pipeline. This page discusses techniques for debugging LLM applications, including tracing, enabling verbose logging, and using custom callbacks for additional functionality.
### LangChain Expression Language (LCEL)
[Runnable | LangChain.js](https://api.js.langchain.com/classes/langchain_core.runnables.Runnable.html/): LLM should read this page when: 1) Learning about the core concepts and components of LangChain.js, 2) Understanding how to create and customize Runnables, or 3) Integrating custom event streaming into Runnables. The page describes the Runnable class in LangChain.js, which is an abstract base class for creating units of work that can be invoked, batched, streamed, and transformed. It provides documentation on the Runnable class's properties, methods, and inheritance hierarchy, as well as examples of creating custom Runnables and integrating event streaming.
[LCEL cheatsheet](https://js.langchain.com/docs/how_to/lcel_cheatsheet/): LLM should read this page when it needs a quick reference for key LangChain Expression Language (LCEL) primitives, when building applications that use LCEL, or when looking for code examples for common LCEL operations. Cheatsheet covering invoking runnables, batching/streaming runnables, composing runnables, parallel execution, handling failures, configuring execution, dynamic chaining, event streaming, output manipulation, and visualizing runnable graphs.
[Sequence](https://js.langchain.com/docs/how_to/sequence/): LLM should read this page when chaining runnables together, composing prompts and chains, or understanding input/output formatting for chaining. This page explains how to chain two or more runnables together into a sequence using the .pipe() method, and provides examples of chaining prompt templates, chat models, and output parsers, as well as combining chains and handling input/output formatting.
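A minimal sketch of `.pipe()` composing two chains, with a plain function reshaping output between them (prompts and model name are illustrative assumptions):
```typescript
// Hedged sketch of chaining runnables with .pipe().
import { ChatOpenAI } from "@langchain/openai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

const jokeChain = ChatPromptTemplate.fromTemplate("Tell a joke about {topic}")
  .pipe(model)
  .pipe(new StringOutputParser());

const critiqueChain = ChatPromptTemplate.fromTemplate("Is this joke funny? {joke}")
  .pipe(model)
  .pipe(new StringOutputParser());

// Plain functions are coerced into runnables when piped.
const composed = jokeChain
  .pipe((joke: string) => ({ joke }))
  .pipe(critiqueChain);

console.log(await composed.invoke({ topic: "compilers" }));
```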
[Streaming](https://js.langchain.com/docs/how_to/streaming/): LLM should read this page when building applications that need to stream responses from LLMs, parsers, or other components, and when implementing responsive user interfaces for language model applications. The page covers how to use the .stream() and .streamEvents() methods to stream output from LLMs, chains, prompts, parsers, and other components in LangChain, including filtering events and working with non-streaming components.
[Parallel](https://js.langchain.com/docs/how_to/parallel/): LLM should read this page when working with parallel processing, formatting inputs/outputs between chained operations, or parallelizing prompts or retrievals. This page covers how to use RunnableParallel (RunnableMap) to invoke multiple operations in parallel, as well as how to format inputs/outputs between chained operations using RunnableParallel for manipulation.
[Binding](https://js.langchain.com/docs/how_to/binding/): LLM should read this page when binding runtime arguments to a Runnable, setting stop sequences for LLM responses, or attaching OpenAI tools to an LLM. This page explains how to use the .bind() method to pass runtime arguments to a Runnable, including setting stop sequences for LLM responses and attaching OpenAI tools to an LLM model.
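A minimal sketch of `.bind()` attaching a stop sequence; the stop token and prompt are illustrative assumptions:
```typescript
// Hedged sketch of .bind(): attach runtime args that apply to every call.
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Every invocation of boundModel now stops generating at "THREE".
const boundModel = model.bind({ stop: ["THREE"] });

const res = await boundModel.invoke("Repeat after me: ONE TWO THREE FOUR");
console.log(res.content); // generation is cut off before "THREE"
```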
[Functions](https://js.langchain.com/docs/how_to/functions/): LLM should read this page when needing to run custom functions within LangChain, implementing custom streaming behavior, or coercing custom functions into runnables. Covers how to explicitly create runnables from custom functions using RunnableLambda, automatic coercion of functions into runnables in chains, passing run metadata to custom functions, and implementing streaming behavior with async generators.
[Passthrough](https://js.langchain.com/docs/how_to/passthrough/): LLM should read this page when needing to pass data through a chain of runnables, needing to format input data for a prompt, or when working with retrieval and formatting the context. This page explains how to use RunnablePassthrough to pass data through chains, often in conjunction with RunnableParallel, to properly format inputs for prompts. It provides an example using retrieval to get context and pass through the question for a prompt.
[Assign](https://js.langchain.com/docs/how_to/assign/): LLM should read this page when needing to pass data between steps in a chain, needing to accumulate data across multiple parallel steps, or needing to stream intermediate results. This page explains how to use RunnablePassthrough.assign() to add values to a chain's state dictionary, which is useful for formatting data flowing through chains. It covers using assign() with parallel and streaming examples.
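A minimal sketch of `RunnablePassthrough.assign()`; the input shape and computed key are illustrative assumptions:
```typescript
// Hedged sketch: assign() merges new computed keys into the input object
// while passing the rest through unchanged.
import { RunnablePassthrough } from "@langchain/core/runnables";

const chain = RunnablePassthrough.assign({
  // Each value receives the whole input object; functions are coerced to runnables.
  total: (input: { nums: number[] }) => input.nums.reduce((a, b) => a + b, 0),
});

console.log(await chain.invoke({ nums: [1, 2, 3] }));
// -> { nums: [1, 2, 3], total: 6 }
```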
[Message history](https://js.langchain.com/docs/how_to/message_history/): LLM should read this page when building chatbots, working with conversational AI, or developing applications that require tracking conversation history. This page explains how to add message history to LangChain chains, enabling multi-turn conversations and persisting state across different conversational threads.
[Routing](https://js.langchain.com/docs/how_to/routing/): LLM should read this page when routing execution within a chain, conditionally returning runnables from a RunnableLambda, or using a RunnableBranch. Covers how to do routing in LangChain Expression Language, allowing non-deterministic chains where the output of a previous step defines the next step, using custom functions or RunnableBranch with condition/runnable pairs.
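A minimal sketch of condition/runnable pairs with RunnableBranch; the routing logic and strings are illustrative assumptions:
```typescript
// Hedged sketch of RunnableBranch: the first matching condition wins,
// with a default runnable as the last element.
import { RunnableBranch } from "@langchain/core/runnables";

const branch = RunnableBranch.from<{ topic: string }, string>([
  [(x) => x.topic === "math", (x) => `Routing "${x.topic}" to the math chain`],
  [(x) => x.topic === "code", (x) => `Routing "${x.topic}" to the code chain`],
  (x) => `Routing "${x.topic}" to the general chain`, // default branch
]);

console.log(await branch.invoke({ topic: "math" }));
```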
[Fallbacks](https://js.langchain.com/docs/how_to/fallbacks/): LLM should read this page when handling API errors from LLMs, handling long inputs that exceed the context window, or falling back to better models for specific tasks. This page discusses how to set up fallback models or chains when the primary model or chain fails due to API errors, input length exceeding the context window, or inability to generate the desired output format.
[Cancel execution](https://js.langchain.com/docs/how_to/cancel_execution/): LLM should read this page when interrupting long-running chains, aborting agent execution, or canceling streaming responses. Explains how to use AbortController to cancel chain and stream execution, including code examples for both scenarios.
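A minimal sketch of cancellation via an AbortSignal in the call options; the timeout and prompt are illustrative assumptions:
```typescript
// Hedged sketch of canceling a run with AbortController.
import { ChatOpenAI } from "@langchain/openai";

const controller = new AbortController();
const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Abort after one second, e.g. because the user navigated away.
setTimeout(() => controller.abort(), 1000);

try {
  await model.invoke("Write a very long essay", { signal: controller.signal });
} catch (err) {
  console.log("Run was cancelled:", err);
}
```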
### Components
These are the core building blocks you can use when building applications.
#### Prompt templates
[Prompt templates](https://js.langchain.com/docs/concepts/prompt_templates/): LLM should read this page when building a language model application that needs to construct prompts, utilize conditional prompting, or insert message history into prompts. This page covers prompt templates, which help translate user input and parameters into instructions for language models by formatting strings, messages, and placeholders.
[Few-shot examples](https://js.langchain.com/docs/how_to/few_shot_examples/): LLM should read this page when needing to provide few-shot examples to guide an LLM's response, when using an example selector to dynamically select relevant examples, or when creating a FewShotPromptTemplate. This page explains how to create a formatter for few-shot examples, generate an example set, pass examples to a FewShotPromptTemplate, and use an example selector like SemanticSimilarityExampleSelector to dynamically select relevant examples based on similarity.
[Few-shot examples with chat models](https://js.langchain.com/docs/how_to/few_shot_examples_chat/): LLM should read this page when 1) using few-shot examples to prompt chat models or 2) selecting and formatting few-shot examples dynamically. This page explains how to provide chat models with example inputs and outputs (few-shot examples) to guide generation, covering both fixed examples and dynamically selecting examples based on input similarity.
[Partial prompt templates](https://js.langchain.com/docs/how_to/prompts_partial/): LLM should read this page when needing to partially format prompts with variables, or needing to partially format prompts with functions that return strings. This page explains how to partially format prompt templates by applying some but not all input variables, either with string values or functions that return string values.
[Prompt composition](https://js.langchain.com/docs/how_to/prompts_composition/): LLM should read this page when needing to compose prompts from multiple parts, wanting to reuse prompt components across prompts, or working with chat prompts. This page covers how to compose string prompts and chat prompts from multiple templates or messages, as well as how to use the PipelinePromptTemplate for more complex prompt composition.
#### Example selectors
[Example selectors](https://js.langchain.com/docs/concepts/example_selectors/): LLM should read this page when it needs to use few-shot prompting examples with chat models, or needs to dynamically select and format examples for prompts. Example selectors are classes responsible for selecting and formatting examples into prompts for few-shot prompting with language models.
[Example selectors](https://js.langchain.com/docs/how_to/example_selectors/): LLM should read this page when needing to select relevant examples for few-shot prompting, when building a question-answering system, or when developing a task that requires example-based learning. This page explains how to use example selectors in LangChain to choose relevant examples from a larger set for inclusion in few-shot prompts, covering custom implementations, built-in selectors based on similarity or length, and integration with prompts.
[Length-based example selectors](https://js.langchain.com/docs/how_to/example_selectors_length_based/): LLM should read this page when needing to select examples by length for a prompt, when having a limited context window size, or when concerned about exceeding the token limit. This page explains how to use the LengthBasedExampleSelector to dynamically select which examples to include in a prompt based on the length of the input, in order to avoid exceeding the total prompt length limit.
[Similarity-based example selectors](https://js.langchain.com/docs/how_to/example_selectors_similarity/): LLM should read this page when: 1) Selecting examples for few-shot prompting based on similarity to the input. 2) Using a pre-initialized vector store for example selection. 3) Applying custom filters or retrievers for example selection. This page covers how to use the SemanticSimilarityExampleSelector to select examples based on similarity to the input, including loading from an existing vector store, applying metadata filters, and using custom vector store retrievers.
[LangSmith example selectors](https://js.langchain.com/docs/how_to/example_selectors_langsmith/): LLM should read this page when looking to select relevant few-shot examples from a LangSmith dataset, or looking to use a LangSmith dataset for few-shot prompting a model. This page explains how to load and index a LangSmith dataset, query for similar examples from the dataset, and use those examples for few-shot prompting an LLM.
#### Chat models
[Chat models](https://js.langchain.com/docs/concepts/chat_models/): LLM should read this page when developing applications or systems that use chat models, integrating chat models with external tools or data sources, or structuring chat model outputs in a specific format. This page covers key concepts related to chat models, including their interfaces, features like tool calling and structured outputs, integrations available, and advanced topics like caching and context windows.
[Tool calling](https://js.langchain.com/docs/how_to/tool_calling/): LLM should read this page when learning how to pass tools to chat models for the models to call, understanding what tool calls are, or learning how to bind model-specific tool formats. This page explains how to pass tools to chat models that support tool calling, describes the tool call data structure, and demonstrates binding model-specific tool formats.
[Structured output](https://js.langchain.com/docs/how_to/structured_output/): LLM should read this page when: 1) Wanting a model to return structured output matching a schema, 2) Extracting data from text to insert into a database or downstream system, or 3) Learning different strategies for prompting models to output structured data. The page covers using the `.withStructuredOutput()` method with JSON Schema or Zod to have models return structured output, prompting techniques with output parsers like JsonOutputParser or custom parsing, and specifying output methods for models supporting multiple options.
[Chat model caching](https://js.langchain.com/docs/how_to/chat_model_caching/): LLM should read this page when needing to cache responses from chat models, setting up caching with different storage solutions, or understanding the benefits of caching. This page covers how to enable caching for chat models, both with an in-memory cache and various external caching solutions like Redis, Upstash Redis, Vercel KV, and Cloudflare KV, to improve performance and reduce costs by avoiding redundant API calls.
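A minimal sketch of the in-memory cache; the model name and prompt are illustrative assumptions:
```typescript
// Hedged sketch: an identical repeated prompt is served from the cache
// instead of triggering a second API call.
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini", cache: true });

console.time("first call");
await model.invoke("Tell me a joke"); // hits the API
console.timeEnd("first call");

console.time("second call");
await model.invoke("Tell me a joke"); // identical prompt, served from cache
console.timeEnd("second call");
```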
[](https://js.langchain.com/docs/how_to/custom_chat/): LLM should read this page when: 1) Needing to create a custom chat model wrapper 2) Wanting to integrate a custom chat model with LangChain's callback system 3) Needing to enable tracing for a custom chat model 'Covers how to extend SimpleChatModel or BaseChatModel classes to create a custom chat model wrapper with required methods like _call and _generate, optional streaming support, and enabling tracing by implementing the invocationParams method.' | |
[](https://js.langchain.com/docs/how_to/logprobs/): LLM should read this page when it needs to get token-level log probabilities for a generated response, it wants to get alternate potential generations at each step of a model's response, or it needs to understand how to access logprobs in LangChain. This page explains how to get token-level log probabilities representing the likelihood of each token in a response from OpenAI models in LangChain, including seeing top alternate tokens at each step. It covers setting parameters and accessing logprobs from responses. | |
[](https://js.langchain.com/docs/how_to/chat_streaming/): LLM should read this page when needing to stream chat model responses, when needing to stream granular events from chains containing chat models, or when needing examples of streaming chat model outputs. This page explains how to stream chat model responses and events using the standard Runnable interface methods, provides code examples for several chat model providers, and covers key details around streaming granularity. | |
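A minimal streaming sketch, assuming an OpenAI chat model (any provider with streaming support works the same way):

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// .stream() returns an async iterable of AIMessageChunk objects.
const stream = await model.stream("Write a haiku about the ocean");
for await (const chunk of stream) {
  process.stdout.write(chunk.content as string);
}
```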
[](https://js.langchain.com/docs/how_to/chat_token_usage_tracking/): LLM should read this page when tracking token usage for chat models, streaming chat responses, or using callbacks to get model output and token usage. This page explains how to access token usage information from chat model responses, how to get token usage during streaming, and how to use callbacks to retrieve full model output including token usage. | |
[](https://js.langchain.com/docs/how_to/tool_results_pass_to_model/): LLM should read this page when it needs to pass tool outputs to chat models, when it needs to use chat models to call tools, when it needs to invoke tool functions using arguments generated by a chat model. This page explains how to define tools, instantiate a chat model, get the model to generate tool calls, invoke the tool functions using the arguments provided by the model, and feed the tool results back to the model to generate a final answer. | |
[](https://js.langchain.com/docs/how_to/tool_streaming/): LLM should read this page when needing to stream tool calls from an LLM, needing to parse streamed tool call data, or wanting to understand how tool call streaming works. This page explains how to stream tool calls from an LLM, how the streamed data is structured with ToolCallChunk objects, and how to accumulate and parse the streamed tool call data. | |
[](https://js.langchain.com/docs/how_to/tools_few_shot/): LLM should read this page when: 1) Using few-shot prompting with tool calling; 2) Demonstrating a new operation or concept to a tool; 3) Providing examples to guide tool calling behavior. This page explains how to provide few-shot examples in the form of simulated conversations to steer a chat model's tool calling behavior, with a specific example of teaching a "divide" operation represented by a custom symbol. | |
[](https://js.langchain.com/docs/how_to/tool_choice/): LLM should read this page when needing to force a certain tool choice for an LLM application, when using chat models with tools, or when needing a model to select and use a specific tool. This page covers how to force an LLM to select a specific tool for a task by using the 'tool_choice' parameter, including forcing it to use any tool, as well as code examples demonstrating the functionality. | |
[](https://js.langchain.com/docs/how_to/tool_calling_parallel/): LLM should read this page when learning how to disable parallel tool calling for OpenAI models, binding tools to OpenAI models, or calling tools through OpenAI models. This page explains how to disable parallel tool calling for OpenAI models, which by default call tools in parallel. It shows code examples for binding tools to OpenAI models and invoking tool calls, with the parallel_tool_calls parameter set to false to force sequential tool calling instead of parallel. | |
[](https://js.langchain.com/docs/how_to/chat_models_universal_init/): LLM should read this page when: 1) Initializing and configuring different chat models in an LLM application 2) Allowing end users to specify the model provider and model to use 3) Declaratively binding tools, configuring structured output, etc. with a configurable model. The page covers how to use the initChatModel() helper to initialize different chat model integrations from OpenAI, Anthropic, Google, etc. It shows how to create configurable models that can be initialized with different models/providers at runtime, including specifying configurable fields and default values. It also demonstrates using configurable models declaratively with tool binding, structured output, etc. | |
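A rough sketch of the helper described above, assuming `initChatModel` is imported from `langchain/chat_models/universal` (model names are illustrative):

```typescript
import { initChatModel } from "langchain/chat_models/universal";

// Initialize a concrete model by name and provider...
const gpt = await initChatModel("gpt-4o-mini", {
  modelProvider: "openai",
  temperature: 0,
});

// ...or build a configurable model and choose the provider at call time.
const configurable = await initChatModel(undefined, {
  configurableFields: ["model", "modelProvider"],
});
await configurable.invoke("Hello!", {
  configurable: { model: "claude-3-5-sonnet-20240620", modelProvider: "anthropic" },
});
```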
#### Messages | |
[#message-types](https://js.langchain.com/docs/concepts/#message-types): LLM should read this page when building AI applications, integrating external tools or data sources, or understanding core LangChain concepts. Conceptual guide explaining key LangChain concepts like chat models, message types, tool calling, structured outputs, retrieval, RAG, agents, prompts, callbacks, tracing, and evaluation. | |
[](https://js.langchain.com/docs/how_to/trim_messages/): LLM should read this page when needing to trim messages to fit within model context windows, working with long message histories, or chaining trimming functionality. How to trim messages using various strategies, including getting last/first tokens, writing custom token counters, chaining trimming with other components, and using with chat message histories. | |
[](https://js.langchain.com/docs/how_to/filter_messages/): LLM should read this page when needing to filter message lists, needing to filter by message type/id/name, needing to chain message filtering with other operations. Describes how to filter lists of messages (e.g. Human, AI, System) by including/excluding specific types, names, or IDs. Shows basic usage and chaining examples. | |
[](https://js.langchain.com/docs/how_to/merge_message_runs/): LLM should read this page when it needs to merge consecutive messages of the same type, when it needs to chain the merging operation with other components, or when it needs a reference for the mergeMessageRuns() utility function. The page explains how to use the mergeMessageRuns() function to merge consecutive messages of the same type (e.g., multiple system messages, human messages, or AI messages). It covers basic usage, chaining with other components, and provides an API reference. | |
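The three message utilities above compose naturally; a minimal sketch (the message-count token counter is an illustrative shortcut, not a real tokenizer):

```typescript
import {
  AIMessage,
  HumanMessage,
  SystemMessage,
  filterMessages,
  mergeMessageRuns,
  trimMessages,
} from "@langchain/core/messages";

const history = [
  new SystemMessage("You are a helpful assistant."),
  new HumanMessage("Hi!"),
  new HumanMessage("What's the weather?"),
  new AIMessage("Sunny and mild."),
];

// Keep only human messages.
const humansOnly = filterMessages(history, { includeTypes: ["human"] });

// Merge the two consecutive human messages into a single message.
const merged = mergeMessageRuns(history);

// Keep the most recent messages; counting each message as one "token"
// makes maxTokens act as a message count (illustrative counter only).
const trimmed = await trimMessages(history, {
  maxTokens: 2,
  strategy: "last",
  tokenCounter: (msgs) => msgs.length,
});
```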
#### LLMs | |
[](https://js.langchain.com/docs/concepts/text_llms/): LLM should read this page when learning about older language models, implementing string-in/string-out models, or understanding the difference between chat models and traditional language models. This page explains that LangChain has implementations for older language models that take a string as input and return a string as output, referred to as string-in, string-out LLMs. It recommends using the newer chat models instead. | |
[](https://js.langchain.com/docs/how_to/llm_caching/): LLM should read this page when looking to reduce latency and costs by caching model responses, when considering different caching options or when integrating a caching solution. This page covers how to cache LLM and chat model responses in memory or using third-party caching services like Redis, Momento, Upstash Redis, Vercel KV and Cloudflare KV. | |
[](https://js.langchain.com/docs/how_to/custom_llm/): LLM should read this page when building a custom LLM class, implementing custom LLM capabilities like streaming, and handling custom metadata. Shows how to create a custom LLM class by extending the LLM or BaseLLM class, implementing required methods like _call and _generate, and handling options like streaming and passing metadata. | |
[](https://js.langchain.com/docs/how_to/streaming_llm/): LLM should read this page when they need to stream responses token-by-token from an LLM, when they need to use a callback handler to process each token incrementally, or when they need to handle streaming differently based on whether the provider supports token-by-token streaming. This page explains how to use the .stream() method to get a readable stream of tokens from an LLM and how to use a CallbackHandler to handle streaming events like processing each new token. | |
[](https://js.langchain.com/docs/how_to/llm_token_usage_tracking/): LLM should read this page when developing applications that use language models, tracking token usage, integrating with OpenAI or other providers that support token usage tracking. This page explains how to track token usage for language model calls, including a code example that uses callbacks to log the token usage for an OpenAI request. | |
#### Output parsers | |
[](https://js.langchain.com/docs/concepts/output_parsers/): LLM should read this page when learning how to parse unstructured LLM outputs into structured data formats, wanting to fix/correct errors in LLM outputs, or needing to handle structured data like JSON/XML/CSV. This page covers different types of output parsers in LangChain for parsing model outputs into structured formats like JSON, XML, CSV, as well as parsers for datetime, fixing outputs, and returning generic structured data. | |
[](https://js.langchain.com/docs/how_to/output_parser_structured/): LLM should read this page when 1) it needs to parse structured output from a model response, 2) it wants to validate the structured output against a schema, 3) it wants to stream partial structured output. The page covers how to use LangChain's StructuredOutputParser to parse model responses into structured data formats like JSON, validate against a schema, and stream partial outputs. | |
[](https://js.langchain.com/docs/how_to/output_parser_json/): LLM should read this page when parsing JSON output from a language model, prompting a language model to return JSON structured data, or streaming partial JSON chunks from a language model. This page explains how to use the JsonOutputParser class to prompt a language model to generate JSON structured output conforming to a specified schema, and how to handle streaming of partial JSON chunks until the full output is received. | |
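A minimal sketch of JSON parsing with streaming, assuming an OpenAI model (prompt and model are illustrative):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { JsonOutputParser } from "@langchain/core/output_parsers";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const chain = model.pipe(new JsonOutputParser());

// Streaming yields progressively more complete JSON objects.
const stream = await chain.stream(
  "Return a JSON object with keys `setup` and `punchline` containing a joke."
);
for await (const partial of stream) {
  console.log(partial); // {}, { setup: "..." }, { setup: "...", punchline: "..." }, ...
}
```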
[](https://js.langchain.com/docs/how_to/output_parser_xml/): LLM should read this page when needing to parse XML output from an LLM, looking to format LLM responses as XML, or seeking examples of prompting an LLM for structured XML data. Guide on using LangChain's XMLOutputParser to instruct language models to generate output formatted as XML, with examples of parsing and customizing the XML tags. | |
[](https://js.langchain.com/docs/how_to/output_parser_fixing/): LLM should read this page when trying to parse structured output from an LLM, handling errors when parsing structured output, or attempting to fix formatting errors in output. The page explains how to use the OutputFixingParser to attempt to fix formatting errors when parsing structured output from an LLM by calling another LLM to fix the formatting. | |
#### Document loaders | |
[](https://js.langchain.com/docs/concepts/document_loaders/): LLM should read this page when needing to load data from various sources, needing to understand LangChain's document loader interface, or looking for integrations with specific data sources. Document loaders are designed to load data from different sources like files, web pages, databases, etc. The page covers the available integrations, the document loader interface, and related resources. | |
[](https://js.langchain.com/docs/how_to/document_loader_csv/): LLM should read this page when loading CSV data, extracting specific columns from CSV data, or handling CSV input in general. Provides code examples for loading CSV data with options to extract all columns or a single specified column. Covers setup, usage, and resulting Document objects. | |
[](https://js.langchain.com/docs/how_to/document_loader_directory/): LLM should read this page when needing to load multiple document types from a directory, needing to load documents efficiently from a directory, or wanting an example of how to do so. Explains how to use the DirectoryLoader class to load documents of different file types from a directory, including JSON, CSV, text, and JSON lines formats. | |
[](https://js.langchain.com/docs/how_to/document_loader_pdf/): LLM should read this page when needing to load PDF files into LangChain, handling PDF files with multiple pages, or using custom pdfjs builds. Covers how to load PDF documents, options for handling page splits, eliminating extra spaces, and using custom pdfjs builds. | |
[](https://js.langchain.com/docs/how_to/document_loader_custom/): LLM should read this page when implementing custom document loaders, extending base document loaders, or extending text/buffer loaders. Explains how to subclass BaseDocumentLoader, TextLoader, and BufferLoader to create custom document loaders for different data sources. | |
[](https://js.langchain.com/docs/how_to/document_loader_html/): LLM should read this page when loading HTML files to use as input data, when integrating web scraped HTML data, or when working with HTML documents in general. Provides steps to install and set up UnstructuredLoader for parsing HTML files, along with code examples for loading HTML into LangChain documents. | |
[](https://js.langchain.com/docs/how_to/document_loader_markdown/): LLM should read this page when loading Markdown files for downstream processing, parsing Markdown elements like titles and lists, or retaining document structure during ingestion. Covers loading Markdown files into LangChain Document objects, including basic usage, parsing Markdown elements, and retaining structure when chunking by titles. | |
#### Text splitters | |
[](https://js.langchain.com/docs/concepts/text_splitters/): LLM should read this page when developing applications involving long documents, retrieving relevant information, or summarizing text. Introduces text splitters, why text splitting is useful, and different approaches including length-based, text-structure based, document-structure based, and semantic meaning based splitting. | |
[](https://js.langchain.com/docs/how_to/recursive_text_splitter/): LLM should read this page when needing to split long text into smaller chunks, when handling large documents for tasks like question answering, and when preprocessing text for embeddings or model input. The page covers how to use the RecursiveCharacterTextSplitter to recursively split text by specified characters or separators, with options to control chunk size and overlap. It shows examples of splitting raw text or Document objects. | |
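A minimal sketch (chunk sizes are arbitrary example values):

```typescript
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 500,   // max characters per chunk
  chunkOverlap: 50, // characters shared between adjacent chunks
});

const text = "Some long document text ..."; // placeholder input
const docs = await splitter.createDocuments([text]);
console.log(docs.map((d) => d.pageContent.length));
```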
[](https://js.langchain.com/docs/how_to/character_text_splitter/): LLM should read this page when needing to split text into chunks by character, wanting to control chunk size and overlap, or creating Document objects from text. Explains how to use CharacterTextSplitter to split text by character separator into chunks of specified character length, with options for chunk overlap and metadata propagation. | |
[](https://js.langchain.com/docs/how_to/code_splitter/): LLM should read this page when trying to split code in a programming language into chunks, when working with code in different formats like HTML or Markdown, when needing to split text while preserving code structure and syntax. This page explains how to use the RecursiveCharacterTextSplitter to split code in various programming languages like JavaScript, Python, Markdown, LaTeX, HTML, Solidity, and PHP into chunks while respecting code structure and syntax. | |
[](https://js.langchain.com/docs/how_to/split_by_token/): LLM should read this page when splitting long text into chunks, handling token limits, and using tokenizers compatible with LLMs. This page explains how to split text into chunks based on token count using the js-tiktoken tokenizer, which is tuned for OpenAI models, and provides code examples using the TokenTextSplitter class. | |
#### Embedding models | |
[](https://js.langchain.com/docs/concepts/embedding_models/): LLM should read this page when determining how to embed text for semantic similarity tasks, when selecting an embedding model for a retrieval system, when measuring document similarity through embeddings. Covers embedding text into vector representations that capture meaning, selecting and using embedding models from different providers, measuring vector similarity through mathematical operations like cosine similarity. | |
[](https://js.langchain.com/docs/how_to/embed_text/): LLM should read this page when needing to embed text for semantic search, when wanting to convert text to vector representations, when learning how to use different embedding models. This page explains how to use LangChain's Embeddings class to embed queries and documents into vector representations, enabling techniques like semantic search. | |
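A minimal sketch, assuming OpenAI embeddings (any Embeddings implementation exposes the same two methods):

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });

// Embed a single query string...
const queryVector = await embeddings.embedQuery("What is LangChain?");

// ...or a batch of documents.
const docVectors = await embeddings.embedDocuments([
  "LangChain is a framework for building LLM applications.",
  "Vector stores index embeddings for similarity search.",
]);
console.log(queryVector.length, docVectors.length);
```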
[](https://js.langchain.com/docs/how_to/caching_embeddings/): LLM should read this page when learning about caching embeddings, working with embeddings, optimizing embedding performance. This page explains how to cache and avoid recomputing embeddings using a CacheBackedEmbeddings wrapper, including examples with in-memory and Redis caches. | |
#### Vector stores | |
[#vectorstores](https://js.langchain.com/docs/concepts/#vectorstores): LLM should read this page when building LLM applications, needing conceptual explanations, or wanting to understand LangChain's key components. Provides conceptual explanations of core LangChain components like chat models, message history, tools, memory, multimodality, retrieval, agents, prompts, and evaluation; defines key terminology. | |
[](https://js.langchain.com/docs/how_to/vectorstores/): LLM should read this page when it needs to create and query a vector store, it needs to choose the right vector store for its use case, or it needs to load data into a vector store. The page covers how to create a new vector store index from documents or text chunks, how to query a vector store for similar vectors, and provides guidance on choosing the appropriate vector store integration based on different criteria. | |
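A minimal sketch using the in-memory vector store (document contents are illustrative):

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";

const docs = [
  new Document({ pageContent: "Mitochondria are the powerhouse of the cell." }),
  new Document({ pageContent: "LangChain provides vector store integrations." }),
];

const store = await MemoryVectorStore.fromDocuments(docs, new OpenAIEmbeddings());

// Query for the single most similar document.
const results = await store.similaritySearch("What powers the cell?", 1);
console.log(results[0].pageContent);
```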
#### Retrievers | |
[](https://js.langchain.com/docs/concepts/retrievers/): LLM should read this page when: 1) developing a retrieval component, 2) integrating with different retrieval systems, 3) understanding advanced retrieval patterns. The page covers the concept of retrievers in LangChain, their interface, common types (search APIs, databases, lexical search, vector stores), and advanced patterns like ensembling and source document retention. | |
[](https://js.langchain.com/docs/how_to/vectorstore_retriever/): LLM should read this page when wanting to use a vector store to retrieve relevant data for question answering, building a retrieval-augmented generation (RAG) application, or composing chains involving retrievers. This page explains how to convert a vector store into a retriever, which allows for easy chaining with other components like language models. It walks through the steps of initializing a vector store, creating a retriever from it, composing a question answering chain, and invoking that chain to get answers from the retrieved data. | |
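The conversion itself is one call; a minimal sketch (store contents are illustrative):

```typescript
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";

const store = await MemoryVectorStore.fromTexts(
  ["Mitochondria are the powerhouse of the cell."],
  [{}],
  new OpenAIEmbeddings()
);

// asRetriever() exposes the store through the Runnable interface,
// so it can be piped into prompts and models like any other component.
const retriever = store.asRetriever({ k: 1 });
const docs = await retriever.invoke("What powers the cell?");
```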
[](https://js.langchain.com/docs/how_to/multiple_queries/): LLM should read this page when building a question answering system, implementing retrieval-augmented generation, or exploring techniques to improve retrieval accuracy. This page explains how to use the MultiQueryRetriever to generate multiple queries from a given user input and retrieve relevant documents by combining the results across all generated queries. It covers the basic setup, customization options for prompts and output parsing, and provides code examples. | |
[](https://js.langchain.com/docs/how_to/contextual_compression/): LLM should read this page when needing to retrieve relevant information from documents, when dealing with long documents that contain irrelevant content, when filtering for relevant content based on a query. This page explains how to use contextual compression to filter and shorten documents based on relevance to a given query, covering methods like LLMChainExtractor, EmbeddingsFilter, and combining document transformers. | |
[](https://js.langchain.com/docs/how_to/custom_retriever/): LLM should read this page when it needs to create a custom retriever class, it wants to understand how to implement the _getRelevantDocuments method, or it needs an example of a custom retriever class. This page provides instructions and an example for creating a custom retriever class that extends the BaseRetriever class by implementing the _getRelevantDocuments method to fetch and return relevant documents from a data source. | |
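A minimal sketch of such a subclass (the class name and hard-coded "retrieval" are illustrative):

```typescript
import { BaseRetriever } from "@langchain/core/retrievers";
import { Document } from "@langchain/core/documents";

// Toy retriever that returns a hard-coded document; a real implementation
// would query an API or database here.
class StaticRetriever extends BaseRetriever {
  lc_namespace = ["myapp", "retrievers"];

  async _getRelevantDocuments(query: string): Promise<Document[]> {
    return [new Document({ pageContent: `You asked: ${query}` })];
  }
}

const retriever = new StaticRetriever();
const docs = await retriever.invoke("hello");
```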
[](https://js.langchain.com/docs/how_to/ensemble_retriever/): LLM should read this page when needing to combine results from multiple retrievers, when ensembling retriever results for better performance, when doing hybrid search combining different retrieval approaches. This page explains how to use the EnsembleRetriever to combine and rerank results from multiple retrievers, demonstrating how combining different retriever types like keyword matching and vector similarity can improve performance. | |
[](https://js.langchain.com/docs/how_to/multi_vector/): LLM should read this page when building a question-answering system, improving retrieval performance, or experimenting with multiple embeddings for a document. This page covers methods to generate multiple vector embeddings for a single document, including using smaller text chunks, creating summaries, and generating hypothetical questions. It provides code examples for each method using the MultiVectorRetriever. | |
[](https://js.langchain.com/docs/how_to/parent_document_retriever/): LLM should read this page when needing to retrieve full documents from a set of small chunks, wanting to improve retrieval relevance and context, or using reranking for better final answers. This page covers techniques for retrieving parent documents from smaller text chunks, adding contextual headers to chunks, and reranking retrieved chunks to produce more relevant and precise final outputs. | |
[](https://js.langchain.com/docs/how_to/self_query/): LLM should read this page when building applications with retrieval augmented generation functionality, when needing to filter data based on metadata attributes, or when working with structured queries on vector stores. The page explains how to use the SelfQueryRetriever class to generate structured queries from natural language queries, and apply those structured queries to a vector store to retrieve relevant documents based on both semantic similarity and metadata attribute filters. | |
[](https://js.langchain.com/docs/how_to/time_weighted_vectorstore/): LLM should read this page when: 1) Building a retriever that weighs documents by both semantic similarity and time decay 2) Implementing a "forgetful" vector store that prioritizes recently accessed documents. This page covers the TimeWeightedVectorStoreRetriever, which scores documents by a combination of their semantic similarity to the query and a time decay factor that decreases scores for less recently accessed documents. | |
[](https://js.langchain.com/docs/how_to/reduce_retrieval_latency/): LLM should read this page when trying to reduce retrieval latency, implementing adaptive retrieval, or using the MatryoshkaRetriever class. Discusses techniques like using sub-vectors for initial fast retrieval followed by re-ranking with full embeddings, and provides code examples for setting up the MatryoshkaRetriever. | |
#### Indexing | |
Indexing is the process of keeping your vectorstore in sync with the underlying data source. | |
[](https://js.langchain.com/docs/how_to/indexing/): LLM should read this page when needing to keep a vector store in sync with underlying data sources, when incremental updates are made to the data, or when data is deleted from the source. Covers using the LangChain indexing API to reindex data into a vector store, handling deduplication, deletions, and incremental updates while avoiding unnecessary recomputation of embeddings. | |
#### Tools | |
[](https://js.langchain.com/docs/concepts/tools/): LLM should read this page when seeking to understand LangChain's tool abstractions, creating custom tools, or integrating tools into applications. This page covers the concept of tools in LangChain, how to create and configure tools, use them directly or with models supporting tool calling, handle tool artifacts, and access related resources like toolkits. | |
[](https://js.langchain.com/docs/how_to/custom_tools/): LLM should read this page when creating custom tools for agents, understanding how to return artifacts from tool execution, or creating tools with structured or single inputs. Explains how to create custom tools using StructuredToolParams, the tool() wrapper function, and the DynamicStructuredTool and DynamicTool classes. Covers returning artifacts in addition to text from tools. | |
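A minimal sketch of the `tool()` wrapper (the multiply tool is a standard illustrative example):

```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

// The schema tells tool-calling models what arguments to supply.
const multiply = tool(
  async ({ a, b }: { a: number; b: number }) => String(a * b),
  {
    name: "multiply",
    description: "Multiply two numbers.",
    schema: z.object({ a: z.number(), b: z.number() }),
  }
);

console.log(await multiply.invoke({ a: 3, b: 4 })); // "12"
```

A tool defined this way can be passed straight to `model.bindTools([multiply])` for the tool-calling flows described above.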
[](https://js.langchain.com/docs/how_to/tools_builtin/): LLM should read this page when learning about built-in LangChain tools, understanding how to use toolkits, and looking for guidance on working with custom tools. This page introduces the concept of LangChain tools, demonstrates how to use built-in tools and toolkits, and provides links to further information on creating custom tools and working with tool-related features. | |
[](https://js.langchain.com/docs/how_to/tool_calling/): LLM should read this page when learning how to pass tools to chat models for the models to call, understanding what tool calls are, or learning how to bind model-specific tool formats. This page explains how to pass tools to chat models that support tool calling, describes the tool call data structure, and demonstrates binding model-specific tool formats. | |
[](https://js.langchain.com/docs/how_to/tool_results_pass_to_model/): LLM should read this page when it needs to pass tool outputs to chat models, when it needs to use chat models to call tools, when it needs to invoke tool functions using arguments generated by a chat model. This page explains how to define tools, instantiate a chat model, get the model to generate tool calls, invoke the tool functions using the arguments provided by the model, and feed the tool results back to the model to generate a final answer. | |
[](https://js.langchain.com/docs/how_to/tools_few_shot/): LLM should read this page when: 1) Using few-shot prompting with tool calling; 2) Demonstrating a new operation or concept to a tool; 3) Providing examples to guide tool calling behavior. This page explains how to provide few-shot examples in the form of simulated conversations to steer a chat model's tool calling behavior, with a specific example of teaching a "divide" operation represented by a custom symbol. | |
[](https://js.langchain.com/docs/how_to/tool_runtime/): LLM should read this page when needing to pass runtime values like user IDs to tools, or needing to bind runtime values to a tool. This page covers how to pass runtime values like user IDs to tools, either by using context variables or by dynamically generating tools at runtime and binding values to them. | |
[](https://js.langchain.com/docs/how_to/tools_error/): LLM should read this page when handling tool errors in LangChain, implementing error handling and fallbacks for tool calls, or catching and reporting errors during tool invocation. This page explains how to handle errors when calling tools with an LLM, including using try/catch blocks to catch errors, providing fallback chains with different models, and summarizing tool invocation errors. | |
[](https://js.langchain.com/docs/how_to/tool_choice/): LLM should read this page when needing to force a certain tool choice for an LLM application, when using chat models with tools, or when needing a model to select and use a specific tool. This page covers how to force an LLM to select a specific tool for a task by using the 'tool_choice' parameter, including forcing it to use any tool, as well as code examples demonstrating the functionality. | |
[](https://js.langchain.com/docs/how_to/tool_calling_parallel/): LLM should read this page when learning how to disable parallel tool calling for OpenAI models, binding tools to OpenAI models, or calling tools through OpenAI models. This page explains how to disable parallel tool calling for OpenAI models, which by default call tools in parallel. It shows code examples for binding tools to OpenAI models and invoking tool calls, with the parallel_tool_calls parameter set to false to force sequential tool calling instead of parallel. | |
[](https://js.langchain.com/docs/how_to/tool_configure/): LLM should read this page when: 1) Developing a custom tool in LangChain and needing to access the RunnableConfig 2) Wanting to invoke other runnables like chat models or retrievers from within a custom tool. This page explains how to access the RunnableConfig object from within a custom LangChain tool, which allows properly configuring and tracing sub-invocations of other runnables like chat models or retrievers. | |
[](https://js.langchain.com/docs/how_to/tool_stream_events/): LLM should read this page when configuring custom tools to stream events, accessing child runnables inside a custom tool, or enabling automatic configuration passing in certain environments. Explains how to manually pass RunnableConfig to child runnables within a custom tool to enable event streaming, provides an example of streaming chat model token generation, and shows how to enable automatic config passing using AsyncLocalStorage. | |
[](https://js.langchain.com/docs/how_to/tool_artifacts/): LLM should read this page when needing to return artifacts or metadata from a tool call, or needing to pass additional data through a tool that shouldn't be seen by the LLM. Tool outputs can include a content portion meant for the LLM and an artifact portion meant for downstream components. This page shows how to define tools that distinguish between content and artifacts, and how to access those artifacts when calling the tool from an LLM. | |
[](https://js.langchain.com/docs/how_to/convert_runnable_to_tool/): LLM should read this page when converting a Runnable to a Tool, using converted Tools in an Agent, or using a RAG chain as a Tool. This page explains how to convert LangChain Runnables to Tools that can be used by Agents and Chat Models, including examples of converting a document retriever and a RAG chain to Tools. | |
[](https://js.langchain.com/docs/how_to/tools_prompting/): LLM should read this page when: 1) Building chains that allow the LLM to call tools or functions dynamically based on its own output 2) Integrating ad-hoc tool calling capabilities into language models or chatbots 3) Creating prompts that instruct the LLM to output JSON blobs representing tool names and arguments. This page covers how to create a prompt that instructs an LLM to output JSON blobs specifying tools to invoke and their arguments. It demonstrates rendering tool descriptions, parsing the LLM's JSON output, invoking the chosen tool, and handling multiple tools. | |
#### Agents | |
:::note | |
[Introduction](https://langchain-ai.github.io/langgraphjs/): LLM should read this page when introducing LangGraph, understanding the LangGraph ecosystem, or deploying agents with LangGraph Platform. LangGraph is an open-source framework for building controllable AI agents; it integrates with LangChain and can be deployed with the LangGraph Platform; the page covers the ecosystem, deployment options, and additional resources. | |
::: | |
[](https://js.langchain.com/docs/how_to/agent_executor/): LLM should read this page when: 1) Learning how to build agents with LangChain 2) Integrating external tools and data sources into an LLM application 3) Adding memory capabilities to an LLM agent. Guide covering how to create agents (AI systems that use LLM reasoning to call external tools) with LangChain, including defining tools like web search and vector databases, binding tools to the LLM, creating agents, running agents, and adding memory. | |
[](https://js.langchain.com/docs/how_to/migrate_agent/): LLM should read this page when migrating from legacy LangChain agents to LangGraph, accessing examples of using LangGraph with memory and streaming, and understanding how to handle parameters like maxIterations and returnIntermediateSteps. This page covers how to migrate from using LangChain's AgentExecutor to using LangGraph's createReactAgent function, including examples for handling memory, streaming, maxIterations, and returnIntermediateSteps. It explains how to use system messages, message modifiers, and checkpointers for handling memory in LangGraph. It also shows how to stream and access intermediate steps using LangGraph. | |
#### Callbacks | |
[](https://js.langchain.com/docs/concepts/callbacks/): LLM should read this page when building applications that require logging, monitoring, or streaming data generated during the application's execution. The page explains how LangChain's callback system allows hooking into various stages of an application by subscribing to events triggered during execution, and provides details on available events, callback handlers, and how to pass callbacks to modules. | |
[](https://js.langchain.com/docs/how_to/callbacks_runtime/): LLM should read this page when needing to pass callbacks into a module at runtime, when wanting to avoid manually attaching callbacks to each nested object, when needing examples of passing callbacks to LangChain modules. This page explains how to pass CallbackHandlers to LangChain modules at runtime using the 'callbacks' keyword argument, preventing the need to manually attach handlers to each nested object. It provides examples of using LangChain's built-in ConsoleCallbackHandler. | |
[](https://js.langchain.com/docs/how_to/callbacks_attach/): LLM should read this page when: 1) Attaching callbacks to a chain to run for all nested module runs 2) Reusing callbacks across multiple executions of a chain 3) Binding callbacks to avoid passing them in for each invocation. This page explains how to attach callbacks (e.g. ConsoleCallbackHandler) to a chain using the withConfig() method, allowing callbacks to be reused across multiple executions without needing to pass them in each time. It also mentions that bound callbacks will run for nested module runs within the chain. | |
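A minimal sketch of attaching a handler once with `withConfig()` (the model choice is illustrative):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { ConsoleCallbackHandler } from "@langchain/core/tracers/console";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// The bound handler runs for this component and all nested runs,
// without being passed to every invoke() call.
const modelWithLogging = model.withConfig({
  callbacks: [new ConsoleCallbackHandler()],
});
await modelWithLogging.invoke("Hello!");
```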
[](https://js.langchain.com/docs/how_to/callbacks_constructor/): LLM should read this page when needing to pass callbacks into a module constructor, wanting to track events for a specific instance, or looking for an example of using ConsoleCallbackHandler. Shows how to pass callbacks directly into the constructor of modules like ChatAnthropic, allowing callbacks to be triggered only for that instance and any nested runs. | |
[](https://js.langchain.com/docs/how_to/custom_callbacks/): LLM should read this page when needing to create custom callback handlers for LangChain, or needing to handle specific events in the LangChain execution lifecycle. This page explains how to create custom callback handlers in LangChain by defining functions to handle desired events, and attaching the handler object to relevant components. | |
[](https://js.langchain.com/docs/how_to/callbacks_serverless/): LLM should read this page when needing to await callbacks in serverless environments, needing to make callbacks blocking, and needing to use the awaitAllCallbacks method. This page explains how to make callbacks blocking in serverless environments by setting the LANGCHAIN_CALLBACKS_BACKGROUND environment variable to false or using the awaitAllCallbacks method, and provides examples showing the difference in timing between blocking and non-blocking callbacks. | |
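A minimal sketch of the blocking pattern in a serverless handler (the handler body is a placeholder):

```typescript
import { awaitAllCallbacks } from "@langchain/core/callbacks/promises";

export async function handler() {
  try {
    // ... invoke chains or models here ...
  } finally {
    // Flush background callback work (e.g. traces) before the
    // serverless runtime freezes or exits.
    await awaitAllCallbacks();
  }
}
```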
[](https://js.langchain.com/docs/how_to/callbacks_custom_events/): LLM should read this page when: 1) Trying to dispatch custom events from within a Runnable 2) Wanting to surface custom events via the Stream Events API or custom callback handlers. This page explains how to dispatch custom callback events with a name and data payload from within a Runnable, which can then be consumed via the Stream Events API or custom callback handlers. It provides code examples for both methods. | |
#### Custom | |
All LangChain components can easily be extended to support your own versions. | |
[](https://js.langchain.com/docs/how_to/custom_chat/): LLM should read this page when: 1) Needing to create a custom chat model wrapper 2) Wanting to integrate a custom chat model with LangChain's callback system 3) Needing to enable tracing for a custom chat model. Covers how to extend SimpleChatModel or BaseChatModel classes to create a custom chat model wrapper with required methods like _call and _generate, optional streaming support, and enabling tracing by implementing the invocationParams method. | |
[](https://js.langchain.com/docs/how_to/custom_llm/): LLM should read this page when building a custom LLM class, implementing custom LLM capabilities like streaming, and handling custom metadata. Shows how to create a custom LLM class by extending the LLM or BaseLLM class, implementing required methods like _call and _generate, and handling options like streaming and passing metadata. | |
[](https://js.langchain.com/docs/how_to/custom_retriever/): LLM should read this page when it needs to create a custom retriever class, it wants to understand how to implement the _getRelevantDocuments method, or it needs an example of a custom retriever class. This page provides instructions and an example for creating a custom retriever class that extends the BaseRetriever class by implementing the _getRelevantDocuments method to fetch and return relevant documents from a data source. | |
[](https://js.langchain.com/docs/how_to/document_loader_custom/): LLM should read this page when implementing custom document loaders, extending base document loaders, or extending text/buffer loaders. Explains how to subclass BaseDocumentLoader, TextLoader, and BufferLoader to create custom document loaders for different data sources. | |
[](https://js.langchain.com/docs/how_to/custom_callbacks/): LLM should read this page when needing to create custom callback handlers for LangChain, or needing to handle specific events in the LangChain execution lifecycle. This page explains how to create custom callback handlers in LangChain by defining functions to handle desired events, and attaching the handler object to relevant components. | |
[](https://js.langchain.com/docs/how_to/custom_tools/): LLM should read this page when creating custom tools for agents, understanding how to return artifacts from tool execution, or creating tools with structured or single inputs. Explains how to create custom tools using StructuredToolParams, the tool() wrapper function, and the DynamicStructuredTool and DynamicTool classes. Covers returning artifacts in addition to text from tools. | |
[](https://js.langchain.com/docs/how_to/callbacks_custom_events/): LLM should read this page when: 1) Trying to dispatch custom events from within a Runnable 2) Wanting to surface custom events via the Stream Events API or custom callback handlers. This page explains how to dispatch custom callback events with a name and data payload from within a Runnable, which can then be consumed via the Stream Events API or custom callback handlers. It provides code examples for both methods. | |
#### Generative UI | |
[](https://js.langchain.com/docs/how_to/generative_ui/): LLM should read this page when developing generative UIs using LangChain, when creating interactive UI elements from LLM outputs, or when using LangChain's built-in utilities for streaming React UI from tool calls. This page provides guidance and code examples for building LLM-generated UIs using LangChain, including utilities to yield React elements inside runnables/tool calls, stream UI updates to the client, and expose server actions as client hooks. | |
[](https://js.langchain.com/docs/how_to/stream_agent_client/): LLM should read this page when: 1) Streaming agent data to the client-side in a web application 2) Building a web interface for a LangChain agent with streaming capabilities. This page demonstrates how to use LangChain and the 'ai' package to create a server-side agent that streams its execution steps and outputs to the client-side of a web application in real-time using React Server Components. | |
[](https://js.langchain.com/docs/how_to/stream_tool_client/): LLM should read this page when: 1) Developing a React application that needs to stream structured output data from a language model to the client 2) Integrating LangChain with a frontend application and streaming tool results. This page explains how to stream structured output from LangChain tools to a React client using React Server Components. It covers setting up the required dependencies, defining a schema for the tool data, implementing an executeTool function to handle the tool logic and streaming, binding the tool to a language model, and piping the components together to stream the results to the client. | |
#### Multimodal | |
[](https://js.langchain.com/docs/how_to/multimodal_inputs/): LLM should read this page when needing to pass multimodal data like images to a model, needing to pass multiple images to a model, or needing to vary how images are passed (as byte strings vs URLs). This page shows how to pass multimodal input like images directly to models like ChatAnthropic and ChatOpenAI, including techniques for passing single images, multiple images, images as byte strings vs URLs, and handling different model providers' expected input formats. | |
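A minimal sketch of an image input using OpenAI-style content blocks (the URL and model are placeholders):

```typescript
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Message content can be a list of blocks mixing text and images.
const message = new HumanMessage({
  content: [
    { type: "text", text: "What is in this image?" },
    { type: "image_url", image_url: { url: "https://example.com/cat.png" } },
  ],
});
const response = await model.invoke([message]);
```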
[](https://js.langchain.com/docs/how_to/multimodal_prompts/): LLM should read this page when using multimodal inputs like images in prompts, providing multimodal examples to models, or working with models that can process multimodal data. This page demonstrates how to pass images and other multimodal data to language models using prompt templates in LangChain, including handling single and multiple images. | |
[](https://js.langchain.com/docs/how_to/tool_calls_multimodal/): LLM should read this page when: 1) Using multimodal inputs like images or audio with language models 2) Calling tools from language models using multimodal data. The page demonstrates how to pass multimodal data like images or audio to language models like OpenAI's GPT-4, Anthropic's Claude, and Google's Gemini, and how to call tools from these models using the multimodal inputs. | |
### Use cases | |
These guides cover use-case specific details. | |
#### Q&A with RAG | |
Retrieval Augmented Generation (RAG) is a way to connect LLMs to external sources of data. | |
[](https://js.langchain.com/docs/tutorials/rag/): LLM should read this page when building a question-answering application over unstructured data, implementing retrieval-augmented generation (RAG), or understanding the key steps involved in RAG. This tutorial covers building a basic RAG application, including loading and indexing data, retrieving relevant information using a vector store, and generating an answer from a language model using the retrieved context. It also discusses query analysis for optimizing retrieval. | |
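The core retrieve-then-generate loop fits in a few lines; a minimal sketch (the indexed text, prompt wording, and model are illustrative):

```typescript
import { ChatOpenAI, OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

// Index: embed some source text into a vector store.
const store = await MemoryVectorStore.fromTexts(
  ["LangSmith lets you trace and evaluate LLM applications."],
  [{}],
  new OpenAIEmbeddings()
);
const retriever = store.asRetriever();

const prompt = ChatPromptTemplate.fromTemplate(
  "Answer using only this context:\n{context}\n\nQuestion: {question}"
);
const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Retrieve, stuff the documents into the prompt, generate, and parse.
const question = "What does LangSmith do?";
const docs = await retriever.invoke(question);
const context = docs.map((d) => d.pageContent).join("\n");
const answer = await prompt
  .pipe(model)
  .pipe(new StringOutputParser())
  .invoke({ context, question });
```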
[](https://js.langchain.com/docs/how_to/qa_chat_history_how_to/): LLM should read this page when developing a conversational Q&A application, incorporating chat history into an LLM, or building a retrieval-augmented generation system. Covers how to incorporate chat history into a Q&A system using chains or agents, including techniques like task decomposition and prompting strategies. Explains setting up retrievers, creating agents/chains, managing state, and tying components together. | |
[](https://js.langchain.com/docs/how_to/qa_streaming/): LLM should read this page when building a question-answering application, implementing streaming responses, or adding document sources to generated outputs. This page explains how to stream responses from a question-answering chain, including retrieving and displaying the document sources used to generate the answer. | |
[](https://js.langchain.com/docs/how_to/qa_sources/): LLM should read this page when developing question-answering applications, building retrieval-augmented generation systems, or working with document retrieval and sourcing. Provides guidance on returning source documents used to generate a response, alongside implementation examples using LangChain and vector stores. | |
[](https://js.langchain.com/docs/how_to/qa_citations/): LLM should read this page when building a question answering application that requires citation for the sources used, when implementing a tool that assists users in finding relevant information and citations, when exploring techniques for extracting citations and source references from text. The page covers various methods to return citations from a question-answering chain, including using tool calling to have the model output document IDs and relevant quotes, using direct prompting with output parsing, and post-processing retrieved documents to compress and filter the content before passing it to the model. | |
[](https://js.langchain.com/docs/how_to/qa_per_user/): LLM should read this page when building a retrieval app that needs to handle data from multiple users, or when building a question-answering system that needs to retrieve user-specific data. This page explains how to configure a retrieval chain to only retrieve information for a specific user by setting up the retriever to support multiple users and passing the user identifier as a configurable field to the chain. | |
#### Extraction | |
Extraction is when you use LLMs to extract structured information from unstructured text. | |
[](https://js.langchain.com/docs/tutorials/extraction/): LLM should read this page when building an information extraction application, learning how to extract structured data from unstructured text, or needing to handle multiple entities or long text during extraction. This guide covers how to build an extraction chain using LangChain to extract structured information from unstructured text. It explains how to define a schema, create an extractor, handle multiple entities, and use reference examples to improve performance. It also mentions additional resources for handling long text and using parsing approaches. | |
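Extraction typically reuses the structured-output machinery; a minimal sketch (the person schema and example sentence are illustrative):

```typescript
import { z } from "zod";
import { ChatOpenAI } from "@langchain/openai";

// Optional fields let the model decline to guess when the text
// doesn't mention an attribute.
const person = z.object({
  name: z.string().optional().describe("The person's name"),
  hairColor: z.string().optional().describe("The person's hair color"),
});

const model = new ChatOpenAI({ model: "gpt-4o-mini" });
const extractor = model.withStructuredOutput(person, { name: "person" });

const result = await extractor.invoke(
  "Alan Smith is 6 feet tall and has blond hair."
);
// e.g. { name: "Alan Smith", hairColor: "blond" }
```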
[](https://js.langchain.com/docs/how_to/extraction_examples/): LLM should read this page when needing to improve information extraction quality, needing to use few-shot examples to improve LLM performance, or needing to work with reference examples. The page explains how to use reference examples in the form of example input-output pairs to enhance an LLM's information extraction capabilities, including defining the schema, creating examples in the expected format, and creating an extractor to invoke with or without examples. | |
[](https://js.langchain.com/docs/how_to/extraction_long_text/): LLM should read this page when handling long text documents, considering different strategies for extracting information, and choosing the appropriate approach based on the trade-offs. This page discusses three main strategies to handle long text: using a larger context window LLM, chunking the text and extracting from each chunk (brute force), and retrieving only relevant chunks before extracting (RAG). It also covers common issues with each approach. | |
[](https://js.langchain.com/docs/how_to/extraction_parse/): LLM should read this page when 1) learning how to extract information without function calling, 2) exploring custom parsing approaches, or 3) understanding structured output parsing. The page explains how to use LLMs for information extraction without relying on function calling, covering both structured output parsing and custom parsing techniques with code examples. | |
#### Chatbots | |
Chatbots involve using an LLM to have a conversation. | |
[](https://js.langchain.com/docs/tutorials/chatbot/): LLM should read this page when building a chatbot, when persisting conversation history, or when managing long conversations. This page explains how to build a chatbot using LangChain and LangGraph, including persisting conversation history, using prompt templates, and managing long conversations by trimming the message history. | |
[](https://js.langchain.com/docs/how_to/chatbots_memory/): LLM should read this page when: 1. Developing a chatbot and needing to add memory/conversation history 2. Implementing techniques to manage or modify chat history 3. Deciding how to handle long-running conversations with an LLM. This page covers various techniques for adding memory to chatbots, including simply passing previous messages into the prompt, automatically managing chat history using LangGraph persistence, trimming old messages to reduce context window size, and generating summaries of long conversations. | |
[](https://js.langchain.com/docs/how_to/chatbots_retrieval/): LLM should read this page when building chatbots with retrieval functionality, understanding query transformation for better retrieval, adding context from external data sources to chatbot responses. The page covers creating retrievers to fetch relevant data from sources like documentation, constructing chains that use retrieved data as context for responses, transforming queries to improve retrieval, streaming responses from retrieval-augmented chatbot chains. | |
[](https://js.langchain.com/docs/how_to/chatbots_tools/): LLM should read this page when: (1) building a conversational chatbot with tool integration, (2) adding retrieval capabilities to a chatbot, (3) learning how to manage conversation history in chatbots. This page explains how to create a conversational agent (chatbot) that can interact with other systems and APIs using tools. It provides step-by-step instructions on setting up the required dependencies, instantiating chat models, creating an agent with tools, running the agent, handling conversational responses, and managing chat history. | |
#### Query analysis | |
Query Analysis is the task of using an LLM to generate a query to send to a retriever. | |
[#query-analysis](https://js.langchain.com/docs/tutorials/rag/#query-analysis): LLM should read this page when building a question-answering application over textual data, implementing retrieval-based generation, or integrating query analysis into a retrieval pipeline. This page covers indexing textual data, implementing retrieval and generation steps for a retrieval-augmented generation (RAG) application, incorporating query analysis to optimize retrieval queries, and orchestrating the overall application using LangGraph. | |
[](https://js.langchain.com/docs/how_to/query_few_shot/): LLM should read this page when: 1) Needing guidance on how to add few-shot examples to prompts 2) Wanting to improve query analysis performance by providing the LLM with guiding examples. This page explains how to add few-shot examples to prompts for query analysis tasks, demonstrating the setup process, defining query schemas, generating queries, and tuning prompts with example inputs and outputs. | |
[](https://js.langchain.com/docs/how_to/query_no_queries/): LLM should read this page when: 1) Building a question answering system that needs to handle cases where no queries are generated 2) Integrating a retrieval system with query analysis that may not generate queries in some cases. This page explains how to handle scenarios where query analysis techniques do not generate any queries, by inspecting the output of the query analysis step and conditionally executing retrieval based on that. | |
[](https://js.langchain.com/docs/how_to/query_multiple_queries/): LLM should read this page when handling queries that may generate multiple retrieval queries, combining results from multiple retrievers, or dealing with retrieval from multiple data sources. This page covers techniques for generating multiple queries from a single input, running all generated queries against a data source, and combining the retrieved results. | |
[](https://js.langchain.com/docs/how_to/query_multiple_retrievers/): LLM should read this page when: 1) Handling queries that require retrieving information from multiple data sources, 2) Building a question answering system that needs to dynamically select the appropriate retriever based on the query, 3) Integrating multiple retrieval mechanisms into a single application. The page explains how to perform query analysis to determine which retriever to use for a given query, and then how to dynamically select and invoke the appropriate retriever based on the query analysis result. | |
[](https://js.langchain.com/docs/how_to/query_constructing_filters/): LLM should read this page when constructing filters for search queries, handling structured data queries, and interfacing with vector databases. Explains how to convert a structured query schema (e.g. Zod) into a format that can be used by LangChain retrievers, covering the use of query translators like ChromaTranslator. | |
[](https://js.langchain.com/docs/how_to/query_high_cardinality/): LLM should read this page when dealing with high cardinality categorical data, constructing queries involving categorical filters, or handling large vocabularies of values. This page covers techniques for handling categorical variables with many unique values, such as adding all values to the prompt, using vector store indexing to find relevant values, and combining these approaches. | |
#### Q&A over SQL + CSV | |
You can use LLMs to do question answering over tabular data. | |
[](https://js.langchain.com/docs/tutorials/sql_qa/): LLM should read this page when building a question/answering system over a SQL database, creating a SQL agent for question answering, handling high-cardinality columns in databases. Provides step-by-step guides and code examples for creating a Q&A system over SQL data using LangChain's chains and agents, with techniques like SQL query generation, error handling, and dealing with high-cardinality columns. | |
[](https://js.langchain.com/docs/how_to/sql_prompting/): LLM should read this page when needing to improve SQL query generation, needing to provide SQL-specific context to a model, or needing to dynamically select relevant examples. This page covers strategies like dialect-specific prompting, including table definitions and sample rows, using few-shot examples, and dynamically selecting relevant examples based on semantic similarity. | |
[](https://js.langchain.com/docs/how_to/sql_query_checking/): LLM should read this page when: 1) Building a SQL query validation system 2) Implementing error handling for SQL queries 3) Optimizing SQL query generation. The page covers strategies to validate SQL queries generated by LLMs, including using the LLM itself to check for common mistakes and performing query generation and validation in a single model call. It also discusses handling invalid queries. | |
[](https://js.langchain.com/docs/how_to/sql_large_db/): LLM should read this page when dealing with SQL databases with many tables or high-cardinality columns, or needing to handle queries against large databases. Covers techniques for dynamically including relevant table schemas and feature values in prompts, like retrieving relevant tables using function-calling, grouping tables into categories, and creating vector stores of proper nouns to check spelling. | |
#### Q&A over graph databases | |
You can use an LLM to do question answering over graph databases. | |
[](https://js.langchain.com/docs/tutorials/graph/): LLM should read this page when building a question answering system over graph databases, implementing a semantic layer over a graph database, adding graph querying capabilities to an application. This page provides guidance on creating a Q&A chain over a graph database like Neo4j, including setting up the database, retrieving schema information, and invoking a chain to generate natural language answers from graph queries. | |
[](https://js.langchain.com/docs/how_to/graph_mapping/): LLM should read this page when querying graph databases, mapping user inputs to database values, and generating database queries from natural language. This page explains how to detect entities like people and movies in user input, map those entities to values in a Neo4j database, and generate a Cypher query to answer the user's question based on the mapped entities and database schema. | |
[](https://js.langchain.com/docs/how_to/graph_semantic/): LLM should read this page when needing to interact with a graph database like Neo4j using LLMs, or wanting to create a semantic layer over a database using tools. The page covers how to set up Neo4j with example data, define custom tools with Cypher templates as a semantic layer, and use an OpenAI LLM agent to interact with the graph database through those tools. | |
[](https://js.langchain.com/docs/how_to/graph_prompting/): LLM should read this page when trying to improve graph database query generation, incorporating relevant examples into prompts, or dynamically selecting examples based on input similarity. This guide covers prompting strategies like few-shot examples and dynamic example selection via vector similarity to improve results when querying graph databases using natural language. | |
[](https://js.langchain.com/docs/how_to/graph_constructing/): LLM should read this page when constructing knowledge graphs from unstructured text, working with graph databases, using LLMs for information extraction. This page explains how to use LLM Graph Transformer to extract structured information from text and store it in a Neo4j graph database to construct a knowledge graph. | |
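Construction boils down to extracting (node, relationship, node) triples from free text and writing them into the graph. A minimal sketch, reusing the `llm` and `graph` setup style from the sketches above:

```typescript
import { LLMGraphTransformer } from "@langchain/community/experimental/graph_transformers/llm";
import { Document } from "@langchain/core/documents";
import { ChatOpenAI } from "@langchain/openai";
import { Neo4jGraph } from "@langchain/community/graphs/neo4j_graph";

const llm = new ChatOpenAI({ model: "gpt-4o-mini", temperature: 0 });
const graph = await Neo4jGraph.initialize({
  url: process.env.NEO4J_URI!,
  username: process.env.NEO4J_USERNAME!,
  password: process.env.NEO4J_PASSWORD!,
});

// Extract graph structure from unstructured text, then persist it.
const transformer = new LLMGraphTransformer({ llm });
const graphDocuments = await transformer.convertToGraphDocuments([
  new Document({ pageContent: "Marie Curie, born in 1867, won two Nobel Prizes." }),
]);
await graph.addGraphDocuments(graphDocuments);
```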
[Introduction](https://langchain-ai.github.io/langgraphjs/): LLM should read this page when introducing LangGraph, understanding the LangGraph ecosystem, or deploying agents with LangGraph Platform. LangGraph is an open-source framework for building controllable AI agents; it integrates with LangChain and can be deployed with the LangGraph Platform. The page covers the ecosystem, deployment options, and additional resources. | |
LangGraph.js is an extension of LangChain aimed at building robust and stateful multi-actor applications with LLMs by modeling steps as edges and nodes in a graph. | |
LangGraph.js documentation is currently hosted on a separate site. | |
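The edges-and-nodes model is easiest to see in a tiny graph. A minimal sketch, assuming @langchain/langgraph is installed; the node name and state fields are illustrative:

```typescript
import { StateGraph, Annotation, START, END } from "@langchain/langgraph";

// State shared by every node in the graph.
const State = Annotation.Root({
  question: Annotation<string>,
  answer: Annotation<string>,
});

// Nodes are steps; edges define the order they run in.
const app = new StateGraph(State)
  .addNode("respond", async (state) => ({ answer: `You asked: ${state.question}` }))
  .addEdge(START, "respond")
  .addEdge("respond", END)
  .compile();

const result = await app.invoke({ question: "What is LangGraph?" });
```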
[How-to Guides](https://langchain-ai.github.io/langgraphjs/how-tos/): LLM should read this page when looking for how-to guides on using LangGraph, deploying LangGraph applications, or troubleshooting LangGraph errors. Provides detailed how-to guides on installation, LangGraph features (controllability, persistence, memory, human-in-the-loop, time travel, streaming, tool calling, subgraphs, multi-agent, state management), prebuilt ReAct agent, LangGraph Platform (application structure, deployment, assistants, threads, runs, streaming, frontend/UI, human-in-the-loop, double-texting, webhooks, cron jobs, LangGraph Studio), and troubleshooting common errors. | |
[](https://docs.smith.langchain.com/): LLM should read this page when looking to build production-ready LLM applications, when seeking tools for observability and evaluation of LLM applications, or when aiming to iterate on prompts effectively. This page provides an overview of LangSmith, a platform that offers observability features to monitor LLM applications, tools for evaluating application performance and collecting human feedback, and prompt engineering capabilities for iterating on prompts. | |
LangSmith allows you to closely trace, monitor and evaluate your LLM application. | |
It seamlessly integrates with LangChain and LangGraph.js, and you can use it to inspect and debug individual steps of your chains as you build. | |
LangSmith documentation is hosted on a separate site. | |
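Getting traces flowing requires no changes to your chains, only environment configuration. A sketch assuming the current LANGSMITH_* variable names (the older LANGCHAIN_* equivalents also work); the project name is a stand-in:

```typescript
// Enable LangSmith tracing for all LangChain runs in this process.
process.env.LANGSMITH_TRACING = "true";
process.env.LANGSMITH_API_KEY = "<your-langsmith-api-key>";
process.env.LANGSMITH_PROJECT = "my-app"; // optional: group runs by project
```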
[](https://docs.smith.langchain.com/how_to_guides/): LLM should read this page when looking to get started with LangSmith, needing an overview of LangSmith's key features, or wanting to integrate LangSmith with LangChain. LangSmith is a platform for building production-grade LLM applications with observability, evaluation, and prompt engineering capabilities, allowing integration with LangChain's open-source frameworks. | |
We highlight a few sections of the LangSmith documentation that are particularly relevant to LangChain below: | |
#### Evaluation | |
Evaluating performance is a vital part of building LLM-powered applications. | |
LangSmith helps with every step of the process from creating a dataset to defining metrics to running evaluators. | |
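A minimal sketch of running an evaluation with the LangSmith JS SDK. The dataset name, input/output keys, and the stub target are hypothetical; in practice the target calls your application:

```typescript
import { evaluate } from "langsmith/evaluation";
import type { Run, Example } from "langsmith/schemas";

// A custom evaluator comparing the target's output to the dataset's
// reference output (keys are placeholders).
const exactMatch = (run: Run, example?: Example) => ({
  key: "exact_match",
  score: run.outputs?.answer === example?.outputs?.answer ? 1 : 0,
});

await evaluate(
  async (input: Record<string, any>) => {
    // Call your application here; this echo is a placeholder.
    return { answer: `stub answer for: ${input.question}` };
  },
  { data: "my-qa-dataset", evaluators: [exactMatch] }
);
```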
[#evaluation](https://docs.smith.langchain.com/how_to_guides/#evaluation): LLM should read this page when getting started with LangSmith, understanding LangSmith's core features, or deciding if LangSmith is a good fit for their LLM application. LangSmith is a platform for building, monitoring, and evaluating production LLM applications, with observability, evaluation, and prompt engineering capabilities. | |
#### Tracing | |
Tracing gives you observability inside your chains and agents, and is vital in diagnosing issues. | |
[](https://docs.smith.langchain.com/how_to_guides/tracing/trace_with_langchain/): LLM should read this page when needing to trace and log LangChain invocations with LangSmith, needing to customize trace metadata and properties, or needing to interoperate between LangChain and LangSmith SDK. This page covers integrating LangSmith tracing with LangChain for Python and JavaScript/TypeScript, including installation, quick start, selective tracing, project/metadata configuration, customizing run names/IDs, accessing run IDs, distributed tracing, and interoperability between LangChain and LangSmith SDK. | |
[#add-metadata-and-tags-to-traces](https://docs.smith.langchain.com/how_to_guides/tracing/trace_with_langchain/#add-metadata-and-tags-to-traces): LLM should read this page when tracing LangChain applications with LangSmith, configuring tracing options for LangChain, or integrating LangChain with LangSmith. Provides instructions for logging traces from LangChain in Python and JS/TS, customizing trace metadata and configurations, interoperability between LangChain and LangSmith SDK, and distributed tracing with LangChain. | |
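Because tags, metadata, and run names are standard `RunnableConfig` fields, they can be attached per invocation. A sketch with placeholder keys and values:

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o-mini" });

// Tags, metadata, and a custom run name travel with the trace, making it
// easy to filter and group runs in the LangSmith UI.
await model.invoke("Hello!", {
  runName: "greeting-call",
  tags: ["production", "demo"],
  metadata: { userId: "user-123" },
});
```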
[](https://docs.smith.langchain.com/how_to_guides/tracing/): LLM should read this page when configuring observability and tracing for LLM applications, viewing and interacting with traces, and setting up automations and dashboards. This page provides step-by-step guides for tracing configuration, integrations, advanced tracing options, interacting with traces in the UI and API, creating dashboards, setting up automation rules and online evaluations, and logging user feedback. |