LANGCHAIN
I'll break down this code and explain how it uses LangChain with Pydantic to create structured outputs from LLM responses.
1. Imports and Setup
```python
from dotenv import load_dotenv
import os
from typing import Optional

from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel, Field, model_validator

load_dotenv()  # read environment variables (e.g. OPENAI_API_KEY) from a .env file
```
- `load_dotenv()` loads environment variables from a `.env` file
- The imports bring in the necessary components for:
  - Output parsing (`PydanticOutputParser`)
  - Prompt templating (`ChatPromptTemplate`)
  - LLM interaction (`ChatOpenAI`)
  - Data validation (Pydantic)
2. Model Initialization
```python
chat_model = ChatOpenAI(model="gpt-4o-mini", openai_api_key=os.getenv("OPENAI_API_KEY"))
```
- Initializes the chat model with your specific model choice
- Uses the API key from environment variables
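
As an aside (not part of the original snippet), `langchain_openai` also reads `OPENAI_API_KEY` from the environment on its own, so the explicit argument can usually be omitted:

```python
# Equivalent in most setups: ChatOpenAI falls back to the
# OPENAI_API_KEY environment variable when no key is passed.
chat_model = ChatOpenAI(model="gpt-4o-mini")
```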
3. Data Structure Definition
```python
class Joke(BaseModel):
    setup: str = Field(description="setup of the joke")
    punchline: str = Field(description="punchline or answer to the joke")
    explanation: Optional[str] = Field(default=None, description="explanation of why the joke is funny")

    @model_validator(mode="after")
    def question_ends_with_question_mark(self) -> "Joke":
        if not self.setup.endswith("?"):
            self.setup = self.setup + "?"
        return self
```
- Defines a Pydantic model `Joke` that specifies the expected structure of the output
- Each field has a description that helps guide the LLM
- The validator runs after construction and ensures the setup ends with a question mark (see the quick check below)
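
To see the validator in action, here is a quick check that is not part of the original gist; it constructs a `Joke` whose setup is missing its question mark:

```python
# The model_validator repairs the setup after construction.
joke = Joke(
    setup="Why did the scarecrow win an award",
    punchline="Because he was outstanding in his field!",
)
print(joke.setup)  # -> "Why did the scarecrow win an award?"
```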
4. Parser Setup
```python
parser = PydanticOutputParser(pydantic_object=Joke)
```
- Creates a parser that will enforce the `Joke` structure on the LLM's output
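
It can be instructive to print what this parser will later inject into the prompt. The exact wording varies between `langchain-core` versions, but it embeds the JSON schema derived from `Joke`:

```python
# Preview the format instructions the parser generates from the
# Joke model's fields and descriptions (wording varies by version).
print(parser.get_format_instructions())
```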
5. Prompt Template
```python
prompt = ChatPromptTemplate.from_template(
    "Tell me a joke.\n{format_instructions}\n{query}"
)
```
- Creates a template for the prompt
- `{format_instructions}` will be filled with instructions for the LLM about the required output format
- `{query}` is the actual request
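
A common variant, not used in this gist, is to pre-fill the static variable once with `.partial()` so that each later call only has to supply the query:

```python
# Hypothetical refactor: bake the format instructions into the template.
prompt_with_format = prompt.partial(
    format_instructions=parser.get_format_instructions()
)
messages = prompt_with_format.format_messages(query="Tell me a joke")
```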
6. Message Formatting
```python
messages = prompt.format_messages(
    query="Tell me a joke",
    format_instructions=parser.get_format_instructions()
)
```
- Formats the prompt with:
  - The actual query
  - Format instructions generated by the parser
7. Model Invocation and Parsing
```python
response = chat_model.invoke(messages)
parsed_joke = parser.invoke(response.content)
print(parsed_joke)
```
- Sends the formatted messages to the LLM
- Parses the response into the `Joke` structure
- Prints the structured output
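
The same three steps can also be composed as a single LCEL chain. This sketch is equivalent to the code above and assumes the `prompt`, `chat_model`, and `parser` defined earlier:

```python
# prompt -> model -> parser as one runnable; the parser accepts the
# model's message output directly, so .content is not needed here.
chain = prompt | chat_model | parser
parsed_joke = chain.invoke({
    "query": "Tell me a joke",
    "format_instructions": parser.get_format_instructions(),
})
print(parsed_joke)
```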
Example Output
When you run this code, you'll get a structured output like:
```python
Joke(
    setup="Why did the scarecrow win an award?",
    punchline="Because he was outstanding in his field!",
    explanation="The joke plays on the double meaning of 'outstanding' - both as exceptional and literally standing out in a field, which is where scarecrows are typically found."
)
```
Key Concepts
- Structured Output: Using Pydantic to define exactly what we expect from the LLM
- Output Parsing: Converting free-form LLM responses into structured data
- Validation: Ensuring the output meets our requirements
- Prompt Engineering: Using templates and format instructions to guide the LLM
This pattern is particularly useful when you need consistent, structured outputs from an LLM that you can reliably process in your application.
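
Recent versions of `langchain-openai` also expose a shortcut that delegates the formatting to the model's tool-calling support. A minimal sketch, assuming `with_structured_output` is available for your model and version:

```python
# Structured output via tool/function calling; no format instructions
# or text parser are needed in this variant.
structured_model = chat_model.with_structured_output(Joke)
joke = structured_model.invoke("Tell me a joke")
print(joke.setup)
```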
A second snippet in the gist shows the same pattern with Mistral's chat model:

```python
import os
from dotenv import load_dotenv

load_dotenv()

from langchain_core.messages import HumanMessage, SystemMessage
from langchain_mistralai.chat_models import ChatMistralAI

# If mistral_api_key is not passed, the default behavior is to use the
# `MISTRAL_API_KEY` environment variable.
chat = ChatMistralAI(
    model="mistral-small-2501",
    temperature=0.7,
    max_tokens=128,
)

messages = [HumanMessage(content='Where does "hello world" come from?')]
response = chat.invoke(messages)
print(response.content)
```
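
The snippet imports `SystemMessage` but never uses it. A minimal sketch of how it could steer the reply, with a made-up persona string for illustration:

```python
# Hypothetical use of the imported SystemMessage to set a persona
# before asking the same question.
messages = [
    SystemMessage(content="You are a concise computing historian."),
    HumanMessage(content='Where does "hello world" come from?'),
]
print(chat.invoke(messages).content)
```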