#llm Structured Output and JSON mode
  • Quite powerful, as it simplifies writing the system prompt.

  • Often needs an extra step of post-processing, which can be done by an LLM or by ordinary code (see the sketch after this list).

  • Structured output vs. JSON mode:

    • Structured output: the schema is 100% guaranteed; the schema doesn't consume tokens.
    • JSON mode: the schema usually goes into the system prompt, costing tokens, and neither valid JSON nor schema correctness is guaranteed.
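Because JSON mode gives no guarantees, the post-processing step mentioned above typically strips any markdown fences, parses the JSON, and validates it. A minimal sketch, assuming Pydantic for validation (the `Person` model and the regex heuristic are illustrative, not from the original notes):

```python
import json
import re

from pydantic import BaseModel, ValidationError


class Person(BaseModel):
    name: str
    age: int
    hobbies: list[str]


def parse_json_reply(text: str) -> Person:
    """Strip surrounding text/fences, parse the JSON, and validate against the schema."""
    # JSON mode doesn't guarantee a bare JSON object; models sometimes wrap it in ```json fences.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    data = json.loads(match.group(0))
    return Person(**data)  # raises ValidationError if the schema is violated


# Hypothetical reply wrapped in a code fence
reply = '```json\n{"name": "Ada", "age": 36, "hobbies": ["chess"]}\n```'
try:
    person = parse_json_reply(reply)
except (ValueError, json.JSONDecodeError, ValidationError) as err:
    # Fall back to a retry or an LLM-based repair step here.
    print(f"post-processing failed: {err}")
```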
OpenAI structured output example:

```python
import json

from openai import OpenAI
from pydantic import BaseModel


class Recipe(BaseModel):
    ingredients: list[str]
    instructions: str


client = OpenAI()

# Pass the Pydantic model as response_format; the schema is enforced by the API
completion = client.beta.chat.completions.parse(
    model="gpt-4o-2024-08-06",
    messages=[{"role": "user", "content": "Write an apple pie recipe"}],
    response_format=Recipe,
)

apple_pie_recipe = Recipe(**json.loads(completion.choices[0].message.content))
```
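With the beta `parse` helper, recent versions of the openai Python SDK also expose the validated model directly as `completion.choices[0].message.parsed`, which avoids the manual `json.loads` step.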
Anthropic JSON mode example (schema placed in the prompt):

```python
import json

import anthropic

client = anthropic.Anthropic(api_key="your_api_key_here")

# Define your JSON schema
json_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer"},
        "hobbies": {
            "type": "array",
            "items": {"type": "string"},
        },
    },
    "required": ["name", "age", "hobbies"],
}

# Serialize the schema as JSON (str() would give a Python dict repr with single quotes)
schema_str = json.dumps(json_schema, indent=2)

# Create the prompt with the schema
prompt = f"""
Generate a JSON object that adheres to the following schema:

{schema_str}

Ensure that the output is a valid JSON object matching this schema.
"""

# Make the API call
response = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1000,
    system="Always respond in valid JSON format.",
    messages=[
        {"role": "user", "content": prompt},
    ],
)

# The reply is a list of content blocks; the JSON text is in the first block
print(response.content[0].text)
```
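Since JSON mode guarantees neither valid JSON nor schema correctness, the reply is worth checking against the schema before use. A minimal sketch, assuming the `jsonschema` package (any validator would do):

```python
import json

from jsonschema import ValidationError, validate

# Pull the text out of the first content block of the response above
raw_text = response.content[0].text

try:
    data = json.loads(raw_text)
    validate(instance=data, schema=json_schema)  # raises ValidationError on mismatch
except (json.JSONDecodeError, ValidationError) as err:
    # Retry, or hand the broken output back to the model for repair.
    print(f"invalid JSON from the model: {err}")
```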