OpenAI OpenAPI YAML fixed for building a Java client
This file has been truncated.
openapi: 3.0.0
info:
title: OpenAI API
description: The OpenAI REST API. Please see
https://platform.openai.com/docs/api-reference for more details.
version: 2.3.0
termsOfService: https://openai.com/policies/terms-of-use
contact:
name: OpenAI Support
url: https://help.openai.com/
license:
name: MIT
url: https://github.com/openai/openai-openapi/blob/master/LICENSE
servers:
- url: https://api.openai.com/v1
security:
- ApiKeyAuth: []
tags:
- name: Assistants
description: Build Assistants that can call models and use tools.
- name: Audio
description: Turn audio into text or text into audio.
- name: Chat
description: Given a list of messages comprising a conversation, the model will
return a response.
- name: Completions
description: Given a prompt, the model will return one or more predicted
completions, and can also return the probabilities of alternative tokens
at each position.
- name: Embeddings
description: Get a vector representation of a given input that can be easily
consumed by machine learning models and algorithms.
- name: Evals
description: Manage and run evals in the OpenAI platform.
- name: Fine-tuning
description: Manage fine-tuning jobs to tailor a model to your specific training data.
- name: Batch
description: Create large batches of API requests to run asynchronously.
- name: Files
description: Files are used to upload documents that can be used with features
like Assistants and Fine-tuning.
- name: Uploads
description: Use Uploads to upload large files in multiple parts.
- name: Images
description: Given a prompt and/or an input image, the model will generate a new image.
- name: Models
description: List and describe the various models available in the API.
- name: Moderations
description: Given text and/or image inputs, classifies if those inputs are
potentially harmful.
- name: Audit Logs
description: List user actions and configuration changes within this organization.
paths:
/assistants:
get:
operationId: listAssistants
tags:
- Assistants
summary: Returns a list of assistants.
parameters:
- name: limit
in: query
description: >
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
required: false
schema:
type: integer
default: 20
- name: order
in: query
description: >
Sort order by the `created_at` timestamp of the objects. `asc` for
ascending order and `desc` for descending order.
schema:
type: string
default: desc
enum:
- asc
- desc
- name: after
in: query
description: >
A cursor for use in pagination. `after` is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
schema:
type: string
- name: before
in: query
description: >
A cursor for use in pagination. `before` is an object ID that
defines your place in the list. For instance, if you make a list
request and receive 100 objects, starting with obj_foo, your
subsequent call can include before=obj_foo in order to fetch the
previous page of the list.
schema:
type: string
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ListAssistantsResponse"
x-oaiMeta:
name: List assistants
group: assistants
beta: true
returns: A list of [assistant](/docs/api-reference/assistants/object) objects.
examples:
request:
curl: |
curl "https://api.openai.com/v1/assistants?order=desc&limit=20" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "OpenAI-Beta: assistants=v2"
python: |
from openai import OpenAI
client = OpenAI()
my_assistants = client.beta.assistants.list(
order="desc",
limit=20,
)
print(my_assistants.data)
node.js: |-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const myAssistants = await openai.beta.assistants.list({
order: "desc",
limit: 20,
});
console.log(myAssistants.data);
}
main();
response: >
{
"object": "list",
"data": [
{
"id": "asst_abc123",
"object": "assistant",
"created_at": 1698982736,
"name": "Coding Tutor",
"description": null,
"model": "gpt-4o",
"instructions": "You are a helpful assistant designed to make me better at coding!",
"tools": [],
"tool_resources": {},
"metadata": {},
"top_p": 1.0,
"temperature": 1.0,
"response_format": "auto"
},
{
"id": "asst_abc456",
"object": "assistant",
"created_at": 1698982718,
"name": "My Assistant",
"description": null,
"model": "gpt-4o",
"instructions": "You are a helpful assistant designed to make me better at coding!",
"tools": [],
"tool_resources": {},
"metadata": {},
"top_p": 1.0,
"temperature": 1.0,
"response_format": "auto"
},
{
"id": "asst_abc789",
"object": "assistant",
"created_at": 1698982643,
"name": null,
"description": null,
"model": "gpt-4o",
"instructions": null,
"tools": [],
"tool_resources": {},
"metadata": {},
"top_p": 1.0,
"temperature": 1.0,
"response_format": "auto"
}
],
"first_id": "asst_abc123",
"last_id": "asst_abc789",
"has_more": false
}
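The `after`/`before` cursor parameters above drive pagination. As an illustrative sketch (not part of the spec itself), walking every page looks like this; `fetch_page` is a hypothetical stand-in for the actual `GET /v1/assistants` call:

```python
def list_all(fetch_page, limit=20):
    """Collect every object from a cursor-paginated list endpoint.

    fetch_page(limit=..., after=...) must return a dict shaped like
    ListAssistantsResponse: {"data": [...], "last_id": "...", "has_more": bool}.
    """
    items, after = [], None
    while True:
        page = fetch_page(limit=limit, after=after)
        items.extend(page["data"])
        if not page["has_more"]:
            return items
        # The last object's ID becomes the `after` cursor for the next page.
        after = page["last_id"]
```

Each response's `last_id` feeds the next request's `after` parameter; `has_more: false` ends the loop.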
post:
operationId: createAssistant
tags:
- Assistants
summary: Create an assistant with a model and instructions.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CreateAssistantRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/AssistantObject"
x-oaiMeta:
name: Create assistant
group: assistants
beta: true
returns: An [assistant](/docs/api-reference/assistants/object) object.
examples:
- title: Code Interpreter
request:
curl: >
curl "https://api.openai.com/v1/assistants" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"instructions": "You are a personal math tutor. When asked a question, write and run Python code to answer the question.",
"name": "Math Tutor",
"tools": [{"type": "code_interpreter"}],
"model": "gpt-4o"
}'
python: >
from openai import OpenAI
client = OpenAI()
my_assistant = client.beta.assistants.create(
instructions="You are a personal math tutor. When asked a question, write and run Python code to answer the question.",
name="Math Tutor",
tools=[{"type": "code_interpreter"}],
model="gpt-4o",
)
print(my_assistant)
node.js: >-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const myAssistant = await openai.beta.assistants.create({
instructions:
"You are a personal math tutor. When asked a question, write and run Python code to answer the question.",
name: "Math Tutor",
tools: [{ type: "code_interpreter" }],
model: "gpt-4o",
});
console.log(myAssistant);
}
main();
response: >
{
"id": "asst_abc123",
"object": "assistant",
"created_at": 1698984975,
"name": "Math Tutor",
"description": null,
"model": "gpt-4o",
"instructions": "You are a personal math tutor. When asked a question, write and run Python code to answer the question.",
"tools": [
{
"type": "code_interpreter"
}
],
"metadata": {},
"top_p": 1.0,
"temperature": 1.0,
"response_format": "auto"
}
- title: Files
request:
curl: >
curl https://api.openai.com/v1/assistants \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"instructions": "You are an HR bot, and you have access to files to answer employee questions about company policies.",
"tools": [{"type": "file_search"}],
"tool_resources": {"file_search": {"vector_store_ids": ["vs_123"]}},
"model": "gpt-4o"
}'
python: >
from openai import OpenAI
client = OpenAI()
my_assistant = client.beta.assistants.create(
instructions="You are an HR bot, and you have access to files to answer employee questions about company policies.",
name="HR Helper",
tools=[{"type": "file_search"}],
tool_resources={"file_search": {"vector_store_ids": ["vs_123"]}},
model="gpt-4o"
)
print(my_assistant)
node.js: >-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const myAssistant = await openai.beta.assistants.create({
instructions:
"You are an HR bot, and you have access to files to answer employee questions about company policies.",
name: "HR Helper",
tools: [{ type: "file_search" }],
tool_resources: {
file_search: {
vector_store_ids: ["vs_123"]
}
},
model: "gpt-4o"
});
console.log(myAssistant);
}
main();
response: >
{
"id": "asst_abc123",
"object": "assistant",
"created_at": 1699009403,
"name": "HR Helper",
"description": null,
"model": "gpt-4o",
"instructions": "You are an HR bot, and you have access to files to answer employee questions about company policies.",
"tools": [
{
"type": "file_search"
}
],
"tool_resources": {
"file_search": {
"vector_store_ids": ["vs_123"]
}
},
"metadata": {},
"top_p": 1.0,
"temperature": 1.0,
"response_format": "auto"
}
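Outside the SDKs, the create call above is a plain JSON POST. A minimal sketch that only builds the request (the helper name is ours, not the API's); sending it would require a real API key:

```python
import json
import urllib.request

def build_create_assistant_request(api_key, model, instructions, tools=()):
    """Build (but do not send) the POST /v1/assistants request shown in
    the curl example, including the required OpenAI-Beta header."""
    body = {"model": model, "instructions": instructions, "tools": list(tools)}
    return urllib.request.Request(
        "https://api.openai.com/v1/assistants",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
            "OpenAI-Beta": "assistants=v2",
        },
        method="POST",
    )
```

Passing the result to `urllib.request.urlopen` would perform the call and return the `AssistantObject` JSON.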
/assistants/{assistant_id}:
get:
operationId: getAssistant
tags:
- Assistants
summary: Retrieves an assistant.
parameters:
- in: path
name: assistant_id
required: true
schema:
type: string
description: The ID of the assistant to retrieve.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/AssistantObject"
x-oaiMeta:
name: Retrieve assistant
group: assistants
beta: true
returns: The [assistant](/docs/api-reference/assistants/object) object matching
the specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/assistants/asst_abc123 \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "OpenAI-Beta: assistants=v2"
python: |
from openai import OpenAI
client = OpenAI()
my_assistant = client.beta.assistants.retrieve("asst_abc123")
print(my_assistant)
node.js: |-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const myAssistant = await openai.beta.assistants.retrieve(
"asst_abc123"
);
console.log(myAssistant);
}
main();
response: >
{
"id": "asst_abc123",
"object": "assistant",
"created_at": 1699009709,
"name": "HR Helper",
"description": null,
"model": "gpt-4o",
"instructions": "You are an HR bot, and you have access to files to answer employee questions about company policies.",
"tools": [
{
"type": "file_search"
}
],
"metadata": {},
"top_p": 1.0,
"temperature": 1.0,
"response_format": "auto"
}
post:
operationId: modifyAssistant
tags:
- Assistants
summary: Modifies an assistant.
parameters:
- in: path
name: assistant_id
required: true
schema:
type: string
description: The ID of the assistant to modify.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/ModifyAssistantRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/AssistantObject"
x-oaiMeta:
name: Modify assistant
group: assistants
beta: true
returns: The modified [assistant](/docs/api-reference/assistants/object) object.
examples:
request:
curl: >
curl https://api.openai.com/v1/assistants/asst_abc123 \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"instructions": "You are an HR bot, and you have access to files to answer employee questions about company policies. Always respond with info from either of the files.",
"tools": [{"type": "file_search"}],
"model": "gpt-4o"
}'
python: >
from openai import OpenAI
client = OpenAI()
my_updated_assistant = client.beta.assistants.update(
"asst_abc123",
instructions="You are an HR bot, and you have access to files to answer employee questions about company policies. Always respond with info from either of the files.",
name="HR Helper",
tools=[{"type": "file_search"}],
model="gpt-4o"
)
print(my_updated_assistant)
node.js: >-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const myUpdatedAssistant = await openai.beta.assistants.update(
"asst_abc123",
{
instructions:
"You are an HR bot, and you have access to files to answer employee questions about company policies. Always respond with info from either of the files.",
name: "HR Helper",
tools: [{ type: "file_search" }],
model: "gpt-4o"
}
);
console.log(myUpdatedAssistant);
}
main();
response: >
{
"id": "asst_123",
"object": "assistant",
"created_at": 1699009709,
"name": "HR Helper",
"description": null,
"model": "gpt-4o",
"instructions": "You are an HR bot, and you have access to files to answer employee questions about company policies. Always respond with info from either of the files.",
"tools": [
{
"type": "file_search"
}
],
"tool_resources": {
"file_search": {
"vector_store_ids": []
}
},
"metadata": {},
"top_p": 1.0,
"temperature": 1.0,
"response_format": "auto"
}
delete:
operationId: deleteAssistant
tags:
- Assistants
summary: Delete an assistant.
parameters:
- in: path
name: assistant_id
required: true
schema:
type: string
description: The ID of the assistant to delete.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/DeleteAssistantResponse"
x-oaiMeta:
name: Delete assistant
group: assistants
beta: true
returns: Deletion status
examples:
request:
curl: |
curl https://api.openai.com/v1/assistants/asst_abc123 \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "OpenAI-Beta: assistants=v2" \
-X DELETE
python: |
from openai import OpenAI
client = OpenAI()
response = client.beta.assistants.delete("asst_abc123")
print(response)
node.js: >-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const response = await openai.beta.assistants.del("asst_abc123");
console.log(response);
}
main();
response: |
{
"id": "asst_abc123",
"object": "assistant.deleted",
"deleted": true
}
/audio/speech:
post:
operationId: createSpeech
tags:
- Audio
summary: Generates audio from the input text.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CreateSpeechRequest"
responses:
"200":
description: OK
headers:
Transfer-Encoding:
schema:
type: string
description: chunked
content:
application/octet-stream:
schema:
type: string
format: binary
x-oaiMeta:
name: Create speech
group: audio
returns: The audio file content.
examples:
request:
curl: |
curl https://api.openai.com/v1/audio/speech \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o-mini-tts",
"input": "The quick brown fox jumped over the lazy dog.",
"voice": "alloy"
}' \
--output speech.mp3
python: |
from pathlib import Path
import openai
speech_file_path = Path(__file__).parent / "speech.mp3"
with openai.audio.speech.with_streaming_response.create(
model="gpt-4o-mini-tts",
voice="alloy",
input="The quick brown fox jumped over the lazy dog."
) as response:
response.stream_to_file(speech_file_path)
javascript: >
import fs from "fs";
import path from "path";
import OpenAI from "openai";
const openai = new OpenAI();
const speechFile = path.resolve("./speech.mp3");
async function main() {
const mp3 = await openai.audio.speech.create({
model: "gpt-4o-mini-tts",
voice: "alloy",
input: "Today is a wonderful day to build something people love!",
});
console.log(speechFile);
const buffer = Buffer.from(await mp3.arrayBuffer());
await fs.promises.writeFile(speechFile, buffer);
}
main();
csharp: |
using System;
using System.IO;
using OpenAI.Audio;
AudioClient client = new(
model: "gpt-4o-mini-tts",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
BinaryData speech = client.GenerateSpeech(
text: "The quick brown fox jumped over the lazy dog.",
voice: GeneratedSpeechVoice.Alloy
);
using FileStream stream = File.OpenWrite("speech.mp3");
speech.ToStream().CopyTo(stream);
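Because `/audio/speech` responds with chunked binary audio (note the `Transfer-Encoding` header above), clients should write chunks as they arrive rather than buffering the whole body. A small illustrative helper, where the chunk iterable stands in for the streamed HTTP response:

```python
def save_audio_stream(chunks, path):
    """Write streamed audio chunks to disk incrementally, mirroring what
    with_streaming_response / stream_to_file do in the Python example."""
    total = 0
    with open(path, "wb") as f:
        for chunk in chunks:
            f.write(chunk)  # each chunk lands on disk before the stream ends
            total += len(chunk)
    return total  # number of bytes written
```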
/audio/transcriptions:
post:
operationId: createTranscription
tags:
- Audio
summary: Transcribes audio into the input language.
requestBody:
required: true
content:
multipart/form-data:
schema:
$ref: "#/components/schemas/CreateTranscriptionRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
oneOf:
- $ref: "#/components/schemas/CreateTranscriptionResponseJson"
- $ref: "#/components/schemas/CreateTranscriptionResponseVerboseJson"
text/event-stream:
schema:
$ref: "#/components/schemas/CreateTranscriptionResponseStreamEvent"
x-oaiMeta:
name: Create transcription
group: audio
returns: The [transcription object](/docs/api-reference/audio/json-object), a
[verbose transcription
object](/docs/api-reference/audio/verbose-json-object) or a [stream of
transcript
events](/docs/api-reference/audio/transcript-text-delta-event).
examples:
- title: Default
request:
curl: |
curl https://api.openai.com/v1/audio/transcriptions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: multipart/form-data" \
-F file="@/path/to/file/audio.mp3" \
-F model="gpt-4o-transcribe"
python: |
from openai import OpenAI
client = OpenAI()
audio_file = open("speech.mp3", "rb")
transcript = client.audio.transcriptions.create(
model="gpt-4o-transcribe",
file=audio_file
)
javascript: >
import fs from "fs";
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const transcription = await openai.audio.transcriptions.create({
file: fs.createReadStream("audio.mp3"),
model: "gpt-4o-transcribe",
});
console.log(transcription.text);
}
main();
csharp: >
using System;
using OpenAI.Audio;
string audioFilePath = "audio.mp3";
AudioClient client = new(
model: "gpt-4o-transcribe",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
AudioTranscription transcription =
client.TranscribeAudio(audioFilePath);
Console.WriteLine($"{transcription.Text}");
response: >
{
"text": "Imagine the wildest idea that you've ever had, and you're curious about how it might scale to something that's a 100, a 1,000 times bigger. This is a place where you can get to do that."
}
- title: Streaming
request:
curl: |
curl https://api.openai.com/v1/audio/transcriptions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: multipart/form-data" \
-F file="@/path/to/file/audio.mp3" \
-F model="gpt-4o-mini-transcribe" \
-F stream=true
python: |
from openai import OpenAI
client = OpenAI()
audio_file = open("speech.mp3", "rb")
stream = client.audio.transcriptions.create(
file=audio_file,
model="gpt-4o-mini-transcribe",
stream=True
)
for event in stream:
print(event)
javascript: |
import fs from "fs";
import OpenAI from "openai";
const openai = new OpenAI();
const stream = await openai.audio.transcriptions.create({
file: fs.createReadStream("audio.mp3"),
model: "gpt-4o-mini-transcribe",
stream: true,
});
for await (const event of stream) {
console.log(event);
}
response: |
data: {"type":"transcript.text.delta","delta":"I","logprobs":[{"token":"I","logprob":-0.00007588794,"bytes":[73]}]}
data: {"type":"transcript.text.delta","delta":" see","logprobs":[{"token":" see","logprob":-3.1281633e-7,"bytes":[32,115,101,101]}]}
data: {"type":"transcript.text.delta","delta":" skies","logprobs":[{"token":" skies","logprob":-2.3392786e-6,"bytes":[32,115,107,105,101,115]}]}
data: {"type":"transcript.text.delta","delta":" of","logprobs":[{"token":" of","logprob":-3.1281633e-7,"bytes":[32,111,102]}]}
data: {"type":"transcript.text.delta","delta":" blue","logprobs":[{"token":" blue","logprob":-1.0280384e-6,"bytes":[32,98,108,117,101]}]}
data: {"type":"transcript.text.delta","delta":" and","logprobs":[{"token":" and","logprob":-0.0005108566,"bytes":[32,97,110,100]}]}
data: {"type":"transcript.text.delta","delta":" clouds","logprobs":[{"token":" clouds","logprob":-1.9361265e-7,"bytes":[32,99,108,111,117,100,115]}]}
data: {"type":"transcript.text.delta","delta":" of","logprobs":[{"token":" of","logprob":-1.9361265e-7,"bytes":[32,111,102]}]}
data: {"type":"transcript.text.delta","delta":" white","logprobs":[{"token":" white","logprob":-7.89631e-7,"bytes":[32,119,104,105,116,101]}]}
data: {"type":"transcript.text.delta","delta":",","logprobs":[{"token":",","logprob":-0.0014890312,"bytes":[44]}]}
data: {"type":"transcript.text.delta","delta":" the","logprobs":[{"token":" the","logprob":-0.0110956915,"bytes":[32,116,104,101]}]}
data: {"type":"transcript.text.delta","delta":" bright","logprobs":[{"token":" bright","logprob":0.0,"bytes":[32,98,114,105,103,104,116]}]}
data: {"type":"transcript.text.delta","delta":" blessed","logprobs":[{"token":" blessed","logprob":-0.000045848617,"bytes":[32,98,108,101,115,115,101,100]}]}
data: {"type":"transcript.text.delta","delta":" days","logprobs":[{"token":" days","logprob":-0.000010802739,"bytes":[32,100,97,121,115]}]}
data: {"type":"transcript.text.delta","delta":",","logprobs":[{"token":",","logprob":-0.00001700133,"bytes":[44]}]}
data: {"type":"transcript.text.delta","delta":" the","logprobs":[{"token":" the","logprob":-0.0000118755715,"bytes":[32,116,104,101]}]}
data: {"type":"transcript.text.delta","delta":" dark","logprobs":[{"token":" dark","logprob":-5.5122365e-7,"bytes":[32,100,97,114,107]}]}
data: {"type":"transcript.text.delta","delta":" sacred","logprobs":[{"token":" sacred","logprob":-5.4385737e-6,"bytes":[32,115,97,99,114,101,100]}]}
data: {"type":"transcript.text.delta","delta":" nights","logprobs":[{"token":" nights","logprob":-4.00813e-6,"bytes":[32,110,105,103,104,116,115]}]}
data: {"type":"transcript.text.delta","delta":",","logprobs":[{"token":",","logprob":-0.0036910512,"bytes":[44]}]}
data: {"type":"transcript.text.delta","delta":" and","logprobs":[{"token":" and","logprob":-0.0031903093,"bytes":[32,97,110,100]}]}
data: {"type":"transcript.text.delta","delta":" I","logprobs":[{"token":" I","logprob":-1.504853e-6,"bytes":[32,73]}]}
data: {"type":"transcript.text.delta","delta":" think","logprobs":[{"token":" think","logprob":-4.3202e-7,"bytes":[32,116,104,105,110,107]}]}
data: {"type":"transcript.text.delta","delta":" to","logprobs":[{"token":" to","logprob":-1.9361265e-7,"bytes":[32,116,111]}]}
data: {"type":"transcript.text.delta","delta":" myself","logprobs":[{"token":" myself","logprob":-1.7432603e-6,"bytes":[32,109,121,115,101,108,102]}]}
data: {"type":"transcript.text.delta","delta":",","logprobs":[{"token":",","logprob":-0.29254505,"bytes":[44]}]}
data: {"type":"transcript.text.delta","delta":" what","logprobs":[{"token":" what","logprob":-0.016815351,"bytes":[32,119,104,97,116]}]}
data: {"type":"transcript.text.delta","delta":" a","logprobs":[{"token":" a","logprob":-3.1281633e-7,"bytes":[32,97]}]}
data: {"type":"transcript.text.delta","delta":" wonderful","logprobs":[{"token":" wonderful","logprob":-2.1008714e-6,"bytes":[32,119,111,110,100,101,114,102,117,108]}]}
data: {"type":"transcript.text.delta","delta":" world","logprobs":[{"token":" world","logprob":-8.180258e-6,"bytes":[32,119,111,114,108,100]}]}
data: {"type":"transcript.text.delta","delta":".","logprobs":[{"token":".","logprob":-0.014231676,"bytes":[46]}]}
data: {"type":"transcript.text.done","text":"I see skies of blue and clouds of white, the bright blessed days, the dark sacred nights, and I think to myself, what a wonderful world.","logprobs":[{"token":"I","logprob":-0.00007588794,"bytes":[73]},{"token":" see","logprob":-3.1281633e-7,"bytes":[32,115,101,101]},{"token":" skies","logprob":-2.3392786e-6,"bytes":[32,115,107,105,101,115]},{"token":" of","logprob":-3.1281633e-7,"bytes":[32,111,102]},{"token":" blue","logprob":-1.0280384e-6,"bytes":[32,98,108,117,101]},{"token":" and","logprob":-0.0005108566,"bytes":[32,97,110,100]},{"token":" clouds","logprob":-1.9361265e-7,"bytes":[32,99,108,111,117,100,115]},{"token":" of","logprob":-1.9361265e-7,"bytes":[32,111,102]},{"token":" white","logprob":-7.89631e-7,"bytes":[32,119,104,105,116,101]},{"token":",","logprob":-0.0014890312,"bytes":[44]},{"token":" the","logprob":-0.0110956915,"bytes":[32,116,104,101]},{"token":" bright","logprob":0.0,"bytes":[32,98,114,105,103,104,116]},{"token":" blessed","logprob":-0.000045848617,"bytes":[32,98,108,101,115,115,101,100]},{"token":" days","logprob":-0.000010802739,"bytes":[32,100,97,121,115]},{"token":",","logprob":-0.00001700133,"bytes":[44]},{"token":" the","logprob":-0.0000118755715,"bytes":[32,116,104,101]},{"token":" dark","logprob":-5.5122365e-7,"bytes":[32,100,97,114,107]},{"token":" sacred","logprob":-5.4385737e-6,"bytes":[32,115,97,99,114,101,100]},{"token":" nights","logprob":-4.00813e-6,"bytes":[32,110,105,103,104,116,115]},{"token":",","logprob":-0.0036910512,"bytes":[44]},{"token":" and","logprob":-0.0031903093,"bytes":[32,97,110,100]},{"token":" I","logprob":-1.504853e-6,"bytes":[32,73]},{"token":" think","logprob":-4.3202e-7,"bytes":[32,116,104,105,110,107]},{"token":" to","logprob":-1.9361265e-7,"bytes":[32,116,111]},{"token":" myself","logprob":-1.7432603e-6,"bytes":[32,109,121,115,101,108,102]},{"token":",","logprob":-0.29254505,"bytes":[44]},{"token":" what","logprob":-0.016815351,"bytes":[32,119,104,97,116]},{"token":" a","logprob":-3.1281633e-7,"bytes":[32,97]},{"token":" wonderful","logprob":-2.1008714e-6,"bytes":[32,119,111,110,100,101,114,102,117,108]},{"token":" world","logprob":-8.180258e-6,"bytes":[32,119,111,114,108,100]},{"token":".","logprob":-0.014231676,"bytes":[46]}]}
- title: Logprobs
request:
curl: |
curl https://api.openai.com/v1/audio/transcriptions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: multipart/form-data" \
-F file="@/path/to/file/audio.mp3" \
-F "include[]=logprobs" \
-F model="gpt-4o-transcribe" \
-F response_format="json"
python: |
from openai import OpenAI
client = OpenAI()
audio_file = open("speech.mp3", "rb")
transcript = client.audio.transcriptions.create(
file=audio_file,
model="gpt-4o-transcribe",
response_format="json",
include=["logprobs"]
)
print(transcript)
javascript: >
import fs from "fs";
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const transcription = await openai.audio.transcriptions.create({
file: fs.createReadStream("audio.mp3"),
model: "gpt-4o-transcribe",
response_format: "json",
include: ["logprobs"]
});
console.log(transcription);
}
main();
response: >
{
"text": "Hey, my knee is hurting and I want to see the doctor tomorrow ideally.",
"logprobs": [
{ "token": "Hey", "logprob": -1.0415299, "bytes": [72, 101, 121] },
{ "token": ",", "logprob": -9.805982e-5, "bytes": [44] },
{ "token": " my", "logprob": -0.00229799, "bytes": [32, 109, 121] },
{
"token": " knee",
"logprob": -4.7159858e-5,
"bytes": [32, 107, 110, 101, 101]
},
{ "token": " is", "logprob": -0.043909557, "bytes": [32, 105, 115] },
{
"token": " hurting",
"logprob": -1.1041146e-5,
"bytes": [32, 104, 117, 114, 116, 105, 110, 103]
},
{ "token": " and", "logprob": -0.011076359, "bytes": [32, 97, 110, 100] },
{ "token": " I", "logprob": -5.3193703e-6, "bytes": [32, 73] },
{
"token": " want",
"logprob": -0.0017156356,
"bytes": [32, 119, 97, 110, 116]
},
{ "token": " to", "logprob": -7.89631e-7, "bytes": [32, 116, 111] },
{ "token": " see", "logprob": -5.5122365e-7, "bytes": [32, 115, 101, 101] },
{ "token": " the", "logprob": -0.0040786397, "bytes": [32, 116, 104, 101] },
{
"token": " doctor",
"logprob": -2.3392786e-6,
"bytes": [32, 100, 111, 99, 116, 111, 114]
},
{
"token": " tomorrow",
"logprob": -7.89631e-7,
"bytes": [32, 116, 111, 109, 111, 114, 114, 111, 119]
},
{
"token": " ideally",
"logprob": -0.5800861,
"bytes": [32, 105, 100, 101, 97, 108, 108, 121]
},
{ "token": ".", "logprob": -0.00011093382, "bytes": [46] }
]
}
- title: Word timestamps
request:
curl: |
curl https://api.openai.com/v1/audio/transcriptions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: multipart/form-data" \
-F file="@/path/to/file/audio.mp3" \
-F "timestamp_granularities[]=word" \
-F model="whisper-1" \
-F response_format="verbose_json"
python: |
from openai import OpenAI
client = OpenAI()
audio_file = open("speech.mp3", "rb")
transcript = client.audio.transcriptions.create(
file=audio_file,
model="whisper-1",
response_format="verbose_json",
timestamp_granularities=["word"]
)
print(transcript.words)
javascript: >
import fs from "fs";
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const transcription = await openai.audio.transcriptions.create({
file: fs.createReadStream("audio.mp3"),
model: "whisper-1",
response_format: "verbose_json",
timestamp_granularities: ["word"]
});
console.log(transcription.text);
}
main();
csharp: >
using System;
using OpenAI.Audio;
string audioFilePath = "audio.mp3";
AudioClient client = new(
model: "whisper-1",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
AudioTranscriptionOptions options = new()
{
ResponseFormat = AudioTranscriptionFormat.Verbose,
TimestampGranularities = AudioTimestampGranularities.Word,
};
AudioTranscription transcription =
client.TranscribeAudio(audioFilePath, options);
Console.WriteLine($"{transcription.Text}");
response: >
{
"task": "transcribe",
"language": "english",
"duration": 8.470000267028809,
"text": "The beach was a popular spot on a hot summer day. People were swimming in the ocean, building sandcastles, and playing beach volleyball.",
"words": [
{
"word": "The",
"start": 0.0,
"end": 0.23999999463558197
},
...
{
"word": "volleyball",
"start": 7.400000095367432,
"end": 7.900000095367432
}
]
}
- title: Segment timestamps
request:
curl: |
curl https://api.openai.com/v1/audio/transcriptions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: multipart/form-data" \
-F file="@/path/to/file/audio.mp3" \
-F "timestamp_granularities[]=segment" \
-F model="whisper-1" \
-F response_format="verbose_json"
python: |
from openai import OpenAI
client = OpenAI()
audio_file = open("speech.mp3", "rb")
transcript = client.audio.transcriptions.create(
file=audio_file,
model="whisper-1",
response_format="verbose_json",
timestamp_granularities=["segment"]
)
print(transcript.segments)
javascript: >
import fs from "fs";
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const transcription = await openai.audio.transcriptions.create({
file: fs.createReadStream("audio.mp3"),
model: "whisper-1",
response_format: "verbose_json",
timestamp_granularities: ["segment"]
});
console.log(transcription.text);
}
main();
csharp: >
using System;
using OpenAI.Audio;
string audioFilePath = "audio.mp3";
AudioClient client = new(
model: "whisper-1",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
AudioTranscriptionOptions options = new()
{
ResponseFormat = AudioTranscriptionFormat.Verbose,
TimestampGranularities = AudioTimestampGranularities.Segment,
};
AudioTranscription transcription =
client.TranscribeAudio(audioFilePath, options);
Console.WriteLine($"{transcription.Text}");
response: >
{
"task": "transcribe",
"language": "english",
"duration": 8.470000267028809,
"text": "The beach was a popular spot on a hot summer day. People were swimming in the ocean, building sandcastles, and playing beach volleyball.",
"segments": [
{
"id": 0,
"seek": 0,
"start": 0.0,
"end": 3.319999933242798,
"text": " The beach was a popular spot on a hot summer day.",
"tokens": [
50364, 440, 7534, 390, 257, 3743, 4008, 322, 257, 2368, 4266, 786, 13, 50530
],
"temperature": 0.0,
"avg_logprob": -0.2860786020755768,
"compression_ratio": 1.2363636493682861,
"no_speech_prob": 0.00985979475080967
},
...
]
}
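When `stream=true`, the transcription arrives as `data:` lines like the Streaming example above. A sketch of reassembling the text client-side, assuming only the event shapes shown there:

```python
import json

def accumulate_transcript(sse_lines):
    """Rebuild streamed transcript text from transcript.text.delta events,
    preferring the full text carried by the final transcript.text.done."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data:"):
            continue  # skip blank keep-alives and SSE comments
        event = json.loads(line[len("data:"):])
        if event["type"] == "transcript.text.delta":
            parts.append(event["delta"])
        elif event["type"] == "transcript.text.done":
            return event["text"]
    return "".join(parts)  # stream ended without a done event
```

The `done` event repeats the complete text, so a client can rely on it when present and fall back to the concatenated deltas otherwise.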
/audio/translations:
post:
operationId: createTranslation
tags:
- Audio
summary: Translates audio into English.
requestBody:
required: true
content:
multipart/form-data:
schema:
$ref: "#/components/schemas/CreateTranslationRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
oneOf:
- $ref: "#/components/schemas/CreateTranslationResponseJson"
- $ref: "#/components/schemas/CreateTranslationResponseVerboseJson"
x-oaiMeta:
name: Create translation
group: audio
returns: The translated text.
examples:
request:
curl: |
curl https://api.openai.com/v1/audio/translations \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: multipart/form-data" \
-F file="@/path/to/file/german.m4a" \
-F model="whisper-1"
python: |
from openai import OpenAI
client = OpenAI()
audio_file = open("speech.mp3", "rb")
translation = client.audio.translations.create(
model="whisper-1",
file=audio_file
)
print(translation.text)
javascript: |
import fs from "fs";
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const translation = await openai.audio.translations.create({
file: fs.createReadStream("speech.mp3"),
model: "whisper-1",
});
console.log(translation.text);
}
main();
csharp: >
using System;
using OpenAI.Audio;
string audioFilePath = "audio.mp3";
AudioClient client = new(
model: "whisper-1",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
AudioTranslation translation =
client.TranslateAudio(audioFilePath);
Console.WriteLine(translation.Text);
response: >
{
"text": "Hello, my name is Wolfgang and I come from Germany. Where are you heading today?"
}
/batches:
post:
summary: Creates and executes a batch from an uploaded file of requests
operationId: createBatch
tags:
- Batch
requestBody:
required: true
content:
application/json:
schema:
type: object
required:
- input_file_id
- endpoint
- completion_window
properties:
input_file_id:
type: string
description: >
The ID of an uploaded file that contains requests for the
new batch.
See [upload file](/docs/api-reference/files/create) for how
to upload a file.
Your input file must be formatted as a [JSONL
file](/docs/api-reference/batch/request-input), and must be
uploaded with the purpose `batch`. The file can contain up
to 50,000 requests, and can be up to 200 MB in size.
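# Illustrative sketch (comment only, not part of the schema): one line of a
# `/v1/chat/completions` batch input JSONL file, using the documented
# `custom_id`/`method`/`url`/`body` fields:
# {"custom_id": "request-1", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Hello"}]}}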
endpoint:
type: string
enum:
- /v1/responses
- /v1/chat/completions
- /v1/embeddings
- /v1/completions
description: The endpoint to be used for all requests in the batch. Currently
`/v1/responses`, `/v1/chat/completions`, `/v1/embeddings`,
and `/v1/completions` are supported. Note that
`/v1/embeddings` batches are also restricted to a maximum of
50,000 embedding inputs across all requests in the batch.
completion_window:
type: string
enum:
- 24h
description: The time frame within which the batch should be processed.
Currently only `24h` is supported.
metadata:
$ref: "#/components/schemas/Metadata"
responses:
"200":
description: Batch created successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/Batch"
x-oaiMeta:
name: Create batch
group: batch
returns: The created [Batch](/docs/api-reference/batch/object) object.
examples:
request:
curl: |
curl https://api.openai.com/v1/batches \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"input_file_id": "file-abc123",
"endpoint": "/v1/chat/completions",
"completion_window": "24h"
}'
python: |
from openai import OpenAI
client = OpenAI()
client.batches.create(
input_file_id="file-abc123",
endpoint="/v1/chat/completions",
completion_window="24h"
)
node: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const batch = await openai.batches.create({
input_file_id: "file-abc123",
endpoint: "/v1/chat/completions",
completion_window: "24h"
});
console.log(batch);
}
main();
response: |
{
"id": "batch_abc123",
"object": "batch",
"endpoint": "/v1/chat/completions",
"errors": null,
"input_file_id": "file-abc123",
"completion_window": "24h",
"status": "validating",
"output_file_id": null,
"error_file_id": null,
"created_at": 1711471533,
"in_progress_at": null,
"expires_at": null,
"finalizing_at": null,
"completed_at": null,
"failed_at": null,
"expired_at": null,
"cancelling_at": null,
"cancelled_at": null,
"request_counts": {
"total": 0,
"completed": 0,
"failed": 0
},
"metadata": {
"customer_id": "user_123456789",
"batch_description": "Nightly eval job"
}
}
get:
operationId: listBatches
tags:
- Batch
summary: List your organization's batches.
parameters:
- in: query
name: after
required: false
schema:
type: string
description: >
A cursor for use in pagination. `after` is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
- name: limit
in: query
description: >
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
required: false
schema:
type: integer
default: 20
responses:
"200":
description: Batches listed successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ListBatchesResponse"
x-oaiMeta:
name: List batch
group: batch
returns: A paginated list of [Batch](/docs/api-reference/batch/object) objects.
examples:
request:
curl: |
curl https://api.openai.com/v1/batches?limit=2 \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json"
python: |
from openai import OpenAI
client = OpenAI()
client.batches.list()
node: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const list = await openai.batches.list();
for await (const batch of list) {
console.log(batch);
}
}
main();
response: |
{
"object": "list",
"data": [
{
"id": "batch_abc123",
"object": "batch",
"endpoint": "/v1/chat/completions",
"errors": null,
"input_file_id": "file-abc123",
"completion_window": "24h",
"status": "completed",
"output_file_id": "file-cvaTdG",
"error_file_id": "file-HOWS94",
"created_at": 1711471533,
"in_progress_at": 1711471538,
"expires_at": 1711557933,
"finalizing_at": 1711493133,
"completed_at": 1711493163,
"failed_at": null,
"expired_at": null,
"cancelling_at": null,
"cancelled_at": null,
"request_counts": {
"total": 100,
"completed": 95,
"failed": 5
},
"metadata": {
"customer_id": "user_123456789",
"batch_description": "Nightly job"
}
},
{ ... }
],
"first_id": "batch_abc123",
"last_id": "batch_abc456",
"has_more": true
}
/batches/{batch_id}:
get:
operationId: retrieveBatch
tags:
- Batch
summary: Retrieves a batch.
parameters:
- in: path
name: batch_id
required: true
schema:
type: string
description: The ID of the batch to retrieve.
responses:
"200":
description: Batch retrieved successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/Batch"
x-oaiMeta:
name: Retrieve batch
group: batch
returns: The [Batch](/docs/api-reference/batch/object) object matching the
specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/batches/batch_abc123 \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json"
python: |
from openai import OpenAI
client = OpenAI()
client.batches.retrieve("batch_abc123")
node: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const batch = await openai.batches.retrieve("batch_abc123");
console.log(batch);
}
main();
response: |
{
"id": "batch_abc123",
"object": "batch",
"endpoint": "/v1/completions",
"errors": null,
"input_file_id": "file-abc123",
"completion_window": "24h",
"status": "completed",
"output_file_id": "file-cvaTdG",
"error_file_id": "file-HOWS94",
"created_at": 1711471533,
"in_progress_at": 1711471538,
"expires_at": 1711557933,
"finalizing_at": 1711493133,
"completed_at": 1711493163,
"failed_at": null,
"expired_at": null,
"cancelling_at": null,
"cancelled_at": null,
"request_counts": {
"total": 100,
"completed": 95,
"failed": 5
},
"metadata": {
"customer_id": "user_123456789",
"batch_description": "Nightly eval job"
}
}
/batches/{batch_id}/cancel:
post:
operationId: cancelBatch
tags:
- Batch
summary: Cancels an in-progress batch. The batch will be in status `cancelling`
for up to 10 minutes, before changing to `cancelled`, where it will have
partial results (if any) available in the output file.
parameters:
- in: path
name: batch_id
required: true
schema:
type: string
description: The ID of the batch to cancel.
responses:
"200":
description: Batch is cancelling. Returns the cancelling batch's details.
content:
application/json:
schema:
$ref: "#/components/schemas/Batch"
x-oaiMeta:
name: Cancel batch
group: batch
returns: The [Batch](/docs/api-reference/batch/object) object matching the
specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/batches/batch_abc123/cancel \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-X POST
python: |
from openai import OpenAI
client = OpenAI()
client.batches.cancel("batch_abc123")
node: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const batch = await openai.batches.cancel("batch_abc123");
console.log(batch);
}
main();
response: |
{
"id": "batch_abc123",
"object": "batch",
"endpoint": "/v1/chat/completions",
"errors": null,
"input_file_id": "file-abc123",
"completion_window": "24h",
"status": "cancelling",
"output_file_id": null,
"error_file_id": null,
"created_at": 1711471533,
"in_progress_at": 1711471538,
"expires_at": 1711557933,
"finalizing_at": null,
"completed_at": null,
"failed_at": null,
"expired_at": null,
"cancelling_at": 1711475133,
"cancelled_at": null,
"request_counts": {
"total": 100,
"completed": 23,
"failed": 1
},
"metadata": {
"customer_id": "user_123456789",
"batch_description": "Nightly eval job"
}
}
/chat/completions:
get:
operationId: listChatCompletions
tags:
- Chat
summary: >
List stored Chat Completions. Only Chat Completions that have been
stored
with the `store` parameter set to `true` will be returned.
parameters:
- name: model
in: query
description: The model used to generate the Chat Completions.
required: false
schema:
type: string
- name: metadata
in: query
description: |
A list of metadata key-value pairs to filter the Chat Completions by. Example:
`metadata[key1]=value1&metadata[key2]=value2`
required: false
schema:
$ref: "#/components/schemas/Metadata"
- name: after
in: query
description: Identifier for the last chat completion from the previous
pagination request.
required: false
schema:
type: string
- name: limit
in: query
description: Number of Chat Completions to retrieve.
required: false
schema:
type: integer
default: 20
- name: order
in: query
description: Sort order for Chat Completions by timestamp. Use `asc` for
ascending order or `desc` for descending order. Defaults to `asc`.
required: false
schema:
type: string
enum:
- asc
- desc
default: asc
responses:
"200":
description: A list of Chat Completions
content:
application/json:
schema:
$ref: "#/components/schemas/ChatCompletionList"
x-oaiMeta:
name: List Chat Completions
group: chat
returns: A list of [Chat Completions](/docs/api-reference/chat/list-object)
matching the specified filters.
path: list
examples:
request:
curl: |
curl https://api.openai.com/v1/chat/completions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json"
python: |
from openai import OpenAI
client = OpenAI()
completions = client.chat.completions.list()
print(completions)
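node.js: |
// Sketch using the openai-node SDK; assumes a version that exposes
// chat.completions.list() for stored completions.
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const completions = await openai.chat.completions.list();
for await (const completion of completions) {
console.log(completion);
}
}
main();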
response: >
{
"object": "list",
"data": [
{
"object": "chat.completion",
"id": "chatcmpl-AyPNinnUqUDYo9SAdA52NobMflmj2",
"model": "gpt-4.1-2025-04-14",
"created": 1738960610,
"request_id": "req_ded8ab984ec4bf840f37566c1011c417",
"tool_choice": null,
"usage": {
"total_tokens": 31,
"completion_tokens": 18,
"prompt_tokens": 13
},
"seed": 4944116822809979520,
"top_p": 1.0,
"temperature": 1.0,
"presence_penalty": 0.0,
"frequency_penalty": 0.0,
"system_fingerprint": "fp_50cad350e4",
"input_user": null,
"service_tier": "default",
"tools": null,
"metadata": {},
"choices": [
{
"index": 0,
"message": {
"content": "Mind of circuits hum, \nLearning patterns in silence— \nFuture's quiet spark.",
"role": "assistant",
"tool_calls": null,
"function_call": null
},
"finish_reason": "stop",
"logprobs": null
}
],
"response_format": null
}
],
"first_id": "chatcmpl-AyPNinnUqUDYo9SAdA52NobMflmj2",
"last_id": "chatcmpl-AyPNinnUqUDYo9SAdA52NobMflmj2",
"has_more": false
}
post:
operationId: createChatCompletion
tags:
- Chat
summary: |
**Starting a new project?** We recommend trying [Responses](/docs/api-reference/responses)
to take advantage of the latest OpenAI platform features. Compare
[Chat Completions with Responses](/docs/guides/responses-vs-chat-completions?api-mode=responses).
---
Creates a model response for the given chat conversation. Learn more in the
[text generation](/docs/guides/text-generation), [vision](/docs/guides/vision),
and [audio](/docs/guides/audio) guides.
Parameter support can differ depending on the model used to generate the
response, particularly for newer reasoning models. Parameters that are only
supported for reasoning models are noted below. For the current state of
unsupported parameters in reasoning models,
[refer to the reasoning guide](/docs/guides/reasoning).
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CreateChatCompletionRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/CreateChatCompletionResponse"
text/event-stream:
schema:
$ref: "#/components/schemas/CreateChatCompletionStreamResponse"
x-oaiMeta:
name: Create chat completion
group: chat
returns: >
Returns a [chat completion](/docs/api-reference/chat/object) object,
or a streamed sequence of [chat completion
chunk](/docs/api-reference/chat/streaming) objects if the request is
streamed.
path: create
examples:
- title: Default
request:
curl: |
curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "VAR_chat_model_id",
"messages": [
{
"role": "developer",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
]
}'
python: >
from openai import OpenAI
client = OpenAI()
completion = client.chat.completions.create(
model="VAR_chat_model_id",
messages=[
{"role": "developer", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"}
]
)
print(completion.choices[0].message)
node.js: >
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const completion = await openai.chat.completions.create({
messages: [{ role: "developer", content: "You are a helpful assistant." }],
model: "VAR_chat_model_id",
store: true,
});
console.log(completion.choices[0]);
}
main();
csharp: |
using System;
using System.Collections.Generic;
using OpenAI.Chat;
ChatClient client = new(
model: "gpt-4.1",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
List<ChatMessage> messages =
[
new SystemChatMessage("You are a helpful assistant."),
new UserChatMessage("Hello!")
];
ChatCompletion completion = client.CompleteChat(messages);
Console.WriteLine(completion.Content[0].Text);
response: |
{
"id": "chatcmpl-B9MBs8CjcvOU2jLn4n570S5qMJKcT",
"object": "chat.completion",
"created": 1741569952,
"model": "gpt-4.1-2025-04-14",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! How can I assist you today?",
"refusal": null,
"annotations": []
},
"logprobs": null,
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 19,
"completion_tokens": 10,
"total_tokens": 29,
"prompt_tokens_details": {
"cached_tokens": 0,
"audio_tokens": 0
},
"completion_tokens_details": {
"reasoning_tokens": 0,
"audio_tokens": 0,
"accepted_prediction_tokens": 0,
"rejected_prediction_tokens": 0
}
},
"service_tier": "default"
}
- title: Image input
request:
curl: >
curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4.1",
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "What is in this image?"
},
{
"type": "image_url",
"image_url": {
"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
}
}
]
}
],
"max_tokens": 300
}'
python: >
from openai import OpenAI
client = OpenAI()
response = client.chat.completions.create(
model="gpt-4.1",
messages=[
{
"role": "user",
"content": [
{"type": "text", "text": "What's in this image?"},
{
"type": "image_url",
"image_url": {
"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
}
},
],
}
],
max_tokens=300,
)
print(response.choices[0])
node.js: >
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const response = await openai.chat.completions.create({
model: "gpt-4.1",
messages: [
{
role: "user",
content: [
{ type: "text", text: "What's in this image?" },
{
type: "image_url",
image_url: {
"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
},
}
],
},
],
});
console.log(response.choices[0]);
}
main();
csharp: >
using System;
using System.Collections.Generic;
using OpenAI.Chat;
ChatClient client = new(
model: "gpt-4.1",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
List<ChatMessage> messages =
[
new UserChatMessage(
[
ChatMessageContentPart.CreateTextPart("What's in this image?"),
ChatMessageContentPart.CreateImagePart(new Uri("https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"))
])
];
ChatCompletion completion = client.CompleteChat(messages);
Console.WriteLine(completion.Content[0].Text);
response: >
{
"id": "chatcmpl-B9MHDbslfkBeAs8l4bebGdFOJ6PeG",
"object": "chat.completion",
"created": 1741570283,
"model": "gpt-4.1-2025-04-14",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "The image shows a wooden boardwalk path running through a lush green field or meadow. The sky is bright blue with some scattered clouds, giving the scene a serene and peaceful atmosphere. Trees and shrubs are visible in the background.",
"refusal": null,
"annotations": []
},
"logprobs": null,
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 1117,
"completion_tokens": 46,
"total_tokens": 1163,
"prompt_tokens_details": {
"cached_tokens": 0,
"audio_tokens": 0
},
"completion_tokens_details": {
"reasoning_tokens": 0,
"audio_tokens": 0,
"accepted_prediction_tokens": 0,
"rejected_prediction_tokens": 0
}
},
"service_tier": "default"
}
- title: Streaming
request:
curl: |
curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "VAR_chat_model_id",
"messages": [
{
"role": "developer",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
],
"stream": true
}'
python: >
from openai import OpenAI
client = OpenAI()
completion = client.chat.completions.create(
model="VAR_chat_model_id",
messages=[
{"role": "developer", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"}
],
stream=True
)
for chunk in completion:
print(chunk.choices[0].delta)
node.js: >
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const completion = await openai.chat.completions.create({
model: "VAR_chat_model_id",
messages: [
{"role": "developer", "content": "You are a helpful assistant."},
{"role": "user", "content": "Hello!"}
],
stream: true,
});
for await (const chunk of completion) {
console.log(chunk.choices[0].delta.content);
}
}
main();
csharp: >
using System;
using System.ClientModel;
using System.Collections.Generic;
using System.Threading.Tasks;
using OpenAI.Chat;
ChatClient client = new(
model: "gpt-4.1",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
List<ChatMessage> messages =
[
new SystemChatMessage("You are a helpful assistant."),
new UserChatMessage("Hello!")
];
AsyncCollectionResult<StreamingChatCompletionUpdate>
completionUpdates = client.CompleteChatStreamingAsync(messages);
await foreach (StreamingChatCompletionUpdate completionUpdate in
completionUpdates)
{
if (completionUpdate.ContentUpdate.Count > 0)
{
Console.Write(completionUpdate.ContentUpdate[0].Text);
}
}
response: |
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-4o-mini", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-4o-mini", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"content":"Hello"},"logprobs":null,"finish_reason":null}]}
....
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-4o-mini", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
- title: Functions
request:
curl: >
curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4.1",
"messages": [
{
"role": "user",
"content": "What is the weather like in Boston today?"
}
],
"tools": [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location"]
}
}
}
],
"tool_choice": "auto"
}'
python: >
from openai import OpenAI
client = OpenAI()
tools = [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
},
"required": ["location"],
},
}
}
]
messages = [{"role": "user", "content": "What's the weather like
in Boston today?"}]
completion = client.chat.completions.create(
model="VAR_chat_model_id",
messages=messages,
tools=tools,
tool_choice="auto"
)
print(completion)
node.js: >
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const messages = [{"role": "user", "content": "What's the weather like in Boston today?"}];
const tools = [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
},
"required": ["location"],
},
}
}
];
const response = await openai.chat.completions.create({
model: "gpt-4.1",
messages: messages,
tools: tools,
tool_choice: "auto",
});
console.log(response);
}
main();
csharp: >
using System;
using System.Collections.Generic;
using OpenAI.Chat;
ChatClient client = new(
model: "gpt-4.1",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
ChatTool getCurrentWeatherTool = ChatTool.CreateFunctionTool(
functionName: "get_current_weather",
functionDescription: "Get the current weather in a given location",
functionParameters: BinaryData.FromString("""
{
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": [ "celsius", "fahrenheit" ]
}
},
"required": [ "location" ]
}
""")
);
List<ChatMessage> messages =
[
new UserChatMessage("What's the weather like in Boston today?"),
];
ChatCompletionOptions options = new()
{
Tools =
{
getCurrentWeatherTool
},
ToolChoice = ChatToolChoice.CreateAutoChoice(),
};
ChatCompletion completion = client.CompleteChat(messages,
options);
response: |
{
"id": "chatcmpl-abc123",
"object": "chat.completion",
"created": 1699896916,
"model": "gpt-4o-mini",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": null,
"tool_calls": [
{
"id": "call_abc123",
"type": "function",
"function": {
"name": "get_current_weather",
"arguments": "{\n\"location\": \"Boston, MA\"\n}"
}
}
]
},
"logprobs": null,
"finish_reason": "tool_calls"
}
],
"usage": {
"prompt_tokens": 82,
"completion_tokens": 17,
"total_tokens": 99,
"completion_tokens_details": {
"reasoning_tokens": 0,
"accepted_prediction_tokens": 0,
"rejected_prediction_tokens": 0
}
}
}
- title: Logprobs
request:
curl: |
curl https://api.openai.com/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "VAR_chat_model_id",
"messages": [
{
"role": "user",
"content": "Hello!"
}
],
"logprobs": true,
"top_logprobs": 2
}'
python: |
from openai import OpenAI
client = OpenAI()
completion = client.chat.completions.create(
model="VAR_chat_model_id",
messages=[
{"role": "user", "content": "Hello!"}
],
logprobs=True,
top_logprobs=2
)
print(completion.choices[0].message)
print(completion.choices[0].logprobs)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const completion = await openai.chat.completions.create({
messages: [{ role: "user", content: "Hello!" }],
model: "VAR_chat_model_id",
logprobs: true,
top_logprobs: 2,
});
console.log(completion.choices[0]);
}
main();
csharp: >
using System;
using System.Collections.Generic;
using OpenAI.Chat;
ChatClient client = new(
model: "gpt-4.1",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
List<ChatMessage> messages =
[
new UserChatMessage("Hello!")
];
ChatCompletionOptions options = new()
{
IncludeLogProbabilities = true,
TopLogProbabilityCount = 2
};
ChatCompletion completion = client.CompleteChat(messages,
options);
Console.WriteLine(completion.Content[0].Text);
response: |
{
"id": "chatcmpl-123",
"object": "chat.completion",
"created": 1702685778,
"model": "gpt-4o-mini",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! How can I assist you today?"
},
"logprobs": {
"content": [
{
"token": "Hello",
"logprob": -0.31725305,
"bytes": [72, 101, 108, 108, 111],
"top_logprobs": [
{
"token": "Hello",
"logprob": -0.31725305,
"bytes": [72, 101, 108, 108, 111]
},
{
"token": "Hi",
"logprob": -1.3190403,
"bytes": [72, 105]
}
]
},
{
"token": "!",
"logprob": -0.02380986,
"bytes": [
33
],
"top_logprobs": [
{
"token": "!",
"logprob": -0.02380986,
"bytes": [33]
},
{
"token": " there",
"logprob": -3.787621,
"bytes": [32, 116, 104, 101, 114, 101]
}
]
},
{
"token": " How",
"logprob": -0.000054669687,
"bytes": [32, 72, 111, 119],
"top_logprobs": [
{
"token": " How",
"logprob": -0.000054669687,
"bytes": [32, 72, 111, 119]
},
{
"token": "<|end|>",
"logprob": -10.953937,
"bytes": null
}
]
},
{
"token": " can",
"logprob": -0.015801601,
"bytes": [32, 99, 97, 110],
"top_logprobs": [
{
"token": " can",
"logprob": -0.015801601,
"bytes": [32, 99, 97, 110]
},
{
"token": " may",
"logprob": -4.161023,
"bytes": [32, 109, 97, 121]
}
]
},
{
"token": " I",
"logprob": -3.7697225e-6,
"bytes": [
32,
73
],
"top_logprobs": [
{
"token": " I",
"logprob": -3.7697225e-6,
"bytes": [32, 73]
},
{
"token": " assist",
"logprob": -13.596657,
"bytes": [32, 97, 115, 115, 105, 115, 116]
}
]
},
{
"token": " assist",
"logprob": -0.04571125,
"bytes": [32, 97, 115, 115, 105, 115, 116],
"top_logprobs": [
{
"token": " assist",
"logprob": -0.04571125,
"bytes": [32, 97, 115, 115, 105, 115, 116]
},
{
"token": " help",
"logprob": -3.1089056,
"bytes": [32, 104, 101, 108, 112]
}
]
},
{
"token": " you",
"logprob": -5.4385737e-6,
"bytes": [32, 121, 111, 117],
"top_logprobs": [
{
"token": " you",
"logprob": -5.4385737e-6,
"bytes": [32, 121, 111, 117]
},
{
"token": " today",
"logprob": -12.807695,
"bytes": [32, 116, 111, 100, 97, 121]
}
]
},
{
"token": " today",
"logprob": -0.0040071653,
"bytes": [32, 116, 111, 100, 97, 121],
"top_logprobs": [
{
"token": " today",
"logprob": -0.0040071653,
"bytes": [32, 116, 111, 100, 97, 121]
},
{
"token": "?",
"logprob": -5.5247097,
"bytes": [63]
}
]
},
{
"token": "?",
"logprob": -0.0008108172,
"bytes": [63],
"top_logprobs": [
{
"token": "?",
"logprob": -0.0008108172,
"bytes": [63]
},
{
"token": "?\n",
"logprob": -7.184561,
"bytes": [63, 10]
}
]
}
]
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 9,
"completion_tokens": 9,
"total_tokens": 18,
"completion_tokens_details": {
"reasoning_tokens": 0,
"accepted_prediction_tokens": 0,
"rejected_prediction_tokens": 0
}
},
"system_fingerprint": null
}
/chat/completions/{completion_id}:
get:
operationId: getChatCompletion
tags:
- Chat
summary: >
Get a stored chat completion. Only Chat Completions that have been
created
with the `store` parameter set to `true` will be returned.
parameters:
- in: path
name: completion_id
required: true
schema:
type: string
description: The ID of the chat completion to retrieve.
responses:
"200":
description: A chat completion
content:
application/json:
schema:
$ref: "#/components/schemas/CreateChatCompletionResponse"
x-oaiMeta:
name: Get chat completion
group: chat
returns: The [ChatCompletion](/docs/api-reference/chat/object) object matching
the specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/chat/completions/chatcmpl-abc123 \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json"
python: >
from openai import OpenAI
client = OpenAI()
completions = client.chat.completions.list()
first_id = completions.data[0].id
first_completion = client.chat.completions.retrieve(completion_id=first_id)
print(first_completion)
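node.js: |
// Sketch using the openai-node SDK; assumes a version that exposes
// chat.completions.retrieve() for stored completions.
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const completion = await openai.chat.completions.retrieve("chatcmpl-abc123");
console.log(completion);
}
main();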
response: >
{
"object": "chat.completion",
"id": "chatcmpl-abc123",
"model": "gpt-4o-2024-08-06",
"created": 1738960610,
"request_id": "req_ded8ab984ec4bf840f37566c1011c417",
"tool_choice": null,
"usage": {
"total_tokens": 31,
"completion_tokens": 18,
"prompt_tokens": 13
},
"seed": 4944116822809979520,
"top_p": 1.0,
"temperature": 1.0,
"presence_penalty": 0.0,
"frequency_penalty": 0.0,
"system_fingerprint": "fp_50cad350e4",
"input_user": null,
"service_tier": "default",
"tools": null,
"metadata": {},
"choices": [
{
"index": 0,
"message": {
"content": "Mind of circuits hum, \nLearning patterns in silence— \nFuture's quiet spark.",
"role": "assistant",
"tool_calls": null,
"function_call": null
},
"finish_reason": "stop",
"logprobs": null
}
],
"response_format": null
}
post:
operationId: updateChatCompletion
tags:
- Chat
summary: >
Modify a stored chat completion. Only Chat Completions that have been
created with the `store` parameter set to `true` can be modified.
Currently,
the only supported modification is to update the `metadata` field.
parameters:
- in: path
name: completion_id
required: true
schema:
type: string
description: The ID of the chat completion to update.
requestBody:
required: true
content:
application/json:
schema:
type: object
required:
- metadata
properties:
metadata:
$ref: "#/components/schemas/Metadata"
responses:
"200":
description: A chat completion
content:
application/json:
schema:
$ref: "#/components/schemas/CreateChatCompletionResponse"
x-oaiMeta:
name: Update chat completion
group: chat
returns: The [ChatCompletion](/docs/api-reference/chat/object) object matching
the specified ID.
examples:
request:
curl: >
curl -X POST
https://api.openai.com/v1/chat/completions/chat_abc123 \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{"metadata": {"foo": "bar"}}'
python: >
from openai import OpenAI
client = OpenAI()
completions = client.chat.completions.list()
first_id = completions.data[0].id
updated_completion = client.chat.completions.update(
completion_id=first_id,
metadata={"foo": "bar"}
)
print(updated_completion)
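node.js: |
// Sketch using the openai-node SDK; assumes a version that exposes
// chat.completions.update() for stored completions.
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const updated = await openai.chat.completions.update("chatcmpl-abc123", {
metadata: { foo: "bar" },
});
console.log(updated);
}
main();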
response: >
{
"object": "chat.completion",
"id": "chatcmpl-AyPNinnUqUDYo9SAdA52NobMflmj2",
"model": "gpt-4o-2024-08-06",
"created": 1738960610,
"request_id": "req_ded8ab984ec4bf840f37566c1011c417",
"tool_choice": null,
"usage": {
"total_tokens": 31,
"completion_tokens": 18,
"prompt_tokens": 13
},
"seed": 4944116822809979520,
"top_p": 1.0,
"temperature": 1.0,
"presence_penalty": 0.0,
"frequency_penalty": 0.0,
"system_fingerprint": "fp_50cad350e4",
"input_user": null,
"service_tier": "default",
"tools": null,
"metadata": {
"foo": "bar"
},
"choices": [
{
"index": 0,
"message": {
"content": "Mind of circuits hum, \nLearning patterns in silence— \nFuture's quiet spark.",
"role": "assistant",
"tool_calls": null,
"function_call": null
},
"finish_reason": "stop",
"logprobs": null
}
],
"response_format": null
}
delete:
operationId: deleteChatCompletion
tags:
- Chat
summary: |
Delete a stored chat completion. Only Chat Completions that have been
created with the `store` parameter set to `true` can be deleted.
parameters:
- in: path
name: completion_id
required: true
schema:
type: string
description: The ID of the chat completion to delete.
responses:
"200":
description: The chat completion was deleted successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ChatCompletionDeleted"
x-oaiMeta:
name: Delete chat completion
group: chat
returns: A deletion confirmation object.
examples:
request:
            curl: |
              curl -X DELETE https://api.openai.com/v1/chat/completions/chat_abc123 \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -H "Content-Type: application/json"
            python: |
              from openai import OpenAI
              client = OpenAI()

              completions = client.chat.completions.list()
              first_id = completions[0].id
              delete_response = client.chat.completions.delete(completion_id=first_id)
              print(delete_response)
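            # Node sketch for parity with the examples above; `del` is the delete
            # method name used by the openai-node SDK (assumed — verify against
            # your SDK version).
            node.js: |-
              import OpenAI from "openai";
              const openai = new OpenAI();

              async function main() {
                const deleted = await openai.chat.completions.del("chat_abc123");
                console.log(deleted);
              }
              main();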
response: |
{
"object": "chat.completion.deleted",
"id": "chatcmpl-AyPNinnUqUDYo9SAdA52NobMflmj2",
"deleted": true
}
/chat/completions/{completion_id}/messages:
get:
operationId: getChatCompletionMessages
tags:
- Chat
summary: |
Get the messages in a stored chat completion. Only Chat Completions that
have been created with the `store` parameter set to `true` will be
returned.
parameters:
- in: path
name: completion_id
required: true
schema:
type: string
description: The ID of the chat completion to retrieve messages from.
- name: after
in: query
description: Identifier for the last message from the previous pagination request.
required: false
schema:
type: string
- name: limit
in: query
description: Number of messages to retrieve.
required: false
schema:
type: integer
default: 20
- name: order
in: query
description: Sort order for messages by timestamp. Use `asc` for ascending order
or `desc` for descending order. Defaults to `asc`.
required: false
schema:
type: string
enum:
- asc
- desc
default: asc
responses:
"200":
description: A list of messages
content:
application/json:
schema:
$ref: "#/components/schemas/ChatCompletionMessageList"
x-oaiMeta:
name: Get chat messages
group: chat
returns: A list of [messages](/docs/api-reference/chat/message-list) for the
specified chat completion.
examples:
request:
            curl: |
              curl https://api.openai.com/v1/chat/completions/chat_abc123/messages \
                -H "Authorization: Bearer $OPENAI_API_KEY" \
                -H "Content-Type: application/json"
            python: |
              from openai import OpenAI
              client = OpenAI()

              completions = client.chat.completions.list()
              first_id = completions[0].id
              first_completion = client.chat.completions.retrieve(completion_id=first_id)
              messages = client.chat.completions.messages.list(completion_id=first_id)
              print(messages)
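            # Node sketch for parity with the examples above; assumes the
            # openai-node SDK's chat.completions.messages.list surface.
            node.js: |-
              import OpenAI from "openai";
              const openai = new OpenAI();

              async function main() {
                const messages = await openai.chat.completions.messages.list("chat_abc123");
                console.log(messages);
              }
              main();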
response: |
{
"object": "list",
"data": [
{
"id": "chatcmpl-AyPNinnUqUDYo9SAdA52NobMflmj2-0",
"role": "user",
"content": "write a haiku about ai",
"name": null,
"content_parts": null
}
],
"first_id": "chatcmpl-AyPNinnUqUDYo9SAdA52NobMflmj2-0",
"last_id": "chatcmpl-AyPNinnUqUDYo9SAdA52NobMflmj2-0",
"has_more": false
}
/completions:
post:
operationId: createCompletion
tags:
- Completions
summary: Creates a completion for the provided prompt and parameters.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CreateCompletionRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/CreateCompletionResponse"
x-oaiMeta:
name: Create completion
group: completions
returns: >
Returns a [completion](/docs/api-reference/completions/object) object,
or a sequence of completion objects if the request is streamed.
legacy: true
examples:
- title: No streaming
request:
curl: |
curl https://api.openai.com/v1/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "VAR_completion_model_id",
"prompt": "Say this is a test",
"max_tokens": 7,
"temperature": 0
}'
python: |
from openai import OpenAI
client = OpenAI()
client.completions.create(
model="VAR_completion_model_id",
prompt="Say this is a test",
max_tokens=7,
temperature=0
)
node.js: |-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const completion = await openai.completions.create({
model: "VAR_completion_model_id",
prompt: "Say this is a test.",
max_tokens: 7,
temperature: 0,
});
console.log(completion);
}
main();
response: |
{
"id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
"object": "text_completion",
"created": 1589478378,
"model": "VAR_completion_model_id",
"system_fingerprint": "fp_44709d6fcb",
"choices": [
{
"text": "\n\nThis is indeed a test",
"index": 0,
"logprobs": null,
"finish_reason": "length"
}
],
"usage": {
"prompt_tokens": 5,
"completion_tokens": 7,
"total_tokens": 12
}
}
- title: Streaming
request:
curl: |
curl https://api.openai.com/v1/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "VAR_completion_model_id",
"prompt": "Say this is a test",
"max_tokens": 7,
"temperature": 0,
"stream": true
}'
python: |
from openai import OpenAI
client = OpenAI()
for chunk in client.completions.create(
model="VAR_completion_model_id",
prompt="Say this is a test",
max_tokens=7,
temperature=0,
stream=True
):
print(chunk.choices[0].text)
node.js: |-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const stream = await openai.completions.create({
model: "VAR_completion_model_id",
prompt: "Say this is a test.",
stream: true,
});
for await (const chunk of stream) {
console.log(chunk.choices[0].text)
}
}
main();
response: |
{
"id": "cmpl-7iA7iJjj8V2zOkCGvWF2hAkDWBQZe",
"object": "text_completion",
"created": 1690759702,
"choices": [
{
"text": "This",
"index": 0,
"logprobs": null,
"finish_reason": null
}
],
"model": "gpt-3.5-turbo-instruct"
"system_fingerprint": "fp_44709d6fcb",
}
/embeddings:
post:
operationId: createEmbedding
tags:
- Embeddings
summary: Creates an embedding vector representing the input text.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CreateEmbeddingRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/CreateEmbeddingResponse"
x-oaiMeta:
name: Create embeddings
group: embeddings
returns: A list of [embedding](/docs/api-reference/embeddings/object) objects.
examples:
request:
curl: |
curl https://api.openai.com/v1/embeddings \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"input": "The food was delicious and the waiter...",
"model": "text-embedding-ada-002",
"encoding_format": "float"
}'
python: |
from openai import OpenAI
client = OpenAI()
client.embeddings.create(
model="text-embedding-ada-002",
input="The food was delicious and the waiter...",
encoding_format="float"
)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const embedding = await openai.embeddings.create({
model: "text-embedding-ada-002",
input: "The quick brown fox jumped over the lazy dog",
encoding_format: "float",
});
console.log(embedding);
}
main();
          csharp: |
            using System;
            using OpenAI.Embeddings;

            EmbeddingClient client = new(
                model: "text-embedding-3-small",
                apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
            );

            OpenAIEmbedding embedding = client.GenerateEmbedding(input: "The quick brown fox jumped over the lazy dog");
            ReadOnlyMemory<float> vector = embedding.ToFloats();

            for (int i = 0; i < vector.Length; i++)
            {
                Console.WriteLine($" [{i,4}] = {vector.Span[i]}");
            }
response: |
{
"object": "list",
"data": [
{
"object": "embedding",
"embedding": [
0.0023064255,
-0.009327292,
.... (1536 floats total for ada-002)
-0.0028842222,
],
"index": 0
}
],
"model": "text-embedding-ada-002",
"usage": {
"prompt_tokens": 8,
"total_tokens": 8
}
}
/evals:
get:
operationId: listEvals
tags:
- Evals
summary: |
List evaluations for a project.
parameters:
- name: after
in: query
description: Identifier for the last eval from the previous pagination request.
required: false
schema:
type: string
- name: limit
in: query
description: Number of evals to retrieve.
required: false
schema:
type: integer
default: 20
- name: order
in: query
description: Sort order for evals by timestamp. Use `asc` for ascending order or
`desc` for descending order.
required: false
schema:
type: string
enum:
- asc
- desc
default: asc
- name: order_by
in: query
description: >
Evals can be ordered by creation time or last updated time. Use
`created_at` for creation time or `updated_at` for last updated
time.
required: false
schema:
type: string
enum:
- created_at
- updated_at
default: created_at
responses:
"200":
description: A list of evals
content:
application/json:
schema:
$ref: "#/components/schemas/EvalList"
x-oaiMeta:
name: List evals
group: evals
returns: A list of [evals](/docs/api-reference/evals/object) matching the
specified filters.
path: list
examples:
request:
curl: |
curl https://api.openai.com/v1/evals?limit=1 \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json"
response: >
{
"object": "list",
"data": [
{
"id": "eval_67abd54d9b0081909a86353f6fb9317a",
"object": "eval",
"data_source_config": {
"type": "stored_completions",
"metadata": {
"usecase": "push_notifications_summarizer"
},
"schema": {
"type": "object",
"properties": {
"item": {
"type": "object"
},
"sample": {
"type": "object"
}
},
"required": [
"item",
"sample"
]
}
},
"testing_criteria": [
{
"name": "Push Notification Summary Grader",
"id": "Push Notification Summary Grader-9b876f24-4762-4be9-aff4-db7a9b31c673",
"type": "label_model",
"model": "o3-mini",
"input": [
{
"type": "message",
"role": "developer",
"content": {
"type": "input_text",
"text": "\nLabel the following push notification summary as either correct or incorrect.\nThe push notification and the summary will be provided below.\nA good push notificiation summary is concise and snappy.\nIf it is good, then label it as correct, if not, then incorrect.\n"
}
},
{
"type": "message",
"role": "user",
"content": {
"type": "input_text",
"text": "\nPush notifications: {{item.input}}\nSummary: {{sample.output_text}}\n"
}
}
],
"passing_labels": [
"correct"
],
"labels": [
"correct",
"incorrect"
],
"sampling_params": null
}
],
"name": "Push Notification Summary Grader",
"created_at": 1739314509,
"metadata": {
"description": "A stored completions eval for push notification summaries"
}
}
],
"first_id": "eval_67abd54d9b0081909a86353f6fb9317a",
"last_id": "eval_67aa884cf6688190b58f657d4441c8b7",
"has_more": true
}
post:
operationId: createEval
tags:
- Evals
summary: >
Create the structure of an evaluation that can be used to test a model's
performance.
An evaluation is a set of testing criteria and a datasource. After
creating an evaluation, you can run it on different models and model
parameters. We support several types of graders and datasources.
For more information, see the [Evals guide](/docs/guides/evals).
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CreateEvalRequest"
responses:
"201":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/Eval"
x-oaiMeta:
name: Create eval
group: evals
returns: The created [Eval](/docs/api-reference/evals/object) object.
path: post
examples:
request:
curl: >
curl https://api.openai.com/v1/evals \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "Sentiment",
"data_source_config": {
"type": "stored_completions",
"metadata": {
"usecase": "chatbot"
}
},
"testing_criteria": [
{
"type": "label_model",
"model": "o3-mini",
"input": [
{
"role": "developer",
"content": "Classify the sentiment of the following statement as one of 'positive', 'neutral', or 'negative'"
},
{
"role": "user",
"content": "Statement: {{item.input}}"
}
],
"passing_labels": [
"positive"
],
"labels": [
"positive",
"neutral",
"negative"
],
"name": "Example label grader"
}
]
}'
response: >
{
"object": "eval",
"id": "eval_67b7fa9a81a88190ab4aa417e397ea21",
"data_source_config": {
"type": "stored_completions",
"metadata": {
"usecase": "chatbot"
},
"schema": {
"type": "object",
"properties": {
"item": {
"type": "object"
},
"sample": {
"type": "object"
}
},
"required": [
"item",
"sample"
]
            }
          },
"testing_criteria": [
{
"name": "Example label grader",
"type": "label_model",
"model": "o3-mini",
"input": [
{
"type": "message",
"role": "developer",
"content": {
"type": "input_text",
"text": "Classify the sentiment of the following statement as one of positive, neutral, or negative"
}
},
{
"type": "message",
"role": "user",
"content": {
"type": "input_text",
"text": "Statement: {{item.input}}"
}
}
],
"passing_labels": [
"positive"
],
"labels": [
"positive",
"neutral",
"negative"
]
}
],
"name": "Sentiment",
"created_at": 1740110490,
"metadata": {
"description": "An eval for sentiment analysis"
}
}
/evals/{eval_id}:
get:
operationId: getEval
tags:
- Evals
summary: |
Get an evaluation by ID.
parameters:
- name: eval_id
in: path
required: true
schema:
type: string
description: The ID of the evaluation to retrieve.
responses:
"200":
description: The evaluation
content:
application/json:
schema:
$ref: "#/components/schemas/Eval"
x-oaiMeta:
name: Get an eval
group: evals
returns: The [Eval](/docs/api-reference/evals/object) object matching the
specified ID.
path: get
examples:
request:
curl: |
curl https://api.openai.com/v1/evals/eval_67abd54d9b0081909a86353f6fb9317a \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "eval",
"id": "eval_67abd54d9b0081909a86353f6fb9317a",
"data_source_config": {
"type": "custom",
"schema": {
"type": "object",
"properties": {
"item": {
"type": "object",
"properties": {
"input": {
"type": "string"
},
"ground_truth": {
"type": "string"
}
},
"required": [
"input",
"ground_truth"
]
}
},
"required": [
"item"
]
}
},
"testing_criteria": [
{
"name": "String check",
"id": "String check-2eaf2d8d-d649-4335-8148-9535a7ca73c2",
"type": "string_check",
"input": "{{item.input}}",
"reference": "{{item.ground_truth}}",
"operation": "eq"
}
],
"name": "External Data Eval",
"created_at": 1739314509,
"metadata": {},
}
post:
operationId: updateEval
tags:
- Evals
summary: |
Update certain properties of an evaluation.
parameters:
- name: eval_id
in: path
required: true
schema:
type: string
description: The ID of the evaluation to update.
requestBody:
description: Request to update an evaluation
required: true
content:
application/json:
schema:
type: object
properties:
name:
type: string
description: Rename the evaluation.
metadata:
$ref: "#/components/schemas/Metadata"
responses:
"200":
description: The updated evaluation
content:
application/json:
schema:
$ref: "#/components/schemas/Eval"
x-oaiMeta:
name: Update an eval
group: evals
returns: The [Eval](/docs/api-reference/evals/object) object matching the
updated version.
path: update
examples:
request:
curl: |
curl https://api.openai.com/v1/evals/eval_67abd54d9b0081909a86353f6fb9317a \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{"name": "Updated Eval", "metadata": {"description": "Updated description"}}'
response: |
{
"object": "eval",
"id": "eval_67abd54d9b0081909a86353f6fb9317a",
"data_source_config": {
"type": "custom",
"schema": {
"type": "object",
"properties": {
"item": {
"type": "object",
"properties": {
"input": {
"type": "string"
},
"ground_truth": {
"type": "string"
}
},
"required": [
"input",
"ground_truth"
]
}
},
"required": [
"item"
]
}
},
"testing_criteria": [
{
"name": "String check",
"id": "String check-2eaf2d8d-d649-4335-8148-9535a7ca73c2",
"type": "string_check",
"input": "{{item.input}}",
"reference": "{{item.ground_truth}}",
"operation": "eq"
}
],
"name": "Updated Eval",
"created_at": 1739314509,
"metadata": {"description": "Updated description"},
}
delete:
operationId: deleteEval
tags:
- Evals
summary: |
Delete an evaluation.
parameters:
- name: eval_id
in: path
required: true
schema:
type: string
description: The ID of the evaluation to delete.
responses:
"200":
description: Successfully deleted the evaluation.
content:
application/json:
schema:
type: object
properties:
object:
type: string
example: eval.deleted
deleted:
type: boolean
example: true
eval_id:
type: string
example: eval_abc123
required:
- object
- deleted
- eval_id
"404":
description: Evaluation not found.
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
x-oaiMeta:
name: Delete an eval
group: evals
returns: A deletion confirmation object.
examples:
request:
curl: |
curl https://api.openai.com/v1/evals/eval_abc123 \
-X DELETE \
-H "Authorization: Bearer $OPENAI_API_KEY"
response: |
{
"object": "eval.deleted",
"deleted": true,
"eval_id": "eval_abc123"
}
/evals/{eval_id}/runs:
get:
operationId: getEvalRuns
tags:
- Evals
summary: |
Get a list of runs for an evaluation.
parameters:
- name: eval_id
in: path
required: true
schema:
type: string
description: The ID of the evaluation to retrieve runs for.
- name: after
in: query
description: Identifier for the last run from the previous pagination request.
required: false
schema:
type: string
- name: limit
in: query
description: Number of runs to retrieve.
required: false
schema:
type: integer
default: 20
- name: order
in: query
description: Sort order for runs by timestamp. Use `asc` for ascending order or
`desc` for descending order. Defaults to `asc`.
required: false
schema:
type: string
enum:
- asc
- desc
default: asc
- name: status
in: query
description: Filter runs by status. One of `queued` | `in_progress` | `failed` |
`completed` | `canceled`.
required: false
schema:
type: string
enum:
- queued
- in_progress
- completed
- canceled
- failed
responses:
"200":
description: A list of runs for the evaluation
content:
application/json:
schema:
$ref: "#/components/schemas/EvalRunList"
x-oaiMeta:
name: Get eval runs
group: evals
returns: A list of [EvalRun](/docs/api-reference/evals/run-object) objects
matching the specified ID.
path: get-runs
examples:
request:
curl: |
            curl https://api.openai.com/v1/evals/eval_67abd54d9b0081909a86353f6fb9317a/runs \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json"
response: >
{
"object": "list",
"data": [
{
"object": "eval.run",
"id": "evalrun_67e0c7d31560819090d60c0780591042",
"eval_id": "eval_67e0c726d560819083f19a957c4c640b",
"report_url": "https://platform.openai.com/evaluations/eval_67e0c726d560819083f19a957c4c640b",
"status": "completed",
"model": "o3-mini",
"name": "bulk_with_negative_examples_o3-mini",
"created_at": 1742784467,
"result_counts": {
"total": 1,
"errored": 0,
"failed": 0,
"passed": 1
},
"per_model_usage": [
{
"model_name": "o3-mini",
"invocation_count": 1,
"prompt_tokens": 563,
"completion_tokens": 874,
"total_tokens": 1437,
"cached_tokens": 0
}
],
"per_testing_criteria_results": [
{
"testing_criteria": "Push Notification Summary Grader-1808cd0b-eeec-4e0b-a519-337e79f4f5d1",
"passed": 1,
"failed": 0
}
],
"data_source": {
"type": "completions",
"source": {
"type": "file_content",
"content": [
{
"item": {
"notifications": "\n- New message from Sarah: \"Can you call me later?\"\n- Your package has been delivered!\n- Flash sale: 20% off electronics for the next 2 hours!\n"
}
}
]
},
"input_messages": {
"type": "template",
"template": [
{
"type": "message",
"role": "developer",
"content": {
"type": "input_text",
"text": "\n\n\n\nYou are a helpful assistant that takes in an array of push notifications and returns a collapsed summary of them.\nThe push notification will be provided as follows:\n<push_notifications>\n...notificationlist...\n</push_notifications>\n\nYou should return just the summary and nothing else.\n\n\nYou should return a summary that is concise and snappy.\n\n\nHere is an example of a good summary:\n<push_notifications>\n- Traffic alert: Accident reported on Main Street.- Package out for delivery: Expected by 5 PM.- New friend suggestion: Connect with Emma.\n</push_notifications>\n<summary>\nTraffic alert, package expected by 5pm, suggestion for new friend (Emily).\n</summary>\n\n\nHere is an example of a bad summary:\n<push_notifications>\n- Traffic alert: Accident reported on Main Street.- Package out for delivery: Expected by 5 PM.- New friend suggestion: Connect with Emma.\n</push_notifications>\n<summary>\nTraffic alert reported on main street. You have a package that will arrive by 5pm, Emily is a new friend suggested for you.\n</summary>\n"
}
},
{
"type": "message",
"role": "user",
"content": {
"type": "input_text",
"text": "<push_notifications>{{item.notifications}}</push_notifications>"
}
}
]
},
"model": "o3-mini",
"sampling_params": null
},
"error": null,
"metadata": {}
}
],
"first_id": "evalrun_67e0c7d31560819090d60c0780591042",
"last_id": "evalrun_67e0c7d31560819090d60c0780591042",
"has_more": true
}
post:
operationId: createEvalRun
tags:
- Evals
summary: >
Create a new evaluation run. This is the endpoint that will kick off
grading.
parameters:
- in: path
name: eval_id
required: true
schema:
type: string
description: The ID of the evaluation to create a run for.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CreateEvalRunRequest"
responses:
"201":
description: Successfully created a run for the evaluation
content:
application/json:
schema:
$ref: "#/components/schemas/EvalRun"
"400":
description: Bad request (for example, missing eval object)
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
x-oaiMeta:
name: Create eval run
group: evals
returns: The [EvalRun](/docs/api-reference/evals/run-object) object matching the
specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/evals/eval_67e579652b548190aaa83ada4b125f47/runs \
-X POST \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{"name":"gpt-4o-mini","data_source":{"type":"completions","input_messages":{"type":"template","template":[{"role":"developer","content":"Categorize a given news headline into one of the following topics: Technology, Markets, World, Business, or Sports.\n\n# Steps\n\n1. Analyze the content of the news headline to understand its primary focus.\n2. Extract the subject matter, identifying any key indicators or keywords.\n3. Use the identified indicators to determine the most suitable category out of the five options: Technology, Markets, World, Business, or Sports.\n4. Ensure only one category is selected per headline.\n\n# Output Format\n\nRespond with the chosen category as a single word. For instance: \"Technology\", \"Markets\", \"World\", \"Business\", or \"Sports\".\n\n# Examples\n\n**Input**: \"Apple Unveils New iPhone Model, Featuring Advanced AI Features\" \n**Output**: \"Technology\"\n\n**Input**: \"Global Stocks Mixed as Investors Await Central Bank Decisions\" \n**Output**: \"Markets\"\n\n**Input**: \"War in Ukraine: Latest Updates on Negotiation Status\" \n**Output**: \"World\"\n\n**Input**: \"Microsoft in Talks to Acquire Gaming Company for $2 Billion\" \n**Output**: \"Business\"\n\n**Input**: \"Manchester United Secures Win in Premier League Football Match\" \n**Output**: \"Sports\" \n\n# Notes\n\n- If the headline appears to fit into more than one category, choose the most dominant theme.\n- Keywords or phrases such as \"stocks\", \"company acquisition\", \"match\", or technological brands can be good indicators for classification.\n"} , {"role":"user","content":"{{item.input}}"}]},"sampling_params":{"temperature":1,"max_completions_tokens":2048,"top_p":1,"seed":42},"model":"gpt-4o-mini","source":{"type":"file_content","content":[{"item":{"input":"Tech Company Launches Advanced Artificial Intelligence Platform","ground_truth":"Technology"}}]}}'
response: >
{
"object": "eval.run",
"id": "evalrun_67e57965b480819094274e3a32235e4c",
"eval_id": "eval_67e579652b548190aaa83ada4b125f47",
"report_url": "https://platform.openai.com/evaluations/eval_67e579652b548190aaa83ada4b125f47&run_id=evalrun_67e57965b480819094274e3a32235e4c",
"status": "queued",
"model": "gpt-4o-mini",
"name": "gpt-4o-mini",
"created_at": 1743092069,
"result_counts": {
"total": 0,
"errored": 0,
"failed": 0,
"passed": 0
},
"per_model_usage": null,
"per_testing_criteria_results": null,
"data_source": {
"type": "completions",
"source": {
"type": "file_content",
"content": [
{
"item": {
"input": "Tech Company Launches Advanced Artificial Intelligence Platform",
"ground_truth": "Technology"
}
}
]
},
"input_messages": {
"type": "template",
"template": [
{
"type": "message",
"role": "developer",
"content": {
"type": "input_text",
"text": "Categorize a given news headline into one of the following topics: Technology, Markets, World, Business, or Sports.\n\n# Steps\n\n1. Analyze the content of the news headline to understand its primary focus.\n2. Extract the subject matter, identifying any key indicators or keywords.\n3. Use the identified indicators to determine the most suitable category out of the five options: Technology, Markets, World, Business, or Sports.\n4. Ensure only one category is selected per headline.\n\n# Output Format\n\nRespond with the chosen category as a single word. For instance: \"Technology\", \"Markets\", \"World\", \"Business\", or \"Sports\".\n\n# Examples\n\n**Input**: \"Apple Unveils New iPhone Model, Featuring Advanced AI Features\" \n**Output**: \"Technology\"\n\n**Input**: \"Global Stocks Mixed as Investors Await Central Bank Decisions\" \n**Output**: \"Markets\"\n\n**Input**: \"War in Ukraine: Latest Updates on Negotiation Status\" \n**Output**: \"World\"\n\n**Input**: \"Microsoft in Talks to Acquire Gaming Company for $2 Billion\" \n**Output**: \"Business\"\n\n**Input**: \"Manchester United Secures Win in Premier League Football Match\" \n**Output**: \"Sports\" \n\n# Notes\n\n- If the headline appears to fit into more than one category, choose the most dominant theme.\n- Keywords or phrases such as \"stocks\", \"company acquisition\", \"match\", or technological brands can be good indicators for classification.\n"
}
},
{
"type": "message",
"role": "user",
"content": {
"type": "input_text",
"text": "{{item.input}}"
}
}
]
},
"model": "gpt-4o-mini",
"sampling_params": {
"seed": 42,
"temperature": 1.0,
"top_p": 1.0,
"max_completions_tokens": 2048
}
},
"error": null,
"metadata": {}
}
/evals/{eval_id}/runs/{run_id}:
get:
operationId: getEvalRun
tags:
- Evals
summary: |
Get an evaluation run by ID.
parameters:
- name: eval_id
in: path
required: true
schema:
type: string
description: The ID of the evaluation to retrieve runs for.
- name: run_id
in: path
required: true
schema:
type: string
description: The ID of the run to retrieve.
responses:
"200":
description: The evaluation run
content:
application/json:
schema:
$ref: "#/components/schemas/EvalRun"
x-oaiMeta:
name: Get an eval run
group: evals
returns: The [EvalRun](/docs/api-reference/evals/run-object) object matching the
specified ID.
path: get
examples:
request:
curl: |
curl https://api.openai.com/v1/evals/eval_67abd54d9b0081909a86353f6fb9317a/runs/evalrun_67abd54d60ec8190832b46859da808f7 \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json"
response: >
{
"object": "eval.run",
"id": "evalrun_67abd54d60ec8190832b46859da808f7",
"eval_id": "eval_67abd54d9b0081909a86353f6fb9317a",
"report_url": "https://platform.openai.com/evaluations/eval_67abd54d9b0081909a86353f6fb9317a?run_id=evalrun_67abd54d60ec8190832b46859da808f7",
"status": "queued",
"model": "gpt-4o-mini",
"name": "gpt-4o-mini",
"created_at": 1743092069,
"result_counts": {
"total": 0,
"errored": 0,
"failed": 0,
"passed": 0
},
"per_model_usage": null,
"per_testing_criteria_results": null,
"data_source": {
"type": "completions",
"source": {
"type": "file_content",
"content": [
{
"item": {
"input": "Tech Company Launches Advanced Artificial Intelligence Platform",
"ground_truth": "Technology"
}
},
{
"item": {
"input": "Central Bank Increases Interest Rates Amid Inflation Concerns",
"ground_truth": "Markets"
}
},
{
"item": {
"input": "International Summit Addresses Climate Change Strategies",
"ground_truth": "World"
}
},
{
"item": {
"input": "Major Retailer Reports Record-Breaking Holiday Sales",
"ground_truth": "Business"
}
},
{
"item": {
"input": "National Team Qualifies for World Championship Finals",
"ground_truth": "Sports"
}
},
{
"item": {
"input": "Stock Markets Rally After Positive Economic Data Released",
"ground_truth": "Markets"
}
},
{
"item": {
"input": "Global Manufacturer Announces Merger with Competitor",
"ground_truth": "Business"
}
},
{
"item": {
"input": "Breakthrough in Renewable Energy Technology Unveiled",
"ground_truth": "Technology"
}
},
{
"item": {
"input": "World Leaders Sign Historic Climate Agreement",
"ground_truth": "World"
}
},
{
"item": {
"input": "Professional Athlete Sets New Record in Championship Event",
"ground_truth": "Sports"
}
},
{
"item": {
"input": "Financial Institutions Adapt to New Regulatory Requirements",
"ground_truth": "Business"
}
},
{
"item": {
"input": "Tech Conference Showcases Advances in Artificial Intelligence",
"ground_truth": "Technology"
}
},
{
"item": {
"input": "Global Markets Respond to Oil Price Fluctuations",
"ground_truth": "Markets"
}
},
{
"item": {
"input": "International Cooperation Strengthened Through New Treaty",
"ground_truth": "World"
}
},
{
"item": {
"input": "Sports League Announces Revised Schedule for Upcoming Season",
"ground_truth": "Sports"
}
}
]
},
"input_messages": {
"type": "template",
"template": [
{
"type": "message",
"role": "developer",
"content": {
"type": "input_text",
"text": "Categorize a given news headline into one of the following topics: Technology, Markets, World, Business, or Sports.\n\n# Steps\n\n1. Analyze the content of the news headline to understand its primary focus.\n2. Extract the subject matter, identifying any key indicators or keywords.\n3. Use the identified indicators to determine the most suitable category out of the five options: Technology, Markets, World, Business, or Sports.\n4. Ensure only one category is selected per headline.\n\n# Output Format\n\nRespond with the chosen category as a single word. For instance: \"Technology\", \"Markets\", \"World\", \"Business\", or \"Sports\".\n\n# Examples\n\n**Input**: \"Apple Unveils New iPhone Model, Featuring Advanced AI Features\" \n**Output**: \"Technology\"\n\n**Input**: \"Global Stocks Mixed as Investors Await Central Bank Decisions\" \n**Output**: \"Markets\"\n\n**Input**: \"War in Ukraine: Latest Updates on Negotiation Status\" \n**Output**: \"World\"\n\n**Input**: \"Microsoft in Talks to Acquire Gaming Company for $2 Billion\" \n**Output**: \"Business\"\n\n**Input**: \"Manchester United Secures Win in Premier League Football Match\" \n**Output**: \"Sports\" \n\n# Notes\n\n- If the headline appears to fit into more than one category, choose the most dominant theme.\n- Keywords or phrases such as \"stocks\", \"company acquisition\", \"match\", or technological brands can be good indicators for classification.\n"
}
},
{
"type": "message",
"role": "user",
"content": {
"type": "input_text",
"text": "{{item.input}}"
}
}
]
},
"model": "gpt-4o-mini",
"sampling_params": {
"seed": 42,
"temperature": 1.0,
"top_p": 1.0,
"max_completions_tokens": 2048
}
},
"error": null,
"metadata": {}
}
post:
operationId: cancelEvalRun
tags:
- Evals
summary: |
Cancel an ongoing evaluation run.
parameters:
- name: eval_id
in: path
required: true
schema:
type: string
description: The ID of the evaluation whose run you want to cancel.
- name: run_id
in: path
required: true
schema:
type: string
description: The ID of the run to cancel.
responses:
"200":
description: The canceled eval run object
content:
application/json:
schema:
$ref: "#/components/schemas/EvalRun"
x-oaiMeta:
name: Cancel eval run
group: evals
returns: The updated [EvalRun](/docs/api-reference/evals/run-object) object
reflecting that the run is canceled.
path: post
examples:
request:
curl: |
curl https://api.openai.com/v1/evals/eval_67abd54d9b0081909a86353f6fb9317a/runs/evalrun_67abd54d60ec8190832b46859da808f7/cancel \
-X POST \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json"
response: >
{
"object": "eval.run",
"id": "evalrun_67abd54d60ec8190832b46859da808f7",
"eval_id": "eval_67abd54d9b0081909a86353f6fb9317a",
"report_url": "https://platform.openai.com/evaluations/eval_67abd54d9b0081909a86353f6fb9317a?run_id=evalrun_67abd54d60ec8190832b46859da808f7",
"status": "canceled",
"model": "gpt-4o-mini",
"name": "gpt-4o-mini",
"created_at": 1743092069,
"result_counts": {
"total": 0,
"errored": 0,
"failed": 0,
"passed": 0
},
"per_model_usage": null,
"per_testing_criteria_results": null,
"data_source": {
"type": "completions",
"source": {
"type": "file_content",
"content": [
{
"item": {
"input": "Tech Company Launches Advanced Artificial Intelligence Platform",
"ground_truth": "Technology"
}
},
{
"item": {
"input": "Central Bank Increases Interest Rates Amid Inflation Concerns",
"ground_truth": "Markets"
}
},
{
"item": {
"input": "International Summit Addresses Climate Change Strategies",
"ground_truth": "World"
}
},
{
"item": {
"input": "Major Retailer Reports Record-Breaking Holiday Sales",
"ground_truth": "Business"
}
},
{
"item": {
"input": "National Team Qualifies for World Championship Finals",
"ground_truth": "Sports"
}
},
{
"item": {
"input": "Stock Markets Rally After Positive Economic Data Released",
"ground_truth": "Markets"
}
},
{
"item": {
"input": "Global Manufacturer Announces Merger with Competitor",
"ground_truth": "Business"
}
},
{
"item": {
"input": "Breakthrough in Renewable Energy Technology Unveiled",
"ground_truth": "Technology"
}
},
{
"item": {
"input": "World Leaders Sign Historic Climate Agreement",
"ground_truth": "World"
}
},
{
"item": {
"input": "Professional Athlete Sets New Record in Championship Event",
"ground_truth": "Sports"
}
},
{
"item": {
"input": "Financial Institutions Adapt to New Regulatory Requirements",
"ground_truth": "Business"
}
},
{
"item": {
"input": "Tech Conference Showcases Advances in Artificial Intelligence",
"ground_truth": "Technology"
}
},
{
"item": {
"input": "Global Markets Respond to Oil Price Fluctuations",
"ground_truth": "Markets"
}
},
{
"item": {
"input": "International Cooperation Strengthened Through New Treaty",
"ground_truth": "World"
}
},
{
"item": {
"input": "Sports League Announces Revised Schedule for Upcoming Season",
"ground_truth": "Sports"
}
}
]
},
"input_messages": {
"type": "template",
"template": [
{
"type": "message",
"role": "developer",
"content": {
"type": "input_text",
"text": "Categorize a given news headline into one of the following topics: Technology, Markets, World, Business, or Sports.\n\n# Steps\n\n1. Analyze the content of the news headline to understand its primary focus.\n2. Extract the subject matter, identifying any key indicators or keywords.\n3. Use the identified indicators to determine the most suitable category out of the five options: Technology, Markets, World, Business, or Sports.\n4. Ensure only one category is selected per headline.\n\n# Output Format\n\nRespond with the chosen category as a single word. For instance: \"Technology\", \"Markets\", \"World\", \"Business\", or \"Sports\".\n\n# Examples\n\n**Input**: \"Apple Unveils New iPhone Model, Featuring Advanced AI Features\" \n**Output**: \"Technology\"\n\n**Input**: \"Global Stocks Mixed as Investors Await Central Bank Decisions\" \n**Output**: \"Markets\"\n\n**Input**: \"War in Ukraine: Latest Updates on Negotiation Status\" \n**Output**: \"World\"\n\n**Input**: \"Microsoft in Talks to Acquire Gaming Company for $2 Billion\" \n**Output**: \"Business\"\n\n**Input**: \"Manchester United Secures Win in Premier League Football Match\" \n**Output**: \"Sports\" \n\n# Notes\n\n- If the headline appears to fit into more than one category, choose the most dominant theme.\n- Keywords or phrases such as \"stocks\", \"company acquisition\", \"match\", or technological brands can be good indicators for classification.\n"
}
},
{
"type": "message",
"role": "user",
"content": {
"type": "input_text",
"text": "{{item.input}}"
}
}
]
},
"model": "gpt-4o-mini",
"sampling_params": {
"seed": 42,
"temperature": 1.0,
"top_p": 1.0,
"max_completion_tokens": 2048
}
},
"error": null,
"metadata": {}
}
delete:
operationId: deleteEvalRun
tags:
- Evals
summary: |
Delete an eval run.
parameters:
- name: eval_id
in: path
required: true
schema:
type: string
description: The ID of the evaluation to delete the run from.
- name: run_id
in: path
required: true
schema:
type: string
description: The ID of the run to delete.
responses:
"200":
description: Successfully deleted the eval run
content:
application/json:
schema:
type: object
properties:
object:
type: string
example: eval.run.deleted
deleted:
type: boolean
example: true
run_id:
type: string
example: evalrun_677469f564d48190807532a852da3afb
"404":
description: Run not found
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
x-oaiMeta:
name: Delete eval run
group: evals
returns: An object containing the status of the delete operation.
path: delete
examples:
request:
curl: >
curl
https://api.openai.com/v1/evals/eval_123abc/runs/evalrun_abc456 \
-X DELETE \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "eval.run.deleted",
"deleted": true,
"run_id": "evalrun_abc456"
}
/evals/{eval_id}/runs/{run_id}/output_items:
get:
operationId: getEvalRunOutputItems
tags:
- Evals
summary: |
Get a list of output items for an evaluation run.
parameters:
- name: eval_id
in: path
required: true
schema:
type: string
description: The ID of the evaluation to retrieve output items for.
- name: run_id
in: path
required: true
schema:
type: string
description: The ID of the run to retrieve output items for.
- name: after
in: query
description: Identifier for the last output item from the previous pagination
request.
required: false
schema:
type: string
- name: limit
in: query
description: Number of output items to retrieve.
required: false
schema:
type: integer
default: 20
- name: status
in: query
description: >
Filter output items by status. Use `fail` to filter by failed output
items or `pass` to filter by passed output items.
required: false
schema:
type: string
enum:
- fail
- pass
- name: order
in: query
description: Sort order for output items by timestamp. Use `asc` for ascending
order or `desc` for descending order. Defaults to `asc`.
required: false
schema:
type: string
enum:
- asc
- desc
default: asc
responses:
"200":
description: A list of output items for the evaluation run
content:
application/json:
schema:
$ref: "#/components/schemas/EvalRunOutputItemList"
x-oaiMeta:
name: Get eval run output items
group: evals
returns: A list of
[EvalRunOutputItem](/docs/api-reference/evals/run-output-item-object)
objects for the specified eval run.
path: get
examples:
request:
curl: |
curl https://api.openai.com/v1/evals/egroup_67abd54d9b0081909a86353f6fb9317a/runs/erun_67abd54d60ec8190832b46859da808f7/output_items \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json"
response: >
{
"object": "list",
"data": [
{
"object": "eval.run.output_item",
"id": "outputitem_67e5796c28e081909917bf79f6e6214d",
"created_at": 1743092076,
"run_id": "evalrun_67abd54d60ec8190832b46859da808f7",
"eval_id": "eval_67abd54d9b0081909a86353f6fb9317a",
"status": "pass",
"datasource_item_id": 5,
"datasource_item": {
"input": "Stock Markets Rally After Positive Economic Data Released",
"ground_truth": "Markets"
},
"results": [
{
"name": "String check-a2486074-d803-4445-b431-ad2262e85d47",
"sample": null,
"passed": true,
"score": 1.0
}
],
"sample": {
"input": [
{
"role": "developer",
"content": "Categorize a given news headline into one of the following topics: Technology, Markets, World, Business, or Sports.\n\n# Steps\n\n1. Analyze the content of the news headline to understand its primary focus.\n2. Extract the subject matter, identifying any key indicators or keywords.\n3. Use the identified indicators to determine the most suitable category out of the five options: Technology, Markets, World, Business, or Sports.\n4. Ensure only one category is selected per headline.\n\n# Output Format\n\nRespond with the chosen category as a single word. For instance: \"Technology\", \"Markets\", \"World\", \"Business\", or \"Sports\".\n\n# Examples\n\n**Input**: \"Apple Unveils New iPhone Model, Featuring Advanced AI Features\" \n**Output**: \"Technology\"\n\n**Input**: \"Global Stocks Mixed as Investors Await Central Bank Decisions\" \n**Output**: \"Markets\"\n\n**Input**: \"War in Ukraine: Latest Updates on Negotiation Status\" \n**Output**: \"World\"\n\n**Input**: \"Microsoft in Talks to Acquire Gaming Company for $2 Billion\" \n**Output**: \"Business\"\n\n**Input**: \"Manchester United Secures Win in Premier League Football Match\" \n**Output**: \"Sports\" \n\n# Notes\n\n- If the headline appears to fit into more than one category, choose the most dominant theme.\n- Keywords or phrases such as \"stocks\", \"company acquisition\", \"match\", or technological brands can be good indicators for classification.\n",
"tool_call_id": null,
"tool_calls": null,
"function_call": null
},
{
"role": "user",
"content": "Stock Markets Rally After Positive Economic Data Released",
"tool_call_id": null,
"tool_calls": null,
"function_call": null
}
],
"output": [
{
"role": "assistant",
"content": "Markets",
"tool_call_id": null,
"tool_calls": null,
"function_call": null
}
],
"finish_reason": "stop",
"model": "gpt-4o-mini-2024-07-18",
"usage": {
"total_tokens": 325,
"completion_tokens": 2,
"prompt_tokens": 323,
"cached_tokens": 0
},
"error": null,
"temperature": 1.0,
"max_completion_tokens": 2048,
"top_p": 1.0,
"seed": 42
}
}
],
"first_id": "outputitem_67e5796c28e081909917bf79f6e6214d",
"last_id": "outputitem_67e5796c28e081909917bf79f6e6214d",
"has_more": true
}
/evals/{eval_id}/runs/{run_id}/output_items/{output_item_id}:
get:
operationId: getEvalRunOutputItem
tags:
- Evals
summary: |
Get an evaluation run output item by ID.
parameters:
- name: eval_id
in: path
required: true
schema:
type: string
description: The ID of the evaluation to retrieve the output item for.
- name: run_id
in: path
required: true
schema:
type: string
description: The ID of the run to retrieve the output item from.
- name: output_item_id
in: path
required: true
schema:
type: string
description: The ID of the output item to retrieve.
responses:
"200":
description: The evaluation run output item
content:
application/json:
schema:
$ref: "#/components/schemas/EvalRunOutputItem"
x-oaiMeta:
name: Get an output item of an eval run
group: evals
returns: The
[EvalRunOutputItem](/docs/api-reference/evals/run-output-item-object)
object matching the specified ID.
path: get
examples:
request:
curl: |
curl https://api.openai.com/v1/evals/eval_67abd54d9b0081909a86353f6fb9317a/runs/evalrun_67abd54d60ec8190832b46859da808f7/output_items/outputitem_67abd55eb6548190bb580745d5644a33 \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json"
response: >
{
"object": "eval.run.output_item",
"id": "outputitem_67e5796c28e081909917bf79f6e6214d",
"created_at": 1743092076,
"run_id": "evalrun_67abd54d60ec8190832b46859da808f7",
"eval_id": "eval_67abd54d9b0081909a86353f6fb9317a",
"status": "pass",
"datasource_item_id": 5,
"datasource_item": {
"input": "Stock Markets Rally After Positive Economic Data Released",
"ground_truth": "Markets"
},
"results": [
{
"name": "String check-a2486074-d803-4445-b431-ad2262e85d47",
"sample": null,
"passed": true,
"score": 1.0
}
],
"sample": {
"input": [
{
"role": "developer",
"content": "Categorize a given news headline into one of the following topics: Technology, Markets, World, Business, or Sports.\n\n# Steps\n\n1. Analyze the content of the news headline to understand its primary focus.\n2. Extract the subject matter, identifying any key indicators or keywords.\n3. Use the identified indicators to determine the most suitable category out of the five options: Technology, Markets, World, Business, or Sports.\n4. Ensure only one category is selected per headline.\n\n# Output Format\n\nRespond with the chosen category as a single word. For instance: \"Technology\", \"Markets\", \"World\", \"Business\", or \"Sports\".\n\n# Examples\n\n**Input**: \"Apple Unveils New iPhone Model, Featuring Advanced AI Features\" \n**Output**: \"Technology\"\n\n**Input**: \"Global Stocks Mixed as Investors Await Central Bank Decisions\" \n**Output**: \"Markets\"\n\n**Input**: \"War in Ukraine: Latest Updates on Negotiation Status\" \n**Output**: \"World\"\n\n**Input**: \"Microsoft in Talks to Acquire Gaming Company for $2 Billion\" \n**Output**: \"Business\"\n\n**Input**: \"Manchester United Secures Win in Premier League Football Match\" \n**Output**: \"Sports\" \n\n# Notes\n\n- If the headline appears to fit into more than one category, choose the most dominant theme.\n- Keywords or phrases such as \"stocks\", \"company acquisition\", \"match\", or technological brands can be good indicators for classification.\n",
"tool_call_id": null,
"tool_calls": null,
"function_call": null
},
{
"role": "user",
"content": "Stock Markets Rally After Positive Economic Data Released",
"tool_call_id": null,
"tool_calls": null,
"function_call": null
}
],
"output": [
{
"role": "assistant",
"content": "Markets",
"tool_call_id": null,
"tool_calls": null,
"function_call": null
}
],
"finish_reason": "stop",
"model": "gpt-4o-mini-2024-07-18",
"usage": {
"total_tokens": 325,
"completion_tokens": 2,
"prompt_tokens": 323,
"cached_tokens": 0
},
"error": null,
"temperature": 1.0,
"max_completion_tokens": 2048,
"top_p": 1.0,
"seed": 42
}
}
/files:
get:
operationId: listFiles
tags:
- Files
summary: Returns a list of files.
parameters:
- in: query
name: purpose
required: false
schema:
type: string
description: Only return files with the given purpose.
- name: limit
in: query
description: >
A limit on the number of objects to be returned. Limit can range
between 1 and 10,000, and the default is 10,000.
required: false
schema:
type: integer
default: 10000
- name: order
in: query
description: >
Sort order by the `created_at` timestamp of the objects. `asc` for
ascending order and `desc` for descending order.
schema:
type: string
default: desc
enum:
- asc
- desc
- name: after
in: query
description: >
A cursor for use in pagination. `after` is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
schema:
type: string
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ListFilesResponse"
x-oaiMeta:
name: List files
group: files
returns: A list of [File](/docs/api-reference/files/object) objects.
examples:
request:
curl: |
curl https://api.openai.com/v1/files \
-H "Authorization: Bearer $OPENAI_API_KEY"
python: |
from openai import OpenAI
client = OpenAI()
client.files.list()
node.js: |-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const list = await openai.files.list();
for await (const file of list) {
console.log(file);
}
}
main();
response: |
{
"data": [
{
"id": "file-abc123",
"object": "file",
"bytes": 175,
"created_at": 1613677385,
"filename": "salesOverview.pdf",
"purpose": "assistants"
},
{
"id": "file-abc456",
"object": "file",
"bytes": 140,
"created_at": 1613779121,
"filename": "puppy.jsonl",
"purpose": "fine-tune"
}
],
"object": "list"
}
post:
operationId: createFile
tags:
- Files
summary: >
Upload a file that can be used across various endpoints. Individual
files can be up to 512 MB, and the size of all files uploaded by one
organization can be up to 100 GB.
The Assistants API supports files up to 2 million tokens and of specific
file types. See the [Assistants Tools guide](/docs/assistants/tools) for
details.
The Fine-tuning API only supports `.jsonl` files. The input also has
certain required formats for fine-tuning
[chat](/docs/api-reference/fine-tuning/chat-input) or
[completions](/docs/api-reference/fine-tuning/completions-input) models.
The Batch API only supports `.jsonl` files up to 200 MB in size. The
input also has a specific required
[format](/docs/api-reference/batch/request-input).
Please [contact us](https://help.openai.com/) if you need to increase
these storage limits.
requestBody:
required: true
content:
multipart/form-data:
schema:
$ref: "#/components/schemas/CreateFileRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/OpenAIFile"
x-oaiMeta:
name: Upload file
group: files
returns: The uploaded [File](/docs/api-reference/files/object) object.
examples:
request:
curl: |
curl https://api.openai.com/v1/files \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-F purpose="fine-tune" \
-F file="@mydata.jsonl"
python: |
from openai import OpenAI
client = OpenAI()
client.files.create(
file=open("mydata.jsonl", "rb"),
purpose="fine-tune"
)
node.js: |-
import fs from "fs";
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const file = await openai.files.create({
file: fs.createReadStream("mydata.jsonl"),
purpose: "fine-tune",
});
console.log(file);
}
main();
response: |
{
"id": "file-abc123",
"object": "file",
"bytes": 120000,
"created_at": 1677610602,
"filename": "mydata.jsonl",
"purpose": "fine-tune"
}
/files/{file_id}:
delete:
operationId: deleteFile
tags:
- Files
summary: Delete a file.
parameters:
- in: path
name: file_id
required: true
schema:
type: string
description: The ID of the file to use for this request.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/DeleteFileResponse"
x-oaiMeta:
name: Delete file
group: files
returns: Deletion status.
examples:
request:
curl: |
curl https://api.openai.com/v1/files/file-abc123 \
-X DELETE \
-H "Authorization: Bearer $OPENAI_API_KEY"
python: |
from openai import OpenAI
client = OpenAI()
client.files.delete("file-abc123")
node.js: |-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const file = await openai.files.del("file-abc123");
console.log(file);
}
main();
response: |
{
"id": "file-abc123",
"object": "file",
"deleted": true
}
get:
operationId: retrieveFile
tags:
- Files
summary: Returns information about a specific file.
parameters:
- in: path
name: file_id
required: true
schema:
type: string
description: The ID of the file to use for this request.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/OpenAIFile"
x-oaiMeta:
name: Retrieve file
group: files
returns: The [File](/docs/api-reference/files/object) object matching the
specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/files/file-abc123 \
-H "Authorization: Bearer $OPENAI_API_KEY"
python: |
from openai import OpenAI
client = OpenAI()
client.files.retrieve("file-abc123")
node.js: |-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const file = await openai.files.retrieve("file-abc123");
console.log(file);
}
main();
response: |
{
"id": "file-abc123",
"object": "file",
"bytes": 120000,
"created_at": 1677610602,
"filename": "mydata.jsonl",
"purpose": "fine-tune"
}
/files/{file_id}/content:
get:
operationId: downloadFile
tags:
- Files
summary: Returns the contents of the specified file.
parameters:
- in: path
name: file_id
required: true
schema:
type: string
description: The ID of the file to use for this request.
responses:
"200":
description: OK
content:
application/json:
schema:
type: string
x-oaiMeta:
name: Retrieve file content
group: files
returns: The file content.
examples:
request:
curl: |
curl https://api.openai.com/v1/files/file-abc123/content \
-H "Authorization: Bearer $OPENAI_API_KEY" > file.jsonl
python: |
from openai import OpenAI
client = OpenAI()
content = client.files.content("file-abc123")
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const file = await openai.files.content("file-abc123");
console.log(file);
}
main();
/fine_tuning/checkpoints/{fine_tuned_model_checkpoint}/permissions:
get:
operationId: listFineTuningCheckpointPermissions
tags:
- Fine-tuning
summary: >
**NOTE:** This endpoint requires an [admin API key](../admin-api-keys).
Organization owners can use this endpoint to view all permissions for a
fine-tuned model checkpoint.
parameters:
- in: path
name: fine_tuned_model_checkpoint
required: true
schema:
type: string
example: ft:gpt-4o-mini-2024-07-18:org:weather:B7R9VjQd
description: |
The ID of the fine-tuned model checkpoint to get permissions for.
- name: project_id
in: query
description: The ID of the project to get permissions for.
required: false
schema:
type: string
- name: after
in: query
description: Identifier for the last permission ID from the previous pagination
request.
required: false
schema:
type: string
- name: limit
in: query
description: Number of permissions to retrieve.
required: false
schema:
type: integer
default: 10
- name: order
in: query
description: The order in which to retrieve permissions.
required: false
schema:
type: string
enum:
- ascending
- descending
default: descending
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ListFineTuningCheckpointPermissionResponse"
x-oaiMeta:
name: List checkpoint permissions
group: fine-tuning
returns: A list of fine-tuned model checkpoint [permission
objects](/docs/api-reference/fine-tuning/permission-object) for a
fine-tuned model checkpoint.
examples:
request:
curl: |
curl https://api.openai.com/v1/fine_tuning/checkpoints/ft:gpt-4o-mini-2024-07-18:org:weather:B7R9VjQd/permissions \
-H "Authorization: Bearer $OPENAI_API_KEY"
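# A sketch of the same call via the openai-python SDK; assumes the
# fine_tuning.checkpoints.permissions helpers are available — verify the
# method names against your installed SDK version.
python: |
from openai import OpenAI
client = OpenAI()
client.fine_tuning.checkpoints.permissions.retrieve(
fine_tuned_model_checkpoint="ft:gpt-4o-mini-2024-07-18:org:weather:B7R9VjQd"
)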
response: |
{
"object": "list",
"data": [
{
"object": "checkpoint.permission",
"id": "cp_zc4Q7MP6XxulcVzj4MZdwsAB",
"created_at": 1721764867,
"project_id": "proj_abGMw1llN8IrBb6SvvY5A1iH"
},
{
"object": "checkpoint.permission",
"id": "cp_enQCFmOTGj3syEpYVhBRLTSy",
"created_at": 1721764800,
"project_id": "proj_iqGMw1llN8IrBb6SvvY5A1oF"
}
],
"first_id": "cp_zc4Q7MP6XxulcVzj4MZdwsAB",
"last_id": "cp_enQCFmOTGj3syEpYVhBRLTSy",
"has_more": false
}
post:
operationId: createFineTuningCheckpointPermission
tags:
- Fine-tuning
summary: >
**NOTE:** Calling this endpoint requires an [admin API
key](../admin-api-keys).
This enables organization owners to share fine-tuned models with other
projects in their organization.
parameters:
- in: path
name: fine_tuned_model_checkpoint
required: true
schema:
type: string
example: ft:gpt-4o-mini-2024-07-18:org:weather:B7R9VjQd
description: >
The ID of the fine-tuned model checkpoint to create a permission for.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CreateFineTuningCheckpointPermissionRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ListFineTuningCheckpointPermissionResponse"
x-oaiMeta:
name: Create checkpoint permissions
group: fine-tuning
returns: A list of fine-tuned model checkpoint [permission
objects](/docs/api-reference/fine-tuning/permission-object) for a
fine-tuned model checkpoint.
examples:
request:
curl: |
curl https://api.openai.com/v1/fine_tuning/checkpoints/ft:gpt-4o-mini-2024-07-18:org:weather:B7R9VjQd/permissions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{"project_ids": ["proj_abGMw1llN8IrBb6SvvY5A1iH"]}'
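# Equivalent sketch with the openai-python SDK; assumes the
# fine_tuning.checkpoints.permissions.create helper exists — check your
# installed SDK version.
python: |
from openai import OpenAI
client = OpenAI()
client.fine_tuning.checkpoints.permissions.create(
fine_tuned_model_checkpoint="ft:gpt-4o-mini-2024-07-18:org:weather:B7R9VjQd",
project_ids=["proj_abGMw1llN8IrBb6SvvY5A1iH"]
)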
response: |
{
"object": "list",
"data": [
{
"object": "checkpoint.permission",
"id": "cp_zc4Q7MP6XxulcVzj4MZdwsAB",
"created_at": 1721764867,
"project_id": "proj_abGMw1llN8IrBb6SvvY5A1iH"
}
],
"first_id": "cp_zc4Q7MP6XxulcVzj4MZdwsAB",
"last_id": "cp_zc4Q7MP6XxulcVzj4MZdwsAB",
"has_more": false
}
/fine_tuning/checkpoints/{fine_tuned_model_checkpoint}/permissions/{permission_id}:
delete:
operationId: deleteFineTuningCheckpointPermission
tags:
- Fine-tuning
summary: >
**NOTE:** This endpoint requires an [admin API key](../admin-api-keys).
Organization owners can use this endpoint to delete a permission for a
fine-tuned model checkpoint.
parameters:
- in: path
name: fine_tuned_model_checkpoint
required: true
schema:
type: string
example: ft:gpt-4o-mini-2024-07-18:org:weather:B7R9VjQd
description: >
The ID of the fine-tuned model checkpoint to delete a permission for.
- in: path
name: permission_id
required: true
schema:
type: string
example: cp_zc4Q7MP6XxulcVzj4MZdwsAB
description: |
The ID of the fine-tuned model checkpoint permission to delete.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/DeleteFineTuningCheckpointPermissionResponse"
x-oaiMeta:
name: Delete checkpoint permission
group: fine-tuning
returns: The deletion status of the fine-tuned model checkpoint [permission
object](/docs/api-reference/fine-tuning/permission-object).
examples:
request:
curl: |
curl https://api.openai.com/v1/fine_tuning/checkpoints/ft:gpt-4o-mini-2024-07-18:org:weather:B7R9VjQd/permissions/cp_zc4Q7MP6XxulcVzj4MZdwsAB \
-H "Authorization: Bearer $OPENAI_API_KEY"
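# Equivalent sketch with the openai-python SDK; assumes the
# fine_tuning.checkpoints.permissions.delete helper exists — check your
# installed SDK version.
python: |
from openai import OpenAI
client = OpenAI()
client.fine_tuning.checkpoints.permissions.delete(
"cp_zc4Q7MP6XxulcVzj4MZdwsAB",
fine_tuned_model_checkpoint="ft:gpt-4o-mini-2024-07-18:org:weather:B7R9VjQd"
)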
response: |
{
"object": "checkpoint.permission",
"id": "cp_zc4Q7MP6XxulcVzj4MZdwsAB",
"deleted": true
}
/fine_tuning/jobs:
post:
operationId: createFineTuningJob
tags:
- Fine-tuning
summary: >
Creates a fine-tuning job which begins the process of creating a new
model from a given dataset.
The response includes details of the enqueued job, including the job status
and the name of the fine-tuned model once complete.
[Learn more about fine-tuning](/docs/guides/fine-tuning)
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CreateFineTuningJobRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/FineTuningJob"
x-oaiMeta:
name: Create fine-tuning job
group: fine-tuning
returns: A [fine-tuning.job](/docs/api-reference/fine-tuning/object) object.
examples:
- title: Default
request:
curl: |
curl https://api.openai.com/v1/fine_tuning/jobs \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"training_file": "file-BK7bzQj3FfZFXr7DbL6xJwfo",
"model": "gpt-4o-mini"
}'
python: |
from openai import OpenAI
client = OpenAI()
client.fine_tuning.jobs.create(
training_file="file-abc123",
model="gpt-4o-mini"
)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const fineTune = await openai.fineTuning.jobs.create({
training_file: "file-abc123"
});
console.log(fineTune);
}
main();
response: |
{
"object": "fine_tuning.job",
"id": "ftjob-abc123",
"model": "gpt-4o-mini-2024-07-18",
"created_at": 1721764800,
"fine_tuned_model": null,
"organization_id": "org-123",
"result_files": [],
"status": "queued",
"validation_file": null,
"training_file": "file-abc123",
"method": {
"type": "supervised",
"supervised": {
"hyperparameters": {
"batch_size": "auto",
"learning_rate_multiplier": "auto",
"n_epochs": "auto"
}
}
},
"metadata": null
}
- title: Epochs
request:
curl: |
curl https://api.openai.com/v1/fine_tuning/jobs \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"training_file": "file-abc123",
"model": "gpt-4o-mini",
"method": {
"type": "supervised",
"supervised": {
"hyperparameters": {
"n_epochs": 2
}
}
}
}'
python: |
from openai import OpenAI
client = OpenAI()
client.fine_tuning.jobs.create(
training_file="file-abc123",
model="gpt-4o-mini",
method={
"type": "supervised",
"supervised": {
"hyperparameters": {
"n_epochs": 2
}
}
}
)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const fineTune = await openai.fineTuning.jobs.create({
training_file: "file-abc123",
model: "gpt-4o-mini",
method: {
type: "supervised",
supervised: {
hyperparameters: {
n_epochs: 2
}
}
}
});
console.log(fineTune);
}
main();
response: |
{
"object": "fine_tuning.job",
"id": "ftjob-abc123",
"model": "gpt-4o-mini-2024-07-18",
"created_at": 1721764800,
"fine_tuned_model": null,
"organization_id": "org-123",
"result_files": [],
"status": "queued",
"validation_file": null,
"training_file": "file-abc123",
"hyperparameters": {"n_epochs": 2},
"method": {
"type": "supervised",
"supervised": {
"hyperparameters": {
"batch_size": "auto",
"learning_rate_multiplier": "auto",
"n_epochs": 2
}
}
},
"metadata": null
}
- title: Validation file
request:
curl: |
curl https://api.openai.com/v1/fine_tuning/jobs \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"training_file": "file-abc123",
"validation_file": "file-abc123",
"model": "gpt-4o-mini"
}'
python: |
from openai import OpenAI
client = OpenAI()
client.fine_tuning.jobs.create(
training_file="file-abc123",
validation_file="file-def456",
model="gpt-4o-mini"
)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const fineTune = await openai.fineTuning.jobs.create({
training_file: "file-abc123",
validation_file: "file-abc123"
});
console.log(fineTune);
}
main();
response: |
{
"object": "fine_tuning.job",
"id": "ftjob-abc123",
"model": "gpt-4o-mini-2024-07-18",
"created_at": 1721764800,
"fine_tuned_model": null,
"organization_id": "org-123",
"result_files": [],
"status": "queued",
"validation_file": "file-abc123",
"training_file": "file-abc123",
"method": {
"type": "supervised",
"supervised": {
"hyperparameters": {
"batch_size": "auto",
"learning_rate_multiplier": "auto",
"n_epochs": "auto"
}
}
},
"metadata": null
}
- title: DPO
request:
curl: |
curl https://api.openai.com/v1/fine_tuning/jobs \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"training_file": "file-abc123",
"validation_file": "file-abc123",
"model": "gpt-4o-mini",
"method": {
"type": "dpo",
"dpo": {
"hyperparameters": {
"beta": 0.1
}
}
}
}'
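python: |
from openai import OpenAI
client = OpenAI()
# Mirrors the curl example above using the jobs.create pattern shown in
# the earlier examples; the "dpo" method block is passed through unchanged.
client.fine_tuning.jobs.create(
training_file="file-abc123",
validation_file="file-abc123",
model="gpt-4o-mini",
method={
"type": "dpo",
"dpo": {
"hyperparameters": {"beta": 0.1}
}
}
)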
response: |
{
"object": "fine_tuning.job",
"id": "ftjob-abc123",
"model": "gpt-4o-mini-2024-07-18",
"created_at": 1721764800,
"fine_tuned_model": null,
"organization_id": "org-123",
"result_files": [],
"status": "queued",
"validation_file": "file-abc123",
"training_file": "file-abc123",
"method": {
"type": "dpo",
"dpo": {
"hyperparameters": {
"beta": 0.1,
"batch_size": "auto",
"learning_rate_multiplier": "auto",
"n_epochs": "auto"
}
}
},
"metadata": null
}
- title: W&B Integration
request:
curl: |
curl https://api.openai.com/v1/fine_tuning/jobs \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"training_file": "file-abc123",
"validation_file": "file-abc123",
"model": "gpt-4o-mini",
"integrations": [
{
"type": "wandb",
"wandb": {
"project": "my-wandb-project",
"name": "ft-run-display-name",
"tags": [
"first-experiment", "v2"
]
}
}
]
}'
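python: |
from openai import OpenAI
client = OpenAI()
# Mirrors the curl example above; integrations is passed as a list of
# dicts, following the jobs.create pattern used in the other examples.
client.fine_tuning.jobs.create(
training_file="file-abc123",
validation_file="file-abc123",
model="gpt-4o-mini",
integrations=[
{
"type": "wandb",
"wandb": {
"project": "my-wandb-project",
"name": "ft-run-display-name",
"tags": ["first-experiment", "v2"]
}
}
]
)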
response: |
{
"object": "fine_tuning.job",
"id": "ftjob-abc123",
"model": "gpt-4o-mini-2024-07-18",
"created_at": 1721764800,
"fine_tuned_model": null,
"organization_id": "org-123",
"result_files": [],
"status": "queued",
"validation_file": "file-abc123",
"training_file": "file-abc123",
"integrations": [
{
"type": "wandb",
"wandb": {
"project": "my-wandb-project",
"entity": null,
"run_id": "ftjob-abc123"
}
}
],
"method": {
"type": "supervised",
"supervised": {
"hyperparameters": {
"batch_size": "auto",
"learning_rate_multiplier": "auto",
"n_epochs": "auto"
}
}
},
"metadata": null
}
get:
operationId: listPaginatedFineTuningJobs
tags:
- Fine-tuning
summary: |
List your organization's fine-tuning jobs.
parameters:
- name: after
in: query
description: Identifier for the last job from the previous pagination request.
required: false
schema:
type: string
- name: limit
in: query
description: Number of fine-tuning jobs to retrieve.
required: false
schema:
type: integer
default: 20
- in: query
name: metadata
required: false
schema:
type: object
nullable: true
additionalProperties:
type: string
style: deepObject
explode: true
description: >
Optional metadata filter. To filter, use the syntax `metadata[k]=v`.
Alternatively, set `metadata=null` to indicate no metadata.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ListPaginatedFineTuningJobsResponse"
x-oaiMeta:
name: List fine-tuning jobs
group: fine-tuning
returns: A list of paginated [fine-tuning
job](/docs/api-reference/fine-tuning/object) objects.
examples:
request:
curl: |
            curl "https://api.openai.com/v1/fine_tuning/jobs?limit=2&metadata[key]=value" \
-H "Authorization: Bearer $OPENAI_API_KEY"
python: |
from openai import OpenAI
client = OpenAI()
client.fine_tuning.jobs.list()
node.js: |-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const list = await openai.fineTuning.jobs.list();
for await (const fineTune of list) {
console.log(fineTune);
}
}
main();
response: |
{
"object": "list",
"data": [
{
"object": "fine_tuning.job",
"id": "ftjob-abc123",
"model": "gpt-4o-mini-2024-07-18",
"created_at": 1721764800,
"fine_tuned_model": null,
"organization_id": "org-123",
"result_files": [],
"status": "queued",
"validation_file": null,
"training_file": "file-abc123",
"metadata": {
"key": "value"
}
},
                { ... },
                { ... }
              ],
              "has_more": true
}
/fine_tuning/jobs/{fine_tuning_job_id}:
get:
operationId: retrieveFineTuningJob
tags:
- Fine-tuning
summary: |
Get info about a fine-tuning job.
[Learn more about fine-tuning](/docs/guides/fine-tuning)
parameters:
- in: path
name: fine_tuning_job_id
required: true
schema:
type: string
example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
description: |
The ID of the fine-tuning job.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/FineTuningJob"
x-oaiMeta:
name: Retrieve fine-tuning job
group: fine-tuning
returns: The [fine-tuning](/docs/api-reference/fine-tuning/object) object with
the given ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/fine_tuning/jobs/ft-AF1WoRqd3aJAHsqc9NY7iL8F \
-H "Authorization: Bearer $OPENAI_API_KEY"
python: |
from openai import OpenAI
client = OpenAI()
client.fine_tuning.jobs.retrieve("ftjob-abc123")
node.js: >
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const fineTune = await openai.fineTuning.jobs.retrieve("ftjob-abc123");
console.log(fineTune);
}
main();
response: >
{
"object": "fine_tuning.job",
"id": "ftjob-abc123",
"model": "davinci-002",
"created_at": 1692661014,
"finished_at": 1692661190,
"fine_tuned_model": "ft:davinci-002:my-org:custom_suffix:7q8mpxmy",
"organization_id": "org-123",
"result_files": [
"file-abc123"
],
"status": "succeeded",
"validation_file": null,
"training_file": "file-abc123",
"hyperparameters": {
"n_epochs": 4,
"batch_size": 1,
"learning_rate_multiplier": 1.0
},
"trained_tokens": 5768,
"integrations": [],
"seed": 0,
"estimated_finish": 0,
"method": {
"type": "supervised",
"supervised": {
"hyperparameters": {
"n_epochs": 4,
"batch_size": 1,
"learning_rate_multiplier": 1.0
}
}
}
}
/fine_tuning/jobs/{fine_tuning_job_id}/cancel:
post:
operationId: cancelFineTuningJob
tags:
- Fine-tuning
summary: |
Immediately cancel a fine-tune job.
parameters:
- in: path
name: fine_tuning_job_id
required: true
schema:
type: string
example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
description: |
The ID of the fine-tuning job to cancel.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/FineTuningJob"
x-oaiMeta:
name: Cancel fine-tuning
group: fine-tuning
returns: The cancelled [fine-tuning](/docs/api-reference/fine-tuning/object)
object.
examples:
request:
curl: >
curl -X POST
https://api.openai.com/v1/fine_tuning/jobs/ftjob-abc123/cancel \
-H "Authorization: Bearer $OPENAI_API_KEY"
python: |
from openai import OpenAI
client = OpenAI()
client.fine_tuning.jobs.cancel("ftjob-abc123")
node.js: >-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const fineTune = await openai.fineTuning.jobs.cancel("ftjob-abc123");
console.log(fineTune);
}
main();
response: |
{
"object": "fine_tuning.job",
"id": "ftjob-abc123",
"model": "gpt-4o-mini-2024-07-18",
"created_at": 1721764800,
"fine_tuned_model": null,
"organization_id": "org-123",
"result_files": [],
"status": "cancelled",
"validation_file": "file-abc123",
"training_file": "file-abc123"
}
/fine_tuning/jobs/{fine_tuning_job_id}/checkpoints:
get:
operationId: listFineTuningJobCheckpoints
tags:
- Fine-tuning
summary: |
List checkpoints for a fine-tuning job.
parameters:
- in: path
name: fine_tuning_job_id
required: true
schema:
type: string
example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
description: |
The ID of the fine-tuning job to get checkpoints for.
- name: after
in: query
description: Identifier for the last checkpoint ID from the previous pagination
request.
required: false
schema:
type: string
- name: limit
in: query
description: Number of checkpoints to retrieve.
required: false
schema:
type: integer
default: 10
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ListFineTuningJobCheckpointsResponse"
x-oaiMeta:
name: List fine-tuning checkpoints
group: fine-tuning
returns: A list of fine-tuning [checkpoint
objects](/docs/api-reference/fine-tuning/checkpoint-object) for a
fine-tuning job.
examples:
request:
curl: |
curl https://api.openai.com/v1/fine_tuning/jobs/ftjob-abc123/checkpoints \
-H "Authorization: Bearer $OPENAI_API_KEY"
response: >
{
"object": "list"
"data": [
{
"object": "fine_tuning.job.checkpoint",
"id": "ftckpt_zc4Q7MP6XxulcVzj4MZdwsAB",
"created_at": 1721764867,
"fine_tuned_model_checkpoint": "ft:gpt-4o-mini-2024-07-18:my-org:custom-suffix:96olL566:ckpt-step-2000",
"metrics": {
"full_valid_loss": 0.134,
"full_valid_mean_token_accuracy": 0.874
},
"fine_tuning_job_id": "ftjob-abc123",
"step_number": 2000,
},
{
"object": "fine_tuning.job.checkpoint",
"id": "ftckpt_enQCFmOTGj3syEpYVhBRLTSy",
"created_at": 1721764800,
"fine_tuned_model_checkpoint": "ft:gpt-4o-mini-2024-07-18:my-org:custom-suffix:7q8mpxmy:ckpt-step-1000",
"metrics": {
"full_valid_loss": 0.167,
"full_valid_mean_token_accuracy": 0.781
},
"fine_tuning_job_id": "ftjob-abc123",
"step_number": 1000,
},
],
"first_id": "ftckpt_zc4Q7MP6XxulcVzj4MZdwsAB",
"last_id": "ftckpt_enQCFmOTGj3syEpYVhBRLTSy",
"has_more": true
}
/fine_tuning/jobs/{fine_tuning_job_id}/events:
get:
operationId: listFineTuningEvents
tags:
- Fine-tuning
summary: |
Get status updates for a fine-tuning job.
parameters:
- in: path
name: fine_tuning_job_id
required: true
schema:
type: string
example: ft-AF1WoRqd3aJAHsqc9NY7iL8F
description: |
The ID of the fine-tuning job to get events for.
- name: after
in: query
description: Identifier for the last event from the previous pagination request.
required: false
schema:
type: string
- name: limit
in: query
description: Number of events to retrieve.
required: false
schema:
type: integer
default: 20
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ListFineTuningJobEventsResponse"
x-oaiMeta:
name: List fine-tuning events
group: fine-tuning
returns: A list of fine-tuning event objects.
examples:
request:
curl: >
curl
https://api.openai.com/v1/fine_tuning/jobs/ftjob-abc123/events \
-H "Authorization: Bearer $OPENAI_API_KEY"
python: |
from openai import OpenAI
client = OpenAI()
client.fine_tuning.jobs.list_events(
fine_tuning_job_id="ftjob-abc123",
limit=2
)
node.js: >-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
                const list = await openai.fineTuning.jobs.listEvents("ftjob-abc123", { limit: 2 });
for await (const fineTune of list) {
console.log(fineTune);
}
}
main();
response: >
{
"object": "list",
"data": [
{
"object": "fine_tuning.job.event",
"id": "ft-event-ddTJfwuMVpfLXseO0Am0Gqjm",
"created_at": 1721764800,
"level": "info",
"message": "Fine tuning job successfully completed",
"data": null,
"type": "message"
},
{
"object": "fine_tuning.job.event",
"id": "ft-event-tyiGuB72evQncpH87xe505Sv",
"created_at": 1721764800,
"level": "info",
"message": "New fine-tuned model created: ft:gpt-4o-mini:openai::7p4lURel",
"data": null,
"type": "message"
}
],
"has_more": true
}
/images/edits:
post:
operationId: createImageEdit
tags:
- Images
summary: Creates an edited or extended image given one or more source images and
a prompt. This endpoint only supports `gpt-image-1` and `dall-e-2`.
requestBody:
required: true
content:
multipart/form-data:
schema:
$ref: "#/components/schemas/CreateImageEditRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ImagesResponse"
x-oaiMeta:
name: Create image edit
group: images
returns: Returns a list of [image](/docs/api-reference/images/object) objects.
examples:
request:
curl: >
curl -s -D >(grep -i x-request-id >&2) \
-o >(jq -r '.data[0].b64_json' | base64 --decode > gift-basket.png) \
-X POST "https://api.openai.com/v1/images/edits" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-F "model=gpt-image-1" \
-F "image[][email protected]" \
-F "image[][email protected]" \
-F "image[][email protected]" \
-F "image[][email protected]" \
-F 'prompt=Create a lovely gift basket with these four items in it'
python: >
import base64
from openai import OpenAI
client = OpenAI()
prompt = """
Generate a photorealistic image of a gift basket on a white
background
labeled 'Relax & Unwind' with a ribbon and handwriting-like font,
containing all the items in the reference pictures.
"""
result = client.images.edit(
model="gpt-image-1",
image=[
open("body-lotion.png", "rb"),
open("bath-bomb.png", "rb"),
open("incense-kit.png", "rb"),
open("soap.png", "rb"),
],
prompt=prompt
)
image_base64 = result.data[0].b64_json
image_bytes = base64.b64decode(image_base64)
# Save the image to a file
with open("gift-basket.png", "wb") as f:
f.write(image_bytes)
node.js: >
import fs from "fs";
import OpenAI, { toFile } from "openai";
const client = new OpenAI();
const imageFiles = [
"bath-bomb.png",
"body-lotion.png",
"incense-kit.png",
"soap.png",
];
const images = await Promise.all(
imageFiles.map(async (file) =>
await toFile(fs.createReadStream(file), null, {
type: "image/png",
})
),
);
const rsp = await client.images.edit({
model: "gpt-image-1",
image: images,
prompt: "Create a lovely gift basket with these four items in it",
});
// Save the image to a file
const image_base64 = rsp.data[0].b64_json;
const image_bytes = Buffer.from(image_base64, "base64");
fs.writeFileSync("basket.png", image_bytes);
response: |
{
"created": 1713833628,
"data": [
{
"b64_json": "..."
}
],
"usage": {
"total_tokens": 100,
"input_tokens": 50,
"output_tokens": 50,
"input_tokens_details": {
"text_tokens": 10,
"image_tokens": 40
}
}
}
/images/generations:
post:
operationId: createImage
tags:
- Images
summary: |
Creates an image given a prompt. [Learn more](/docs/guides/images).
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CreateImageRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ImagesResponse"
x-oaiMeta:
name: Create image
group: images
returns: Returns a list of [image](/docs/api-reference/images/object) objects.
examples:
request:
curl: |
curl https://api.openai.com/v1/images/generations \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-image-1",
"prompt": "A cute baby sea otter",
"n": 1,
"size": "1024x1024"
}'
python: |
import base64
from openai import OpenAI
client = OpenAI()
img = client.images.generate(
model="gpt-image-1",
prompt="A cute baby sea otter",
n=1,
size="1024x1024"
)
image_bytes = base64.b64decode(img.data[0].b64_json)
with open("output.png", "wb") as f:
f.write(image_bytes)
node.js: |
import OpenAI from "openai";
import { writeFile } from "fs/promises";
const client = new OpenAI();
const img = await client.images.generate({
model: "gpt-image-1",
prompt: "A cute baby sea otter",
n: 1,
size: "1024x1024"
});
const imageBuffer = Buffer.from(img.data[0].b64_json, "base64");
await writeFile("output.png", imageBuffer);
response: |
{
"created": 1713833628,
"data": [
{
"b64_json": "..."
}
],
"usage": {
"total_tokens": 100,
"input_tokens": 50,
"output_tokens": 50,
"input_tokens_details": {
"text_tokens": 10,
"image_tokens": 40
}
}
}
/images/variations:
post:
operationId: createImageVariation
tags:
- Images
summary: Creates a variation of a given image. This endpoint only supports
`dall-e-2`.
requestBody:
required: true
content:
multipart/form-data:
schema:
$ref: "#/components/schemas/CreateImageVariationRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ImagesResponse"
x-oaiMeta:
name: Create image variation
group: images
returns: Returns a list of [image](/docs/api-reference/images/object) objects.
examples:
request:
curl: |
curl https://api.openai.com/v1/images/variations \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-F image="@otter.png" \
-F n=2 \
-F size="1024x1024"
python: |
from openai import OpenAI
client = OpenAI()
response = client.images.create_variation(
image=open("image_edit_original.png", "rb"),
n=2,
size="1024x1024"
)
node.js: |-
import fs from "fs";
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const image = await openai.images.createVariation({
image: fs.createReadStream("otter.png"),
});
console.log(image.data);
}
main();
csharp: >
using System;
using OpenAI.Images;
ImageClient client = new(
model: "dall-e-2",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
GeneratedImage image =
client.GenerateImageVariation(imageFilePath: "otter.png");
Console.WriteLine(image.ImageUri);
response: |
{
"created": 1589478378,
"data": [
{
"url": "https://..."
},
{
"url": "https://..."
}
]
}
/models:
get:
operationId: listModels
tags:
- Models
summary: Lists the currently available models, and provides basic information
about each one such as the owner and availability.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ListModelsResponse"
x-oaiMeta:
name: List models
group: models
returns: A list of [model](/docs/api-reference/models/object) objects.
examples:
request:
curl: |
curl https://api.openai.com/v1/models \
-H "Authorization: Bearer $OPENAI_API_KEY"
python: |
from openai import OpenAI
client = OpenAI()
client.models.list()
node.js: |-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const list = await openai.models.list();
for await (const model of list) {
console.log(model);
}
}
main();
csharp: |
using System;
using OpenAI.Models;
OpenAIModelClient client = new(
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
foreach (var model in client.GetModels().Value)
{
Console.WriteLine(model.Id);
}
response: |
{
"object": "list",
"data": [
{
"id": "model-id-0",
"object": "model",
"created": 1686935002,
"owned_by": "organization-owner"
},
{
"id": "model-id-1",
"object": "model",
"created": 1686935002,
"owned_by": "organization-owner",
},
{
"id": "model-id-2",
"object": "model",
"created": 1686935002,
"owned_by": "openai"
                }
              ]
}
/models/{model}:
get:
operationId: retrieveModel
tags:
- Models
summary: Retrieves a model instance, providing basic information about the model
such as the owner and permissioning.
parameters:
- in: path
name: model
required: true
schema:
type: string
example: gpt-4o-mini
description: The ID of the model to use for this request
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/Model"
x-oaiMeta:
name: Retrieve model
group: models
returns: The [model](/docs/api-reference/models/object) object matching the
specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/models/VAR_chat_model_id \
-H "Authorization: Bearer $OPENAI_API_KEY"
python: |
from openai import OpenAI
client = OpenAI()
client.models.retrieve("VAR_chat_model_id")
node.js: |-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const model = await openai.models.retrieve("VAR_chat_model_id");
console.log(model);
}
main();
csharp: |
using System;
using System.ClientModel;
using OpenAI.Models;
OpenAIModelClient client = new(
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
ClientResult<OpenAIModel> model = client.GetModel("babbage-002");
Console.WriteLine(model.Value.Id);
response: |
{
"id": "VAR_chat_model_id",
"object": "model",
"created": 1686935002,
"owned_by": "openai"
}
delete:
operationId: deleteModel
tags:
- Models
summary: Delete a fine-tuned model. You must have the Owner role in your
organization to delete a model.
parameters:
- in: path
name: model
required: true
schema:
type: string
example: ft:gpt-4o-mini:acemeco:suffix:abc123
description: The model to delete
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/DeleteModelResponse"
x-oaiMeta:
name: Delete a fine-tuned model
group: models
returns: Deletion status.
examples:
request:
curl: |
curl https://api.openai.com/v1/models/ft:gpt-4o-mini:acemeco:suffix:abc123 \
-X DELETE \
-H "Authorization: Bearer $OPENAI_API_KEY"
python: |
from openai import OpenAI
client = OpenAI()
client.models.delete("ft:gpt-4o-mini:acemeco:suffix:abc123")
node.js: >-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const model = await openai.models.del("ft:gpt-4o-mini:acemeco:suffix:abc123");
console.log(model);
}
main();
csharp: >
using System;
using System.ClientModel;
using OpenAI.Models;
OpenAIModelClient client = new(
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
ClientResult success =
client.DeleteModel("ft:gpt-4o-mini:acemeco:suffix:abc123");
Console.WriteLine(success);
response: |
{
"id": "ft:gpt-4o-mini:acemeco:suffix:abc123",
"object": "model",
"deleted": true
}
/moderations:
post:
operationId: createModeration
tags:
- Moderations
summary: |
Classifies if text and/or image inputs are potentially harmful. Learn
more in the [moderation guide](/docs/guides/moderation).
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CreateModerationRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/CreateModerationResponse"
x-oaiMeta:
name: Create moderation
group: moderations
returns: A [moderation](/docs/api-reference/moderations/object) object.
examples:
- title: Single string
request:
curl: |
curl https://api.openai.com/v1/moderations \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"input": "I want to kill them."
}'
python: >
from openai import OpenAI
client = OpenAI()
moderation = client.moderations.create(input="I want to kill
them.")
print(moderation)
node.js: >
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const moderation = await openai.moderations.create({ input: "I want to kill them." });
console.log(moderation);
}
main();
csharp: >
using System;
using System.ClientModel;
using OpenAI.Moderations;
ModerationClient client = new(
model: "omni-moderation-latest",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
ClientResult<ModerationResult> moderation =
client.ClassifyText("I want to kill them.");
response: |
{
"id": "modr-AB8CjOTu2jiq12hp1AQPfeqFWaORR",
"model": "text-moderation-007",
"results": [
{
"flagged": true,
"categories": {
"sexual": false,
"hate": false,
"harassment": true,
"self-harm": false,
"sexual/minors": false,
"hate/threatening": false,
"violence/graphic": false,
"self-harm/intent": false,
"self-harm/instructions": false,
"harassment/threatening": true,
"violence": true
},
"category_scores": {
"sexual": 0.000011726012417057063,
"hate": 0.22706663608551025,
"harassment": 0.5215635299682617,
"self-harm": 2.227119921371923e-6,
"sexual/minors": 7.107352217872176e-8,
"hate/threatening": 0.023547329008579254,
"violence/graphic": 0.00003391829886822961,
"self-harm/intent": 1.646940972932498e-6,
"self-harm/instructions": 1.1198755256458526e-9,
"harassment/threatening": 0.5694745779037476,
"violence": 0.9971134662628174
}
}
]
}
- title: Image and text
request:
curl: >
curl https://api.openai.com/v1/moderations \
-X POST \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "omni-moderation-latest",
"input": [
{ "type": "text", "text": "...text to classify goes here..." },
{
"type": "image_url",
"image_url": {
"url": "https://example.com/image.png"
}
}
]
}'
python: >
from openai import OpenAI
client = OpenAI()
response = client.moderations.create(
model="omni-moderation-latest",
input=[
{"type": "text", "text": "...text to classify goes here..."},
{
"type": "image_url",
"image_url": {
"url": "https://example.com/image.png",
# can also use base64 encoded image URLs
# "url": "data:image/jpeg;base64,abcdefg..."
}
},
],
)
print(response)
node.js: >
import OpenAI from "openai";
const openai = new OpenAI();
const moderation = await openai.moderations.create({
model: "omni-moderation-latest",
input: [
{ type: "text", text: "...text to classify goes here..." },
{
type: "image_url",
image_url: {
url: "https://example.com/image.png"
// can also use base64 encoded image URLs
// url: "data:image/jpeg;base64,abcdefg..."
}
}
],
});
console.log(moderation);
response: |
{
"id": "modr-0d9740456c391e43c445bf0f010940c7",
"model": "omni-moderation-latest",
"results": [
{
"flagged": true,
"categories": {
"harassment": true,
"harassment/threatening": true,
"sexual": false,
"hate": false,
"hate/threatening": false,
"illicit": false,
"illicit/violent": false,
"self-harm/intent": false,
"self-harm/instructions": false,
"self-harm": false,
"sexual/minors": false,
"violence": true,
"violence/graphic": true
},
"category_scores": {
"harassment": 0.8189693396524255,
"harassment/threatening": 0.804985420696006,
"sexual": 1.573112165348997e-6,
"hate": 0.007562942636942845,
"hate/threatening": 0.004208854591835476,
"illicit": 0.030535955153511665,
"illicit/violent": 0.008925306722380033,
"self-harm/intent": 0.00023023930975076432,
"self-harm/instructions": 0.0002293869201073356,
"self-harm": 0.012598046106750154,
"sexual/minors": 2.212566909570261e-8,
"violence": 0.9999992735124786,
"violence/graphic": 0.843064871157054
},
"category_applied_input_types": {
"harassment": [
"text"
],
"harassment/threatening": [
"text"
],
"sexual": [
"text",
"image"
],
"hate": [
"text"
],
"hate/threatening": [
"text"
],
"illicit": [
"text"
],
"illicit/violent": [
"text"
],
"self-harm/intent": [
"text",
"image"
],
"self-harm/instructions": [
"text",
"image"
],
"self-harm": [
"text",
"image"
],
"sexual/minors": [
"text"
],
"violence": [
"text",
"image"
],
"violence/graphic": [
"text",
"image"
]
}
}
]
}
/organization/admin_api_keys:
get:
summary: List organization API keys
operationId: admin-api-keys-list
description: Retrieve a paginated list of organization admin API keys.
parameters:
- in: query
name: after
required: false
schema:
type: string
nullable: true
description: Return keys with IDs that come after this ID in the pagination
order.
- in: query
name: order
required: false
schema:
type: string
enum:
- asc
- desc
default: asc
description: Order results by creation time, ascending or descending.
- in: query
name: limit
required: false
schema:
type: integer
default: 20
description: Maximum number of keys to return.
responses:
"200":
description: A list of organization API keys.
content:
application/json:
schema:
$ref: "#/components/schemas/ApiKeyList"
x-oaiMeta:
        name: List admin API keys
        group: administration
        returns: A list of admin API key objects.
examples:
request:
curl: |
curl https://api.openai.com/v1/organization/admin_api_keys?after=key_abc&limit=20 \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "list",
"data": [
{
"object": "organization.admin_api_key",
"id": "key_abc",
"name": "Main Admin Key",
"redacted_value": "sk-admin...def",
"created_at": 1711471533,
"last_used_at": 1711471534,
"owner": {
"type": "service_account",
"object": "organization.service_account",
"id": "sa_456",
"name": "My Service Account",
"created_at": 1711471533,
"role": "member"
}
}
],
"first_id": "key_abc",
"last_id": "key_abc",
"has_more": false
}
post:
summary: Create an organization admin API key
operationId: admin-api-keys-create
description: Create a new admin-level API key for the organization.
requestBody:
required: true
content:
application/json:
schema:
type: object
required:
- name
properties:
name:
type: string
example: New Admin Key
responses:
"200":
description: The newly created admin API key.
content:
application/json:
schema:
$ref: "#/components/schemas/AdminApiKey"
x-oaiMeta:
name: Create admin API key
group: administration
returns: The created [AdminApiKey](/docs/api-reference/admin-api-keys/object)
object.
examples:
request:
curl: >
curl -X POST https://api.openai.com/v1/organization/admin_api_keys
\
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "New Admin Key"
}'
response: |
{
"object": "organization.admin_api_key",
"id": "key_xyz",
"name": "New Admin Key",
"redacted_value": "sk-admin...xyz",
"created_at": 1711471533,
"last_used_at": 1711471534,
"owner": {
"type": "user",
"object": "organization.user",
"id": "user_123",
"name": "John Doe",
"created_at": 1711471533,
"role": "owner"
},
"value": "sk-admin-1234abcd"
}
/organization/admin_api_keys/{key_id}:
get:
summary: Retrieve a single organization API key
operationId: admin-api-keys-get
description: Get details for a specific organization API key by its ID.
parameters:
- in: path
name: key_id
required: true
schema:
type: string
description: The ID of the API key.
responses:
"200":
description: Details of the requested API key.
content:
application/json:
schema:
$ref: "#/components/schemas/AdminApiKey"
x-oaiMeta:
name: Retrieve admin API key
group: administration
returns: The requested [AdminApiKey](/docs/api-reference/admin-api-keys/object)
object.
examples:
request:
curl: >
curl https://api.openai.com/v1/organization/admin_api_keys/key_abc
\
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "organization.admin_api_key",
"id": "key_abc",
"name": "Main Admin Key",
"redacted_value": "sk-admin...xyz",
"created_at": 1711471533,
"last_used_at": 1711471534,
"owner": {
"type": "user",
"object": "organization.user",
"id": "user_123",
"name": "John Doe",
"created_at": 1711471533,
"role": "owner"
}
}
delete:
summary: Delete an organization admin API key
operationId: admin-api-keys-delete
description: Delete the specified admin API key.
parameters:
- in: path
name: key_id
required: true
schema:
type: string
description: The ID of the API key to be deleted.
responses:
"200":
description: Confirmation that the API key was deleted.
content:
application/json:
schema:
type: object
properties:
id:
type: string
example: key_abc
object:
type: string
example: organization.admin_api_key.deleted
deleted:
type: boolean
example: true
x-oaiMeta:
name: Delete admin API key
group: administration
returns: A confirmation object indicating the key was deleted.
examples:
request:
curl: >
curl -X DELETE
https://api.openai.com/v1/organization/admin_api_keys/key_abc \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"id": "key_abc",
"object": "organization.admin_api_key.deleted",
"deleted": true
}
/organization/audit_logs:
get:
summary: List user actions and configuration changes within this organization.
operationId: list-audit-logs
tags:
- Audit Logs
parameters:
- name: effective_at
in: query
description: Return only events whose `effective_at` (Unix seconds) is in this
range.
required: false
schema:
type: object
properties:
gt:
type: integer
description: Return only events whose `effective_at` (Unix seconds) is greater
than this value.
gte:
type: integer
description: Return only events whose `effective_at` (Unix seconds) is greater
than or equal to this value.
lt:
type: integer
description: Return only events whose `effective_at` (Unix seconds) is less than
this value.
lte:
type: integer
description: Return only events whose `effective_at` (Unix seconds) is less than
or equal to this value.
- name: project_ids[]
in: query
description: Return only events for these projects.
required: false
schema:
type: array
items:
type: string
- name: event_types[]
in: query
description: Return only events with a `type` in one of these values. For
example, `project.created`. For all options, see the documentation
for the [audit log object](/docs/api-reference/audit-logs/object).
required: false
schema:
type: array
items:
$ref: "#/components/schemas/AuditLogEventType"
- name: actor_ids[]
in: query
        description: Return only events performed by these actors. Can be a user ID, a
          service account ID, or an API key tracking ID.
required: false
schema:
type: array
items:
type: string
- name: actor_emails[]
in: query
description: Return only events performed by users with these emails.
required: false
schema:
type: array
items:
type: string
- name: resource_ids[]
in: query
        description: Return only events performed on these targets. For example, a
          project ID that was updated.
required: false
schema:
type: array
items:
type: string
- name: limit
in: query
description: >
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
required: false
schema:
type: integer
default: 20
- name: after
in: query
description: >
A cursor for use in pagination. `after` is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
schema:
type: string
- name: before
in: query
description: >
A cursor for use in pagination. `before` is an object ID that
defines your place in the list. For instance, if you make a list
request and receive 100 objects, starting with obj_foo, your
subsequent call can include before=obj_foo in order to fetch the
previous page of the list.
schema:
type: string
responses:
"200":
description: Audit logs listed successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ListAuditLogsResponse"
x-oaiMeta:
name: List audit logs
group: audit-logs
returns: A list of paginated [Audit Log](/docs/api-reference/audit-logs/object)
objects.
examples:
request:
curl: |
curl https://api.openai.com/v1/organization/audit_logs \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: >
{
"object": "list",
"data": [
{
"id": "audit_log-xxx_yyyymmdd",
"type": "project.archived",
"effective_at": 1722461446,
"actor": {
"type": "api_key",
"api_key": {
"type": "user",
"user": {
"id": "user-xxx",
"email": "[email protected]"
}
}
},
"project.archived": {
"id": "proj_abc"
                }
},
{
"id": "audit_log-yyy__20240101",
"type": "api_key.updated",
"effective_at": 1720804190,
"actor": {
"type": "session",
"session": {
"user": {
"id": "user-xxx",
"email": "[email protected]"
},
"ip_address": "127.0.0.1",
"user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
"ja3": "a497151ce4338a12c4418c44d375173e",
"ja4": "q13d0313h3_55b375c5d22e_c7319ce65786",
"ip_address_details": {
"country": "US",
"city": "San Francisco",
"region": "California",
"region_code": "CA",
"asn": "1234",
"latitude": "37.77490",
"longitude": "-122.41940"
}
}
},
"api_key.updated": {
"id": "key_xxxx",
"data": {
"scopes": ["resource_2.operation_2"]
}
                }
}
],
"first_id": "audit_log-xxx__20240101",
            "last_id": "audit_log-yyy__20240101",
"has_more": true
}
/organization/certificates:
get:
summary: List uploaded certificates for this organization.
operationId: listOrganizationCertificates
tags:
- Certificates
parameters:
- name: limit
in: query
description: >
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
required: false
schema:
type: integer
default: 20
- name: after
in: query
description: >
A cursor for use in pagination. `after` is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
required: false
schema:
type: string
- name: order
in: query
description: >
Sort order by the `created_at` timestamp of the objects. `asc` for
ascending order and `desc` for descending order.
schema:
type: string
default: desc
enum:
- asc
- desc
responses:
"200":
description: Certificates listed successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ListCertificatesResponse"
x-oaiMeta:
name: List organization certificates
group: administration
returns: A list of [Certificate](/docs/api-reference/certificates/object)
objects.
examples:
request:
curl: |
curl https://api.openai.com/v1/organization/certificates \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY"
response: |
{
"object": "list",
"data": [
{
"object": "organization.certificate",
"id": "cert_abc",
"name": "My Example Certificate",
"active": true,
"created_at": 1234567,
"certificate_details": {
"valid_at": 12345667,
"expires_at": 12345678
}
              }
],
"first_id": "cert_abc",
"last_id": "cert_abc",
"has_more": false
}
post:
summary: >
Upload a certificate to the organization. This does **not**
automatically activate the certificate.
Organizations can upload up to 50 certificates.
operationId: uploadCertificate
tags:
- Certificates
requestBody:
description: The certificate upload payload.
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/UploadCertificateRequest"
responses:
"200":
description: Certificate uploaded successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/Certificate"
x-oaiMeta:
name: Upload certificate
group: administration
returns: A single [Certificate](/docs/api-reference/certificates/object) object.
examples:
request:
curl: >
curl -X POST https://api.openai.com/v1/organization/certificates \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "My Example Certificate",
"certificate": "-----BEGIN CERTIFICATE-----\\nMIIDeT...\\n-----END CERTIFICATE-----"
}'
response: |
{
"object": "certificate",
"id": "cert_abc",
"name": "My Example Certificate",
"created_at": 1234567,
"certificate_details": {
"valid_at": 12345667,
"expires_at": 12345678
}
}
/organization/certificates/activate:
post:
summary: >
Activate certificates at the organization level.
You can atomically and idempotently activate up to 10 certificates at a
time.
operationId: activateOrganizationCertificates
tags:
- Certificates
requestBody:
description: The certificate activation payload.
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/ToggleCertificatesRequest"
responses:
"200":
description: Certificates activated successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ListCertificatesResponse"
x-oaiMeta:
name: Activate certificates for organization
group: administration
returns: A list of [Certificate](/docs/api-reference/certificates/object)
objects that were activated.
examples:
request:
curl: >
            curl https://api.openai.com/v1/organization/certificates/activate \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json" \
-d '{
"data": ["cert_abc", "cert_def"]
}'
response: |
{
"object": "organization.certificate.activation",
"data": [
{
"object": "organization.certificate",
"id": "cert_abc",
"name": "My Example Certificate",
"active": true,
"created_at": 1234567,
"certificate_details": {
"valid_at": 12345667,
"expires_at": 12345678
}
},
{
"object": "organization.certificate",
"id": "cert_def",
"name": "My Example Certificate 2",
"active": true,
"created_at": 1234567,
"certificate_details": {
"valid_at": 12345667,
"expires_at": 12345678
}
              }
            ]
}
/organization/certificates/deactivate:
post:
summary: >
Deactivate certificates at the organization level.
You can atomically and idempotently deactivate up to 10 certificates at
a time.
operationId: deactivateOrganizationCertificates
tags:
- Certificates
requestBody:
description: The certificate deactivation payload.
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/ToggleCertificatesRequest"
responses:
"200":
description: Certificates deactivated successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ListCertificatesResponse"
x-oaiMeta:
name: Deactivate certificates for organization
group: administration
returns: A list of [Certificate](/docs/api-reference/certificates/object)
objects that were deactivated.
examples:
request:
curl: >
            curl https://api.openai.com/v1/organization/certificates/deactivate \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json" \
-d '{
"data": ["cert_abc", "cert_def"]
}'
response: |
{
"object": "organization.certificate.deactivation",
"data": [
{
"object": "organization.certificate",
"id": "cert_abc",
"name": "My Example Certificate",
"active": false,
"created_at": 1234567,
"certificate_details": {
"valid_at": 12345667,
"expires_at": 12345678
}
},
{
"object": "organization.certificate",
"id": "cert_def",
"name": "My Example Certificate 2",
"active": false,
"created_at": 1234567,
"certificate_details": {
"valid_at": 12345667,
"expires_at": 12345678
}
              }
            ]
}
/organization/certificates/{certificate_id}:
get:
summary: |
Get a certificate that has been uploaded to the organization.
You can get a certificate regardless of whether it is active or not.
operationId: getCertificate
tags:
- Certificates
parameters:
      - name: certificate_id
in: path
description: Unique ID of the certificate to retrieve.
required: true
schema:
type: string
- name: include
in: query
description: A list of additional fields to include in the response. Currently
the only supported value is `content` to fetch the PEM content of
the certificate.
required: false
schema:
type: array
items:
type: string
enum:
- content
responses:
"200":
description: Certificate retrieved successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/Certificate"
x-oaiMeta:
name: Get certificate
group: administration
returns: A single [Certificate](/docs/api-reference/certificates/object) object.
examples:
request:
curl: |
curl "https://api.openai.com/v1/organization/certificates/cert_abc?include[]=content" \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY"
response: >
{
"object": "certificate",
"id": "cert_abc",
"name": "My Example Certificate",
"created_at": 1234567,
"certificate_details": {
"valid_at": 1234567,
"expires_at": 12345678,
"content": "-----BEGIN CERTIFICATE-----MIIDeT...-----END CERTIFICATE-----"
}
}
post:
summary: |
Modify a certificate. Note that only the name can be modified.
operationId: modifyCertificate
tags:
      - Certificates
    parameters:
      - name: certificate_id
        in: path
        description: Unique ID of the certificate to modify.
        required: true
        schema:
          type: string
requestBody:
description: The certificate modification payload.
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/ModifyCertificateRequest"
responses:
"200":
description: Certificate modified successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/Certificate"
x-oaiMeta:
name: Modify certificate
group: administration
returns: The updated [Certificate](/docs/api-reference/certificates/object)
object.
examples:
request:
curl: >
            curl -X POST https://api.openai.com/v1/organization/certificates/cert_abc \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "Renamed Certificate"
}'
response: |
{
"object": "certificate",
"id": "cert_abc",
"name": "Renamed Certificate",
"created_at": 1234567,
"certificate_details": {
"valid_at": 12345667,
"expires_at": 12345678
}
}
delete:
summary: |
Delete a certificate from the organization.
The certificate must be inactive for the organization and all projects.
operationId: deleteCertificate
tags:
      - Certificates
    parameters:
      - name: certificate_id
        in: path
        description: Unique ID of the certificate to delete.
        required: true
        schema:
          type: string
responses:
"200":
description: Certificate deleted successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/DeleteCertificateResponse"
x-oaiMeta:
name: Delete certificate
group: administration
returns: A confirmation object indicating the certificate was deleted.
examples:
request:
curl: >
            curl -X DELETE https://api.openai.com/v1/organization/certificates/cert_abc \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY"
response: |
{
"object": "certificate.deleted",
"id": "cert_abc"
}
/organization/costs:
get:
summary: Get costs details for the organization.
operationId: usage-costs
tags:
- Usage
parameters:
- name: start_time
in: query
description: Start time (Unix seconds) of the query time range, inclusive.
required: true
schema:
type: integer
- name: end_time
in: query
description: End time (Unix seconds) of the query time range, exclusive.
required: false
schema:
type: integer
- name: bucket_width
in: query
description: Width of each time bucket in response. Currently only `1d` is
supported, default to `1d`.
required: false
schema:
type: string
enum:
- 1d
default: 1d
- name: project_ids
in: query
description: Return only costs for these projects.
required: false
schema:
type: array
items:
type: string
- name: group_by
in: query
description: Group the costs by the specified fields. Support fields include
`project_id`, `line_item` and any combination of them.
required: false
schema:
type: array
items:
type: string
enum:
- project_id
- line_item
- name: limit
in: query
description: >
A limit on the number of buckets to be returned. Limit can range
between 1 and 180, and the default is 7.
required: false
schema:
type: integer
default: 7
- name: page
in: query
description: A cursor for use in pagination. Corresponding to the `next_page`
field from the previous response.
schema:
type: string
responses:
"200":
description: Costs data retrieved successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/UsageResponse"
x-oaiMeta:
name: Costs
group: usage-costs
returns: A list of paginated, time bucketed
[Costs](/docs/api-reference/usage/costs_object) objects.
examples:
request:
curl: |
curl "https://api.openai.com/v1/organization/costs?start_time=1730419200&limit=1" \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "page",
"data": [
{
"object": "bucket",
"start_time": 1730419200,
"end_time": 1730505600,
"results": [
{
"object": "organization.costs.result",
"amount": {
"value": 0.06,
"currency": "usd"
},
"line_item": null,
"project_id": null
}
]
}
],
"has_more": false,
"next_page": null
}
/organization/invites:
get:
summary: Returns a list of invites in the organization.
operationId: list-invites
tags:
- Invites
parameters:
- name: limit
in: query
description: >
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
required: false
schema:
type: integer
default: 20
- name: after
in: query
description: >
A cursor for use in pagination. `after` is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
required: false
schema:
type: string
responses:
"200":
description: Invites listed successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/InviteListResponse"
x-oaiMeta:
name: List invites
group: administration
returns: A list of [Invite](/docs/api-reference/invite/object) objects.
examples:
request:
curl: |
            curl "https://api.openai.com/v1/organization/invites?after=invite-abc&limit=20" \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "list",
"data": [
{
"object": "organization.invite",
"id": "invite-abc",
"email": "[email protected]",
"role": "owner",
"status": "accepted",
"invited_at": 1711471533,
"expires_at": 1711471533,
"accepted_at": 1711471533
}
],
"first_id": "invite-abc",
"last_id": "invite-abc",
"has_more": false
}
post:
summary: Create an invite for a user to the organization. The invite must be
accepted by the user before they have access to the organization.
operationId: inviteUser
tags:
- Invites
requestBody:
description: The invite request payload.
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/InviteRequest"
responses:
"200":
description: User invited successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/Invite"
x-oaiMeta:
name: Create invite
group: administration
returns: The created [Invite](/docs/api-reference/invite/object) object.
examples:
request:
curl: |
curl -X POST https://api.openai.com/v1/organization/invites \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json" \
-d '{
"email": "[email protected]",
"role": "reader",
"projects": [
{
"id": "project-xyz",
"role": "member"
},
{
"id": "project-abc",
"role": "owner"
}
]
}'
response: |
{
"object": "organization.invite",
"id": "invite-def",
"email": "[email protected]",
"role": "reader",
"status": "pending",
"invited_at": 1711471533,
"expires_at": 1711471533,
"accepted_at": null,
"projects": [
{
"id": "project-xyz",
"role": "member"
},
{
"id": "project-abc",
"role": "owner"
}
]
}
/organization/invites/{invite_id}:
get:
summary: Retrieves an invite.
operationId: retrieve-invite
tags:
- Invites
parameters:
- in: path
name: invite_id
required: true
schema:
type: string
description: The ID of the invite to retrieve.
responses:
"200":
description: Invite retrieved successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/Invite"
x-oaiMeta:
name: Retrieve invite
group: administration
returns: The [Invite](/docs/api-reference/invite/object) object matching the
specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/organization/invites/invite-abc \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "organization.invite",
"id": "invite-abc",
"email": "[email protected]",
"role": "owner",
"status": "accepted",
"invited_at": 1711471533,
"expires_at": 1711471533,
"accepted_at": 1711471533
}
delete:
summary: Delete an invite. If the invite has already been accepted, it cannot be
deleted.
operationId: delete-invite
tags:
- Invites
parameters:
- in: path
name: invite_id
required: true
schema:
type: string
description: The ID of the invite to delete.
responses:
"200":
description: Invite deleted successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/InviteDeleteResponse"
x-oaiMeta:
name: Delete invite
group: administration
      returns: Confirmation that the invite has been deleted.
examples:
request:
curl: >
curl -X DELETE
https://api.openai.com/v1/organization/invites/invite-abc \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "organization.invite.deleted",
"id": "invite-abc",
"deleted": true
}
/organization/projects:
get:
summary: Returns a list of projects.
operationId: list-projects
tags:
- Projects
parameters:
- name: limit
in: query
description: >
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
required: false
schema:
type: integer
default: 20
- name: after
in: query
description: >
A cursor for use in pagination. `after` is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
required: false
schema:
type: string
- name: include_archived
in: query
schema:
type: boolean
default: false
description: If `true` returns all projects including those that have been
`archived`. Archived projects are not included by default.
responses:
"200":
description: Projects listed successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectListResponse"
x-oaiMeta:
name: List projects
group: administration
returns: A list of [Project](/docs/api-reference/projects/object) objects.
examples:
request:
curl: |
            curl "https://api.openai.com/v1/organization/projects?after=proj_abc&limit=20&include_archived=false" \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "list",
"data": [
{
"id": "proj_abc",
"object": "organization.project",
"name": "Project example",
"created_at": 1711471533,
"archived_at": null,
"status": "active"
}
],
            "first_id": "proj_abc",
            "last_id": "proj_xyz",
"has_more": false
}
post:
summary: Create a new project in the organization. Projects can be created and
archived, but cannot be deleted.
operationId: create-project
tags:
- Projects
requestBody:
description: The project create request payload.
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectCreateRequest"
responses:
"200":
description: Project created successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/Project"
x-oaiMeta:
name: Create project
group: administration
returns: The created [Project](/docs/api-reference/projects/object) object.
examples:
request:
curl: |
curl -X POST https://api.openai.com/v1/organization/projects \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "Project ABC"
}'
response: |
{
"id": "proj_abc",
"object": "organization.project",
"name": "Project ABC",
"created_at": 1711471533,
"archived_at": null,
"status": "active"
}
/organization/projects/{project_id}:
get:
summary: Retrieves a project.
operationId: retrieve-project
tags:
- Projects
parameters:
- name: project_id
in: path
description: The ID of the project.
required: true
schema:
type: string
responses:
"200":
description: Project retrieved successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/Project"
x-oaiMeta:
name: Retrieve project
group: administration
description: Retrieve a project.
returns: The [Project](/docs/api-reference/projects/object) object matching the
specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/organization/projects/proj_abc \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"id": "proj_abc",
"object": "organization.project",
"name": "Project example",
"created_at": 1711471533,
"archived_at": null,
"status": "active"
}
post:
summary: Modifies a project in the organization.
operationId: modify-project
tags:
- Projects
parameters:
- name: project_id
in: path
description: The ID of the project.
required: true
schema:
type: string
requestBody:
description: The project update request payload.
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectUpdateRequest"
responses:
"200":
description: Project updated successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/Project"
"400":
description: Error response when updating the default project.
content:
application/json:
schema:
$ref: "#/components/schemas/ErrorResponse"
x-oaiMeta:
name: Modify project
group: administration
returns: The updated [Project](/docs/api-reference/projects/object) object.
examples:
request:
curl: >
            curl -X POST https://api.openai.com/v1/organization/projects/proj_abc \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "Project DEF"
              }'
        response: |
          {
            "id": "proj_abc",
            "object": "organization.project",
            "name": "Project DEF",
            "created_at": 1711471533,
            "archived_at": null,
            "status": "active"
          }
/organization/projects/{project_id}/api_keys:
get:
summary: Returns a list of API keys in the project.
operationId: list-project-api-keys
tags:
- Projects
parameters:
- name: project_id
in: path
description: The ID of the project.
required: true
schema:
type: string
- name: limit
in: query
description: >
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
required: false
schema:
type: integer
default: 20
- name: after
in: query
description: >
A cursor for use in pagination. `after` is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
required: false
schema:
type: string
responses:
"200":
description: Project API keys listed successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectApiKeyListResponse"
x-oaiMeta:
name: List project API keys
group: administration
returns: A list of [ProjectApiKey](/docs/api-reference/project-api-keys/object)
objects.
examples:
request:
curl: |
            curl "https://api.openai.com/v1/organization/projects/proj_abc/api_keys?after=key_abc&limit=20" \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "list",
"data": [
{
"object": "organization.project.api_key",
"redacted_value": "sk-abc...def",
"name": "My API Key",
"created_at": 1711471533,
"last_used_at": 1711471534,
"id": "key_abc",
"owner": {
"type": "user",
"user": {
"object": "organization.project.user",
"id": "user_abc",
"name": "First Last",
"email": "[email protected]",
"role": "owner",
"added_at": 1711471533
}
}
}
],
"first_id": "key_abc",
"last_id": "key_xyz",
"has_more": false
}
/organization/projects/{project_id}/api_keys/{key_id}:
get:
summary: Retrieves an API key in the project.
operationId: retrieve-project-api-key
tags:
- Projects
parameters:
- name: project_id
in: path
description: The ID of the project.
required: true
schema:
type: string
- name: key_id
in: path
description: The ID of the API key.
required: true
schema:
type: string
responses:
"200":
description: Project API key retrieved successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectApiKey"
x-oaiMeta:
name: Retrieve project API key
group: administration
returns: The [ProjectApiKey](/docs/api-reference/project-api-keys/object) object
matching the specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/organization/projects/proj_abc/api_keys/key_abc \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "organization.project.api_key",
"redacted_value": "sk-abc...def",
"name": "My API Key",
"created_at": 1711471533,
"last_used_at": 1711471534,
"id": "key_abc",
"owner": {
"type": "user",
"user": {
"object": "organization.project.user",
"id": "user_abc",
"name": "First Last",
"email": "[email protected]",
"role": "owner",
"added_at": 1711471533
}
}
}
delete:
summary: Deletes an API key from the project.
operationId: delete-project-api-key
tags:
- Projects
parameters:
- name: project_id
in: path
description: The ID of the project.
required: true
schema:
type: string
- name: key_id
in: path
description: The ID of the API key.
required: true
schema:
type: string
responses:
"200":
description: Project API key deleted successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectApiKeyDeleteResponse"
"400":
description: Error response for various conditions.
content:
application/json:
schema:
$ref: "#/components/schemas/ErrorResponse"
x-oaiMeta:
name: Delete project API key
group: administration
returns: Confirmation of the key's deletion or an error if the key belonged to a
        service account.
examples:
request:
curl: |
curl -X DELETE https://api.openai.com/v1/organization/projects/proj_abc/api_keys/key_abc \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "organization.project.api_key.deleted",
"id": "key_abc",
"deleted": true
}
/organization/projects/{project_id}/archive:
post:
summary: Archives a project in the organization. Archived projects cannot be
used or updated.
operationId: archive-project
tags:
- Projects
parameters:
- name: project_id
in: path
description: The ID of the project.
required: true
schema:
type: string
responses:
"200":
description: Project archived successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/Project"
x-oaiMeta:
name: Archive project
group: administration
returns: The archived [Project](/docs/api-reference/projects/object) object.
examples:
request:
curl: >
            curl -X POST https://api.openai.com/v1/organization/projects/proj_abc/archive \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"id": "proj_abc",
"object": "organization.project",
"name": "Project DEF",
"created_at": 1711471533,
"archived_at": 1711471533,
"status": "archived"
}
/organization/projects/{project_id}/certificates:
get:
summary: List certificates for this project.
operationId: listProjectCertificates
tags:
- Certificates
    parameters:
      - name: project_id
        in: path
        description: The ID of the project.
        required: true
        schema:
          type: string
- name: limit
in: query
description: >
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
required: false
schema:
type: integer
default: 20
- name: after
in: query
description: >
A cursor for use in pagination. `after` is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
required: false
schema:
type: string
- name: order
in: query
description: >
Sort order by the `created_at` timestamp of the objects. `asc` for
ascending order and `desc` for descending order.
schema:
type: string
default: desc
enum:
- asc
- desc
responses:
"200":
description: Certificates listed successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ListCertificatesResponse"
x-oaiMeta:
name: List project certificates
group: administration
returns: A list of [Certificate](/docs/api-reference/certificates/object)
objects.
examples:
request:
curl: |
curl https://api.openai.com/v1/organization/projects/proj_abc/certificates \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY"
response: |
{
"object": "list",
"data": [
{
"object": "organization.project.certificate",
"id": "cert_abc",
"name": "My Example Certificate",
"active": true,
"created_at": 1234567,
"certificate_details": {
"valid_at": 12345667,
"expires_at": 12345678
}
              }
],
"first_id": "cert_abc",
"last_id": "cert_abc",
"has_more": false
}
/organization/projects/{project_id}/certificates/activate:
post:
summary: >
Activate certificates at the project level.
You can atomically and idempotently activate up to 10 certificates at a
time.
operationId: activateProjectCertificates
tags:
      - Certificates
    parameters:
      - name: project_id
        in: path
        description: The ID of the project.
        required: true
        schema:
          type: string
requestBody:
description: The certificate activation payload.
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/ToggleCertificatesRequest"
responses:
"200":
description: Certificates activated successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ListCertificatesResponse"
x-oaiMeta:
name: Activate certificates for project
group: administration
returns: A list of [Certificate](/docs/api-reference/certificates/object)
objects that were activated.
examples:
request:
curl: |
curl https://api.openai.com/v1/organization/projects/proj_abc/certificates/activate \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json" \
-d '{
"data": ["cert_abc", "cert_def"]
}'
response: |
{
"object": "organization.project.certificate.activation",
"data": [
{
"object": "organization.project.certificate",
"id": "cert_abc",
"name": "My Example Certificate",
"active": true,
"created_at": 1234567,
"certificate_details": {
"valid_at": 12345667,
"expires_at": 12345678
}
},
{
"object": "organization.project.certificate",
"id": "cert_def",
"name": "My Example Certificate 2",
"active": true,
"created_at": 1234567,
"certificate_details": {
"valid_at": 12345667,
"expires_at": 12345678
}
              }
            ]
}
/organization/projects/{project_id}/certificates/deactivate:
post:
summary: >
Deactivate certificates at the project level.
You can atomically and idempotently deactivate up to 10 certificates at
a time.
operationId: deactivateProjectCertificates
tags:
      - Certificates
    parameters:
      - name: project_id
        in: path
        description: The ID of the project.
        required: true
        schema:
          type: string
requestBody:
description: The certificate deactivation payload.
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/ToggleCertificatesRequest"
responses:
"200":
description: Certificates deactivated successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ListCertificatesResponse"
x-oaiMeta:
name: Deactivate certificates for project
group: administration
returns: A list of [Certificate](/docs/api-reference/certificates/object)
objects that were deactivated.
examples:
request:
curl: |
curl https://api.openai.com/v1/organization/projects/proj_abc/certificates/deactivate \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json" \
-d '{
"data": ["cert_abc", "cert_def"]
}'
response: |
{
"object": "organization.project.certificate.deactivation",
"data": [
{
"object": "organization.project.certificate",
"id": "cert_abc",
"name": "My Example Certificate",
"active": false,
"created_at": 1234567,
"certificate_details": {
"valid_at": 12345667,
"expires_at": 12345678
}
},
{
"object": "organization.project.certificate",
"id": "cert_def",
"name": "My Example Certificate 2",
"active": false,
"created_at": 1234567,
"certificate_details": {
"valid_at": 12345667,
"expires_at": 12345678
}
              }
            ]
}
/organization/projects/{project_id}/rate_limits:
get:
summary: Returns the rate limits per model for a project.
operationId: list-project-rate-limits
tags:
- Projects
parameters:
- name: project_id
in: path
description: The ID of the project.
required: true
schema:
type: string
- name: limit
in: query
description: |
A limit on the number of objects to be returned. The default is 100.
required: false
schema:
type: integer
default: 100
- name: after
in: query
description: >
A cursor for use in pagination. `after` is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
required: false
schema:
type: string
- name: before
in: query
description: >
A cursor for use in pagination. `before` is an object ID that
defines your place in the list. For instance, if you make a list
request and receive 100 objects, beginning with obj_foo, your
subsequent call can include before=obj_foo in order to fetch the
previous page of the list.
required: false
schema:
type: string
responses:
"200":
description: Project rate limits listed successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectRateLimitListResponse"
x-oaiMeta:
name: List project rate limits
group: administration
returns: A list of
[ProjectRateLimit](/docs/api-reference/project-rate-limits/object)
objects.
examples:
request:
curl: |
            curl "https://api.openai.com/v1/organization/projects/proj_abc/rate_limits?after=rl_xxx&limit=20" \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "list",
"data": [
{
"object": "project.rate_limit",
"id": "rl-ada",
"model": "ada",
"max_requests_per_1_minute": 600,
"max_tokens_per_1_minute": 150000,
"max_images_per_1_minute": 10
}
],
"first_id": "rl-ada",
"last_id": "rl-ada",
"has_more": false
}
error_response: |
{
"code": 404,
"message": "The project {project_id} was not found"
}
/organization/projects/{project_id}/rate_limits/{rate_limit_id}:
post:
summary: Updates a project rate limit.
operationId: update-project-rate-limits
tags:
- Projects
parameters:
- name: project_id
in: path
description: The ID of the project.
required: true
schema:
type: string
- name: rate_limit_id
in: path
description: The ID of the rate limit.
required: true
schema:
type: string
requestBody:
description: The project rate limit update request payload.
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectRateLimitUpdateRequest"
responses:
"200":
description: Project rate limit updated successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectRateLimit"
"400":
description: Error response for various conditions.
content:
application/json:
schema:
$ref: "#/components/schemas/ErrorResponse"
x-oaiMeta:
name: Modify project rate limit
group: administration
returns: The updated
[ProjectRateLimit](/docs/api-reference/project-rate-limits/object)
object.
examples:
request:
curl: |
curl -X POST https://api.openai.com/v1/organization/projects/proj_abc/rate_limits/rl_xxx \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json" \
-d '{
"max_requests_per_1_minute": 500
}'
response: |
  {
    "object": "project.rate_limit",
    "id": "rl-ada",
    "model": "ada",
    "max_requests_per_1_minute": 500,
    "max_tokens_per_1_minute": 150000,
    "max_images_per_1_minute": 10
  }
error_response: |
{
"code": 404,
"message": "The project {project_id} was not found"
}
/organization/projects/{project_id}/service_accounts:
get:
summary: Returns a list of service accounts in the project.
operationId: list-project-service-accounts
tags:
- Projects
parameters:
- name: project_id
in: path
description: The ID of the project.
required: true
schema:
type: string
- name: limit
in: query
description: >
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
required: false
schema:
type: integer
default: 20
- name: after
in: query
description: >
A cursor for use in pagination. `after` is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
required: false
schema:
type: string
responses:
"200":
description: Project service accounts listed successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectServiceAccountListResponse"
"400":
description: Error response when project is archived.
content:
application/json:
schema:
$ref: "#/components/schemas/ErrorResponse"
x-oaiMeta:
name: List project service accounts
group: administration
returns: A list of
[ProjectServiceAccount](/docs/api-reference/project-service-accounts/object)
objects.
examples:
request:
curl: |
curl https://api.openai.com/v1/organization/projects/proj_abc/service_accounts?after=custom_id&limit=20 \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "list",
"data": [
{
"object": "organization.project.service_account",
"id": "svc_acct_abc",
"name": "Service Account",
"role": "owner",
"created_at": 1711471533
}
],
"first_id": "svc_acct_abc",
"last_id": "svc_acct_xyz",
"has_more": false
}
post:
summary: Creates a new service account in the project. This also returns an
unredacted API key for the service account.
operationId: create-project-service-account
tags:
- Projects
parameters:
- name: project_id
in: path
description: The ID of the project.
required: true
schema:
type: string
requestBody:
description: The project service account create request payload.
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectServiceAccountCreateRequest"
responses:
"200":
description: Project service account created successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectServiceAccountCreateResponse"
"400":
description: Error response when project is archived.
content:
application/json:
schema:
$ref: "#/components/schemas/ErrorResponse"
x-oaiMeta:
name: Create project service account
group: administration
returns: The created
[ProjectServiceAccount](/docs/api-reference/project-service-accounts/object)
object.
examples:
request:
curl: |
curl -X POST https://api.openai.com/v1/organization/projects/proj_abc/service_accounts \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json" \
-d '{
"name": "Production App"
}'
response: |
{
"object": "organization.project.service_account",
"id": "svc_acct_abc",
"name": "Production App",
"role": "member",
"created_at": 1711471533,
"api_key": {
"object": "organization.project.service_account.api_key",
"value": "sk-abcdefghijklmnop123",
"name": "Secret Key",
"created_at": 1711471533,
"id": "key_abc"
}
}
/organization/projects/{project_id}/service_accounts/{service_account_id}:
get:
summary: Retrieves a service account in the project.
operationId: retrieve-project-service-account
tags:
- Projects
parameters:
- name: project_id
in: path
description: The ID of the project.
required: true
schema:
type: string
- name: service_account_id
in: path
description: The ID of the service account.
required: true
schema:
type: string
responses:
"200":
description: Project service account retrieved successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectServiceAccount"
x-oaiMeta:
name: Retrieve project service account
group: administration
returns: The
[ProjectServiceAccount](/docs/api-reference/project-service-accounts/object)
object matching the specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/organization/projects/proj_abc/service_accounts/svc_acct_abc \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "organization.project.service_account",
"id": "svc_acct_abc",
"name": "Service Account",
"role": "owner",
"created_at": 1711471533
}
delete:
summary: Deletes a service account from the project.
operationId: delete-project-service-account
tags:
- Projects
parameters:
- name: project_id
in: path
description: The ID of the project.
required: true
schema:
type: string
- name: service_account_id
in: path
description: The ID of the service account.
required: true
schema:
type: string
responses:
"200":
description: Project service account deleted successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectServiceAccountDeleteResponse"
x-oaiMeta:
name: Delete project service account
group: administration
returns: Confirmation that the service account was deleted, or an error if
  the project is archived (archived projects have no service accounts).
examples:
request:
curl: |
curl -X DELETE https://api.openai.com/v1/organization/projects/proj_abc/service_accounts/svc_acct_abc \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "organization.project.service_account.deleted",
"id": "svc_acct_abc",
"deleted": true
}
/organization/projects/{project_id}/users:
get:
summary: Returns a list of users in the project.
operationId: list-project-users
tags:
- Projects
parameters:
- name: project_id
in: path
description: The ID of the project.
required: true
schema:
type: string
- name: limit
in: query
description: >
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
required: false
schema:
type: integer
default: 20
- name: after
in: query
description: >
A cursor for use in pagination. `after` is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
required: false
schema:
type: string
responses:
"200":
description: Project users listed successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectUserListResponse"
"400":
description: Error response when project is archived.
content:
application/json:
schema:
$ref: "#/components/schemas/ErrorResponse"
x-oaiMeta:
name: List project users
group: administration
returns: A list of [ProjectUser](/docs/api-reference/project-users/object)
objects.
examples:
request:
curl: |
curl https://api.openai.com/v1/organization/projects/proj_abc/users?after=user_abc&limit=20 \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "list",
"data": [
{
"object": "organization.project.user",
"id": "user_abc",
"name": "First Last",
"email": "[email protected]",
"role": "owner",
"added_at": 1711471533
}
],
"first_id": "user-abc",
"last_id": "user-xyz",
"has_more": false
}
post:
summary: Adds a user to the project. Users must already be members of the
organization to be added to a project.
operationId: create-project-user
parameters:
- name: project_id
in: path
description: The ID of the project.
required: true
schema:
type: string
tags:
- Projects
requestBody:
description: The project user create request payload.
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectUserCreateRequest"
responses:
"200":
description: User added to project successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectUser"
"400":
description: Error response for various conditions.
content:
application/json:
schema:
$ref: "#/components/schemas/ErrorResponse"
x-oaiMeta:
name: Create project user
group: administration
returns: The created [ProjectUser](/docs/api-reference/project-users/object)
object.
examples:
request:
curl: >
curl -X POST
https://api.openai.com/v1/organization/projects/proj_abc/users \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json" \
-d '{
"user_id": "user_abc",
"role": "member"
}'
response: |
{
"object": "organization.project.user",
"id": "user_abc",
"email": "[email protected]",
"role": "owner",
"added_at": 1711471533
}
/organization/projects/{project_id}/users/{user_id}:
get:
summary: Retrieves a user in the project.
operationId: retrieve-project-user
tags:
- Projects
parameters:
- name: project_id
in: path
description: The ID of the project.
required: true
schema:
type: string
- name: user_id
in: path
description: The ID of the user.
required: true
schema:
type: string
responses:
"200":
description: Project user retrieved successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectUser"
x-oaiMeta:
name: Retrieve project user
group: administration
returns: The [ProjectUser](/docs/api-reference/project-users/object) object
matching the specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/organization/projects/proj_abc/users/user_abc \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "organization.project.user",
"id": "user_abc",
"name": "First Last",
"email": "[email protected]",
"role": "owner",
"added_at": 1711471533
}
post:
summary: Modifies a user's role in the project.
operationId: modify-project-user
tags:
- Projects
parameters:
- name: project_id
in: path
description: The ID of the project.
required: true
schema:
type: string
- name: user_id
in: path
description: The ID of the user.
required: true
schema:
type: string
requestBody:
description: The project user update request payload.
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectUserUpdateRequest"
responses:
"200":
description: Project user's role updated successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectUser"
"400":
description: Error response for various conditions.
content:
application/json:
schema:
$ref: "#/components/schemas/ErrorResponse"
x-oaiMeta:
name: Modify project user
group: administration
returns: The updated [ProjectUser](/docs/api-reference/project-users/object)
object.
examples:
request:
curl: |
curl -X POST https://api.openai.com/v1/organization/projects/proj_abc/users/user_abc \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json" \
-d '{
"role": "owner"
}'
response: |
{
"object": "organization.project.user",
"id": "user_abc",
"name": "First Last",
"email": "[email protected]",
"role": "owner",
"added_at": 1711471533
}
delete:
summary: Deletes a user from the project.
operationId: delete-project-user
tags:
- Projects
parameters:
- name: project_id
in: path
description: The ID of the project.
required: true
schema:
type: string
- name: user_id
in: path
description: The ID of the user.
required: true
schema:
type: string
responses:
"200":
description: Project user deleted successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/ProjectUserDeleteResponse"
"400":
description: Error response for various conditions.
content:
application/json:
schema:
$ref: "#/components/schemas/ErrorResponse"
x-oaiMeta:
name: Delete project user
group: administration
returns: Confirmation that the user was removed from the project, or an
  error if the project is archived (archived projects have no users).
examples:
request:
curl: |
curl -X DELETE https://api.openai.com/v1/organization/projects/proj_abc/users/user_abc \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "organization.project.user.deleted",
"id": "user_abc",
"deleted": true
}
/organization/usage/audio_speeches:
get:
summary: Get audio speeches usage details for the organization.
operationId: usage-audio-speeches
tags:
- Usage
parameters:
- name: start_time
in: query
description: Start time (Unix seconds) of the query time range, inclusive.
required: true
schema:
type: integer
- name: end_time
in: query
description: End time (Unix seconds) of the query time range, exclusive.
required: false
schema:
type: integer
- name: bucket_width
in: query
description: Width of each time bucket in the response. Currently `1m`, `1h`,
  and `1d` are supported, defaulting to `1d`.
required: false
schema:
type: string
enum:
- 1m
- 1h
- 1d
default: 1d
- name: project_ids
in: query
description: Return only usage for these projects.
required: false
schema:
type: array
items:
type: string
- name: user_ids
in: query
description: Return only usage for these users.
required: false
schema:
type: array
items:
type: string
- name: api_key_ids
in: query
description: Return only usage for these API keys.
required: false
schema:
type: array
items:
type: string
- name: models
in: query
description: Return only usage for these models.
required: false
schema:
type: array
items:
type: string
- name: group_by
in: query
description: Group the usage data by the specified fields. Supported fields
  include `project_id`, `user_id`, `api_key_id`, `model`, or any
  combination of them.
required: false
schema:
type: array
items:
type: string
enum:
- project_id
- user_id
- api_key_id
- model
- name: limit
in: query
description: |
Specifies the number of buckets to return.
- `bucket_width=1d`: default: 7, max: 31
- `bucket_width=1h`: default: 24, max: 168
- `bucket_width=1m`: default: 60, max: 1440
required: false
schema:
type: integer
- name: page
in: query
description: A cursor for use in pagination. Corresponds to the `next_page`
  field from the previous response.
schema:
type: string
responses:
"200":
description: Usage data retrieved successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/UsageResponse"
x-oaiMeta:
name: Audio speeches
group: usage-audio-speeches
returns: A list of paginated, time-bucketed [Audio speeches
  usage](/docs/api-reference/usage/audio_speeches_object) objects.
examples:
request:
curl: |
curl "https://api.openai.com/v1/organization/usage/audio_speeches?start_time=1730419200&limit=1" \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: >
{
"object": "page",
"data": [
{
"object": "bucket",
"start_time": 1730419200,
"end_time": 1730505600,
"results": [
{
"object": "organization.usage.audio_speeches.result",
"characters": 45,
"num_model_requests": 1,
"project_id": null,
"user_id": null,
"api_key_id": null,
"model": null
}
]
}
],
"has_more": false,
"next_page": null
}
/organization/usage/audio_transcriptions:
get:
summary: Get audio transcriptions usage details for the organization.
operationId: usage-audio-transcriptions
tags:
- Usage
parameters:
- name: start_time
in: query
description: Start time (Unix seconds) of the query time range, inclusive.
required: true
schema:
type: integer
- name: end_time
in: query
description: End time (Unix seconds) of the query time range, exclusive.
required: false
schema:
type: integer
- name: bucket_width
in: query
description: Width of each time bucket in the response. Currently `1m`, `1h`,
  and `1d` are supported, defaulting to `1d`.
required: false
schema:
type: string
enum:
- 1m
- 1h
- 1d
default: 1d
- name: project_ids
in: query
description: Return only usage for these projects.
required: false
schema:
type: array
items:
type: string
- name: user_ids
in: query
description: Return only usage for these users.
required: false
schema:
type: array
items:
type: string
- name: api_key_ids
in: query
description: Return only usage for these API keys.
required: false
schema:
type: array
items:
type: string
- name: models
in: query
description: Return only usage for these models.
required: false
schema:
type: array
items:
type: string
- name: group_by
in: query
description: Group the usage data by the specified fields. Supported fields
  include `project_id`, `user_id`, `api_key_id`, `model`, or any
  combination of them.
required: false
schema:
type: array
items:
type: string
enum:
- project_id
- user_id
- api_key_id
- model
- name: limit
in: query
description: |
Specifies the number of buckets to return.
- `bucket_width=1d`: default: 7, max: 31
- `bucket_width=1h`: default: 24, max: 168
- `bucket_width=1m`: default: 60, max: 1440
required: false
schema:
type: integer
- name: page
in: query
description: A cursor for use in pagination. Corresponds to the `next_page`
  field from the previous response.
schema:
type: string
responses:
"200":
description: Usage data retrieved successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/UsageResponse"
x-oaiMeta:
name: Audio transcriptions
group: usage-audio-transcriptions
returns: A list of paginated, time-bucketed [Audio transcriptions
  usage](/docs/api-reference/usage/audio_transcriptions_object) objects.
examples:
request:
curl: |
curl "https://api.openai.com/v1/organization/usage/audio_transcriptions?start_time=1730419200&limit=1" \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: >
{
"object": "page",
"data": [
{
"object": "bucket",
"start_time": 1730419200,
"end_time": 1730505600,
"results": [
{
"object": "organization.usage.audio_transcriptions.result",
"seconds": 20,
"num_model_requests": 1,
"project_id": null,
"user_id": null,
"api_key_id": null,
"model": null
}
]
}
],
"has_more": false,
"next_page": null
}
/organization/usage/code_interpreter_sessions:
get:
summary: Get code interpreter sessions usage details for the organization.
operationId: usage-code-interpreter-sessions
tags:
- Usage
parameters:
- name: start_time
in: query
description: Start time (Unix seconds) of the query time range, inclusive.
required: true
schema:
type: integer
- name: end_time
in: query
description: End time (Unix seconds) of the query time range, exclusive.
required: false
schema:
type: integer
- name: bucket_width
in: query
description: Width of each time bucket in the response. Currently `1m`, `1h`,
  and `1d` are supported, defaulting to `1d`.
required: false
schema:
type: string
enum:
- 1m
- 1h
- 1d
default: 1d
- name: project_ids
in: query
description: Return only usage for these projects.
required: false
schema:
type: array
items:
type: string
- name: group_by
in: query
description: Group the usage data by the specified fields. Supported fields
  include `project_id`.
required: false
schema:
type: array
items:
type: string
enum:
- project_id
- name: limit
in: query
description: |
Specifies the number of buckets to return.
- `bucket_width=1d`: default: 7, max: 31
- `bucket_width=1h`: default: 24, max: 168
- `bucket_width=1m`: default: 60, max: 1440
required: false
schema:
type: integer
- name: page
in: query
description: A cursor for use in pagination. Corresponds to the `next_page`
  field from the previous response.
schema:
type: string
responses:
"200":
description: Usage data retrieved successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/UsageResponse"
x-oaiMeta:
name: Code interpreter sessions
group: usage-code-interpreter-sessions
returns: A list of paginated, time-bucketed [Code interpreter sessions
  usage](/docs/api-reference/usage/code_interpreter_sessions_object)
  objects.
examples:
request:
curl: |
curl "https://api.openai.com/v1/organization/usage/code_interpreter_sessions?start_time=1730419200&limit=1" \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: >
{
"object": "page",
"data": [
{
"object": "bucket",
"start_time": 1730419200,
"end_time": 1730505600,
"results": [
{
"object": "organization.usage.code_interpreter_sessions.result",
"num_sessions": 1,
"project_id": null
}
]
}
],
"has_more": false,
"next_page": null
}
/organization/usage/completions:
get:
summary: Get completions usage details for the organization.
operationId: usage-completions
tags:
- Usage
parameters:
- name: start_time
in: query
description: Start time (Unix seconds) of the query time range, inclusive.
required: true
schema:
type: integer
- name: end_time
in: query
description: End time (Unix seconds) of the query time range, exclusive.
required: false
schema:
type: integer
- name: bucket_width
in: query
description: Width of each time bucket in the response. Currently `1m`, `1h`,
  and `1d` are supported, defaulting to `1d`.
required: false
schema:
type: string
enum:
- 1m
- 1h
- 1d
default: 1d
- name: project_ids
in: query
description: Return only usage for these projects.
required: false
schema:
type: array
items:
type: string
- name: user_ids
in: query
description: Return only usage for these users.
required: false
schema:
type: array
items:
type: string
- name: api_key_ids
in: query
description: Return only usage for these API keys.
required: false
schema:
type: array
items:
type: string
- name: models
in: query
description: Return only usage for these models.
required: false
schema:
type: array
items:
type: string
- name: batch
in: query
description: >
If `true`, return batch jobs only. If `false`, return non-batch jobs
only. By default, return both.
required: false
schema:
type: boolean
- name: group_by
in: query
description: Group the usage data by the specified fields. Supported fields
  include `project_id`, `user_id`, `api_key_id`, `model`, `batch`, or
  any combination of them.
required: false
schema:
type: array
items:
type: string
enum:
- project_id
- user_id
- api_key_id
- model
- batch
- name: limit
in: query
description: |
Specifies the number of buckets to return.
- `bucket_width=1d`: default: 7, max: 31
- `bucket_width=1h`: default: 24, max: 168
- `bucket_width=1m`: default: 60, max: 1440
required: false
schema:
type: integer
- name: page
in: query
description: A cursor for use in pagination. Corresponds to the `next_page`
  field from the previous response.
schema:
type: string
responses:
"200":
description: Usage data retrieved successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/UsageResponse"
x-oaiMeta:
name: Completions
group: usage-completions
returns: A list of paginated, time-bucketed [Completions
  usage](/docs/api-reference/usage/completions_object) objects.
examples:
request:
curl: |
curl "https://api.openai.com/v1/organization/usage/completions?start_time=1730419200&limit=1" \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: >
{
"object": "page",
"data": [
{
"object": "bucket",
"start_time": 1730419200,
"end_time": 1730505600,
"results": [
{
"object": "organization.usage.completions.result",
"input_tokens": 1000,
"output_tokens": 500,
"input_cached_tokens": 800,
"input_audio_tokens": 0,
"output_audio_tokens": 0,
"num_model_requests": 5,
"project_id": null,
"user_id": null,
"api_key_id": null,
"model": null,
"batch": null
}
]
}
],
"has_more": true,
"next_page": "page_AAAAAGdGxdEiJdKOAAAAAGcqsYA="
}
/organization/usage/embeddings:
get:
summary: Get embeddings usage details for the organization.
operationId: usage-embeddings
tags:
- Usage
parameters:
- name: start_time
in: query
description: Start time (Unix seconds) of the query time range, inclusive.
required: true
schema:
type: integer
- name: end_time
in: query
description: End time (Unix seconds) of the query time range, exclusive.
required: false
schema:
type: integer
- name: bucket_width
in: query
description: Width of each time bucket in the response. Currently `1m`, `1h`,
  and `1d` are supported, defaulting to `1d`.
required: false
schema:
type: string
enum:
- 1m
- 1h
- 1d
default: 1d
- name: project_ids
in: query
description: Return only usage for these projects.
required: false
schema:
type: array
items:
type: string
- name: user_ids
in: query
description: Return only usage for these users.
required: false
schema:
type: array
items:
type: string
- name: api_key_ids
in: query
description: Return only usage for these API keys.
required: false
schema:
type: array
items:
type: string
- name: models
in: query
description: Return only usage for these models.
required: false
schema:
type: array
items:
type: string
- name: group_by
in: query
description: Group the usage data by the specified fields. Supported fields
  include `project_id`, `user_id`, `api_key_id`, `model`, or any
  combination of them.
required: false
schema:
type: array
items:
type: string
enum:
- project_id
- user_id
- api_key_id
- model
- name: limit
in: query
description: |
Specifies the number of buckets to return.
- `bucket_width=1d`: default: 7, max: 31
- `bucket_width=1h`: default: 24, max: 168
- `bucket_width=1m`: default: 60, max: 1440
required: false
schema:
type: integer
- name: page
in: query
description: A cursor for use in pagination. Corresponds to the `next_page`
  field from the previous response.
schema:
type: string
responses:
"200":
description: Usage data retrieved successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/UsageResponse"
x-oaiMeta:
name: Embeddings
group: usage-embeddings
returns: A list of paginated, time-bucketed [Embeddings
  usage](/docs/api-reference/usage/embeddings_object) objects.
examples:
request:
curl: |
curl "https://api.openai.com/v1/organization/usage/embeddings?start_time=1730419200&limit=1" \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: >
{
"object": "page",
"data": [
{
"object": "bucket",
"start_time": 1730419200,
"end_time": 1730505600,
"results": [
{
"object": "organization.usage.embeddings.result",
"input_tokens": 16,
"num_model_requests": 2,
"project_id": null,
"user_id": null,
"api_key_id": null,
"model": null
}
]
}
],
"has_more": false,
"next_page": null
}
/organization/usage/images:
get:
summary: Get images usage details for the organization.
operationId: usage-images
tags:
- Usage
parameters:
- name: start_time
in: query
description: Start time (Unix seconds) of the query time range, inclusive.
required: true
schema:
type: integer
- name: end_time
in: query
description: End time (Unix seconds) of the query time range, exclusive.
required: false
schema:
type: integer
- name: bucket_width
in: query
description: Width of each time bucket in the response. Currently `1m`, `1h`,
  and `1d` are supported, defaulting to `1d`.
required: false
schema:
type: string
enum:
- 1m
- 1h
- 1d
default: 1d
- name: sources
in: query
description: Return only usage for these sources. Possible values are
  `image.generation`, `image.edit`, `image.variation`, or any
  combination of them.
required: false
schema:
type: array
items:
type: string
enum:
- image.generation
- image.edit
- image.variation
- name: sizes
in: query
description: Return only usage for these image sizes. Possible values are
  `256x256`, `512x512`, `1024x1024`, `1792x1792`, `1024x1792`, or any
  combination of them.
required: false
schema:
type: array
items:
type: string
enum:
- 256x256
- 512x512
- 1024x1024
- 1792x1792
- 1024x1792
- name: project_ids
in: query
description: Return only usage for these projects.
required: false
schema:
type: array
items:
type: string
- name: user_ids
in: query
description: Return only usage for these users.
required: false
schema:
type: array
items:
type: string
- name: api_key_ids
in: query
description: Return only usage for these API keys.
required: false
schema:
type: array
items:
type: string
- name: models
in: query
description: Return only usage for these models.
required: false
schema:
type: array
items:
type: string
- name: group_by
in: query
description: Group the usage data by the specified fields. Supported fields
  include `project_id`, `user_id`, `api_key_id`, `model`, `size`,
  `source`, or any combination of them.
required: false
schema:
type: array
items:
type: string
enum:
- project_id
- user_id
- api_key_id
- model
- size
- source
- name: limit
in: query
description: |
Specifies the number of buckets to return.
- `bucket_width=1d`: default: 7, max: 31
- `bucket_width=1h`: default: 24, max: 168
- `bucket_width=1m`: default: 60, max: 1440
required: false
schema:
type: integer
- name: page
in: query
description: A cursor for use in pagination. Corresponds to the `next_page`
  field from the previous response.
schema:
type: string
responses:
"200":
description: Usage data retrieved successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/UsageResponse"
x-oaiMeta:
name: Images
group: usage-images
returns: A list of paginated, time-bucketed [Images
  usage](/docs/api-reference/usage/images_object) objects.
examples:
request:
curl: |
curl "https://api.openai.com/v1/organization/usage/images?start_time=1730419200&limit=1" \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "page",
"data": [
{
"object": "bucket",
"start_time": 1730419200,
"end_time": 1730505600,
"results": [
{
"object": "organization.usage.images.result",
"images": 2,
"num_model_requests": 2,
"size": null,
"source": null,
"project_id": null,
"user_id": null,
"api_key_id": null,
"model": null
}
]
}
],
"has_more": false,
"next_page": null
}
/organization/usage/moderations:
get:
summary: Get moderations usage details for the organization.
operationId: usage-moderations
tags:
- Usage
parameters:
- name: start_time
in: query
description: Start time (Unix seconds) of the query time range, inclusive.
required: true
schema:
type: integer
- name: end_time
in: query
description: End time (Unix seconds) of the query time range, exclusive.
required: false
schema:
type: integer
- name: bucket_width
in: query
description: Width of each time bucket in the response. Currently `1m`, `1h`,
  and `1d` are supported, defaulting to `1d`.
required: false
schema:
type: string
enum:
- 1m
- 1h
- 1d
default: 1d
- name: project_ids
in: query
description: Return only usage for these projects.
required: false
schema:
type: array
items:
type: string
- name: user_ids
in: query
description: Return only usage for these users.
required: false
schema:
type: array
items:
type: string
- name: api_key_ids
in: query
description: Return only usage for these API keys.
required: false
schema:
type: array
items:
type: string
- name: models
in: query
description: Return only usage for these models.
required: false
schema:
type: array
items:
type: string
- name: group_by
in: query
description: Group the usage data by the specified fields. Supported fields
  include `project_id`, `user_id`, `api_key_id`, `model`, or any
  combination of them.
required: false
schema:
type: array
items:
type: string
enum:
- project_id
- user_id
- api_key_id
- model
- name: limit
in: query
description: |
Specifies the number of buckets to return.
- `bucket_width=1d`: default: 7, max: 31
- `bucket_width=1h`: default: 24, max: 168
- `bucket_width=1m`: default: 60, max: 1440
required: false
schema:
type: integer
- name: page
in: query
description: A cursor for use in pagination. Corresponds to the `next_page`
  field from the previous response.
schema:
type: string
responses:
"200":
description: Usage data retrieved successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/UsageResponse"
x-oaiMeta:
name: Moderations
group: usage-moderations
        returns: A list of paginated, time-bucketed [Moderations
          usage](/docs/api-reference/usage/moderations_object) objects.
examples:
request:
curl: |
curl "https://api.openai.com/v1/organization/usage/moderations?start_time=1730419200&limit=1" \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: >
{
"object": "page",
"data": [
{
"object": "bucket",
"start_time": 1730419200,
"end_time": 1730505600,
"results": [
{
"object": "organization.usage.moderations.result",
"input_tokens": 16,
"num_model_requests": 2,
"project_id": null,
"user_id": null,
"api_key_id": null,
"model": null
}
]
}
],
"has_more": false,
"next_page": null
}
/organization/usage/vector_stores:
get:
summary: Get vector stores usage details for the organization.
operationId: usage-vector-stores
tags:
- Usage
parameters:
- name: start_time
in: query
description: Start time (Unix seconds) of the query time range, inclusive.
required: true
schema:
type: integer
- name: end_time
in: query
description: End time (Unix seconds) of the query time range, exclusive.
required: false
schema:
type: integer
- name: bucket_width
in: query
          description: Width of each time bucket in the response. Currently `1m`,
            `1h` and `1d` are supported; defaults to `1d`.
required: false
schema:
type: string
enum:
- 1m
- 1h
- 1d
default: 1d
- name: project_ids
in: query
description: Return only usage for these projects.
required: false
schema:
type: array
items:
type: string
- name: group_by
in: query
          description: Group the usage data by the specified fields. Supported
            fields include `project_id`.
required: false
schema:
type: array
items:
type: string
enum:
- project_id
- name: limit
in: query
description: |
Specifies the number of buckets to return.
- `bucket_width=1d`: default: 7, max: 31
- `bucket_width=1h`: default: 24, max: 168
- `bucket_width=1m`: default: 60, max: 1440
required: false
schema:
type: integer
- name: page
in: query
          description: A cursor for use in pagination. Corresponds to the `next_page`
            field from the previous response.
schema:
type: string
responses:
"200":
description: Usage data retrieved successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/UsageResponse"
x-oaiMeta:
name: Vector stores
group: usage-vector-stores
        returns: A list of paginated, time-bucketed [Vector stores
          usage](/docs/api-reference/usage/vector_stores_object) objects.
examples:
request:
curl: |
curl "https://api.openai.com/v1/organization/usage/vector_stores?start_time=1730419200&limit=1" \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: >
{
"object": "page",
"data": [
{
"object": "bucket",
"start_time": 1730419200,
"end_time": 1730505600,
"results": [
{
"object": "organization.usage.vector_stores.result",
"usage_bytes": 1024,
"project_id": null
}
]
}
],
"has_more": false,
"next_page": null
}
/organization/users:
get:
summary: Lists all of the users in the organization.
operationId: list-users
tags:
- Users
parameters:
- name: limit
in: query
description: >
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
required: false
schema:
type: integer
default: 20
- name: after
in: query
description: >
A cursor for use in pagination. `after` is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
required: false
schema:
type: string
- name: emails
in: query
description: Filter by the email address of users.
required: false
schema:
type: array
items:
type: string
responses:
"200":
description: Users listed successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/UserListResponse"
x-oaiMeta:
name: List users
group: administration
returns: A list of [User](/docs/api-reference/users/object) objects.
examples:
request:
curl: |
              curl "https://api.openai.com/v1/organization/users?after=user_abc&limit=20" \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "list",
"data": [
{
"object": "organization.user",
"id": "user_abc",
"name": "First Last",
"email": "[email protected]",
"role": "owner",
"added_at": 1711471533
}
],
                "first_id": "user_abc",
                "last_id": "user_xyz",
"has_more": false
}
/organization/users/{user_id}:
get:
summary: Retrieves a user by their identifier.
operationId: retrieve-user
tags:
- Users
parameters:
- name: user_id
in: path
description: The ID of the user.
required: true
schema:
type: string
responses:
"200":
description: User retrieved successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/User"
x-oaiMeta:
name: Retrieve user
group: administration
returns: The [User](/docs/api-reference/users/object) object matching the
specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/organization/users/user_abc \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "organization.user",
"id": "user_abc",
"name": "First Last",
"email": "[email protected]",
"role": "owner",
"added_at": 1711471533
}
post:
summary: Modifies a user's role in the organization.
operationId: modify-user
tags:
- Users
parameters:
- name: user_id
in: path
description: The ID of the user.
required: true
schema:
type: string
requestBody:
        description: The new user role. Must be either `owner` or `member`.
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/UserRoleUpdateRequest"
responses:
"200":
description: User role updated successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/User"
x-oaiMeta:
name: Modify user
group: administration
returns: The updated [User](/docs/api-reference/users/object) object.
examples:
request:
curl: >
curl -X POST https://api.openai.com/v1/organization/users/user_abc
\
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json" \
-d '{
"role": "owner"
}'
response: |
{
"object": "organization.user",
"id": "user_abc",
"name": "First Last",
"email": "[email protected]",
"role": "owner",
"added_at": 1711471533
}
delete:
summary: Deletes a user from the organization.
operationId: delete-user
tags:
- Users
parameters:
- name: user_id
in: path
description: The ID of the user.
required: true
schema:
type: string
responses:
"200":
description: User deleted successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/UserDeleteResponse"
x-oaiMeta:
name: Delete user
group: administration
        returns: Confirmation of the deleted user.
examples:
request:
curl: >
curl -X DELETE
https://api.openai.com/v1/organization/users/user_abc \
-H "Authorization: Bearer $OPENAI_ADMIN_KEY" \
-H "Content-Type: application/json"
response: |
{
"object": "organization.user.deleted",
"id": "user_abc",
"deleted": true
}
/realtime/sessions:
post:
      summary: >
        Create an ephemeral API token for use in client-side applications with
        the Realtime API. The session can be configured with the same
        parameters as the `session.update` client event. The response is a
        session object, plus a `client_secret` key containing a usable
        ephemeral API token that can be used to authenticate browser clients
        for the Realtime API.
operationId: create-realtime-session
tags:
- Realtime
requestBody:
description: Create an ephemeral API key with the given session configuration.
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/RealtimeSessionCreateRequest"
responses:
"200":
description: Session created successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/RealtimeSessionCreateResponse"
x-oaiMeta:
name: Create session
group: realtime
        returns: The created Realtime session object, plus an ephemeral key.
examples:
request:
curl: |
curl -X POST https://api.openai.com/v1/realtime/sessions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o-realtime-preview",
"modalities": ["audio", "text"],
"instructions": "You are a friendly assistant."
}'
response: |
{
"id": "sess_001",
"object": "realtime.session",
"model": "gpt-4o-realtime-preview",
"modalities": ["audio", "text"],
"instructions": "You are a friendly assistant.",
"voice": "alloy",
"input_audio_format": "pcm16",
"output_audio_format": "pcm16",
"input_audio_transcription": {
"model": "whisper-1"
},
"turn_detection": null,
"tools": [],
"tool_choice": "none",
"temperature": 0.7,
"max_response_output_tokens": 200,
"client_secret": {
"value": "ek_abc123",
"expires_at": 1234567890
}
}
/realtime/transcription_sessions:
post:
      summary: >
        Create an ephemeral API token for use in client-side applications with
        the Realtime API, specifically for realtime transcriptions. The
        session can be configured with the same parameters as the
        `transcription_session.update` client event. The response is a session
        object, plus a `client_secret` key containing a usable ephemeral API
        token that can be used to authenticate browser clients for the
        Realtime API.
operationId: create-realtime-transcription-session
tags:
- Realtime
requestBody:
description: Create an ephemeral API key with the given session configuration.
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/RealtimeTranscriptionSessionCreateRequest"
responses:
"200":
description: Session created successfully.
content:
application/json:
schema:
$ref: "#/components/schemas/RealtimeTranscriptionSessionCreateResponse"
x-oaiMeta:
name: Create transcription session
group: realtime
        returns: The created [Realtime transcription session
          object](/docs/api-reference/realtime-sessions/transcription_session_object),
          plus an ephemeral key.
examples:
request:
curl: >
curl -X POST
https://api.openai.com/v1/realtime/transcription_sessions \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{}'
response: |
{
"id": "sess_BBwZc7cFV3XizEyKGDCGL",
"object": "realtime.transcription_session",
"modalities": ["audio", "text"],
"turn_detection": {
"type": "server_vad",
"threshold": 0.5,
"prefix_padding_ms": 300,
"silence_duration_ms": 200
},
"input_audio_format": "pcm16",
"input_audio_transcription": {
"model": "gpt-4o-transcribe",
"language": null,
"prompt": ""
},
"client_secret": null
}
/responses:
post:
operationId: createResponse
tags:
- Responses
summary: >
Creates a model response. Provide [text](/docs/guides/text) or
[image](/docs/guides/images) inputs to generate
[text](/docs/guides/text)
or [JSON](/docs/guides/structured-outputs) outputs. Have the model call
your own [custom code](/docs/guides/function-calling) or use built-in
[tools](/docs/guides/tools) like [web
search](/docs/guides/tools-web-search)
or [file search](/docs/guides/tools-file-search) to use your own data
as input for the model's response.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CreateResponse"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/Response"
text/event-stream:
schema:
$ref: "#/components/schemas/ResponseStreamEvent"
x-oaiMeta:
name: Create a model response
group: responses
returns: |
Returns a [Response](/docs/api-reference/responses/object) object.
path: create
examples:
- title: Text input
request:
curl: >
curl https://api.openai.com/v1/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4.1",
"input": "Tell me a three sentence bedtime story about a unicorn."
}'
javascript: >
import OpenAI from "openai";
const openai = new OpenAI();
const response = await openai.responses.create({
model: "gpt-4.1",
input: "Tell me a three sentence bedtime story about a unicorn."
});
console.log(response);
python: >
from openai import OpenAI
client = OpenAI()
response = client.responses.create(
model="gpt-4.1",
input="Tell me a three sentence bedtime story about a unicorn."
)
print(response)
csharp: >
using System;
using OpenAI.Responses;
OpenAIResponseClient client = new(
model: "gpt-4.1",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
OpenAIResponse response = client.CreateResponse("Tell me a three
sentence bedtime story about a unicorn.");
Console.WriteLine(response.GetOutputText());
response: >
{
"id": "resp_67ccd2bed1ec8190b14f964abc0542670bb6a6b452d3795b",
"object": "response",
"created_at": 1741476542,
"status": "completed",
"error": null,
"incomplete_details": null,
"instructions": null,
"max_output_tokens": null,
"model": "gpt-4.1-2025-04-14",
"output": [
{
"type": "message",
"id": "msg_67ccd2bf17f0819081ff3bb2cf6508e60bb6a6b452d3795b",
"status": "completed",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "In a peaceful grove beneath a silver moon, a unicorn named Lumina discovered a hidden pool that reflected the stars. As she dipped her horn into the water, the pool began to shimmer, revealing a pathway to a magical realm of endless night skies. Filled with wonder, Lumina whispered a wish for all who dream to find their own hidden magic, and as she glanced back, her hoofprints sparkled like stardust.",
"annotations": []
}
]
}
],
"parallel_tool_calls": true,
"previous_response_id": null,
"reasoning": {
"effort": null,
"summary": null
},
"store": true,
"temperature": 1.0,
"text": {
"format": {
"type": "text"
}
},
"tool_choice": "auto",
"tools": [],
"top_p": 1.0,
"truncation": "disabled",
"usage": {
"input_tokens": 36,
"input_tokens_details": {
"cached_tokens": 0
},
"output_tokens": 87,
"output_tokens_details": {
"reasoning_tokens": 0
},
"total_tokens": 123
},
"user": null,
"metadata": {}
}
- title: Image input
request:
curl: >
curl https://api.openai.com/v1/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4.1",
"input": [
{
"role": "user",
"content": [
{"type": "input_text", "text": "what is in this image?"},
{
"type": "input_image",
"image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
}
]
}
]
}'
javascript: >
import OpenAI from "openai";
const openai = new OpenAI();
const response = await openai.responses.create({
model: "gpt-4.1",
input: [
{
role: "user",
content: [
{ type: "input_text", text: "what is in this image?" },
{
type: "input_image",
image_url:
"https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
},
],
},
],
});
console.log(response);
python: >
from openai import OpenAI
client = OpenAI()
response = client.responses.create(
model="gpt-4.1",
input=[
{
"role": "user",
"content": [
{ "type": "input_text", "text": "what is in this image?" },
{
"type": "input_image",
"image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"
}
]
}
]
)
print(response)
csharp: >
using System;
using System.Collections.Generic;
using OpenAI.Responses;
OpenAIResponseClient client = new(
model: "gpt-4.1",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
List<ResponseItem> inputItems =
[
ResponseItem.CreateUserMessageItem(
[
ResponseContentPart.CreateInputTextPart("What is in this image?"),
ResponseContentPart.CreateInputImagePart(new Uri("https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg"))
]
)
];
OpenAIResponse response = client.CreateResponse(inputItems);
Console.WriteLine(response.GetOutputText());
response: >
{
"id": "resp_67ccd3a9da748190baa7f1570fe91ac604becb25c45c1d41",
"object": "response",
"created_at": 1741476777,
"status": "completed",
"error": null,
"incomplete_details": null,
"instructions": null,
"max_output_tokens": null,
"model": "gpt-4.1-2025-04-14",
"output": [
{
"type": "message",
"id": "msg_67ccd3acc8d48190a77525dc6de64b4104becb25c45c1d41",
"status": "completed",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "The image depicts a scenic landscape with a wooden boardwalk or pathway leading through lush, green grass under a blue sky with some clouds. The setting suggests a peaceful natural area, possibly a park or nature reserve. There are trees and shrubs in the background.",
"annotations": []
}
]
}
],
"parallel_tool_calls": true,
"previous_response_id": null,
"reasoning": {
"effort": null,
"summary": null
},
"store": true,
"temperature": 1.0,
"text": {
"format": {
"type": "text"
}
},
"tool_choice": "auto",
"tools": [],
"top_p": 1.0,
"truncation": "disabled",
"usage": {
"input_tokens": 328,
"input_tokens_details": {
"cached_tokens": 0
},
"output_tokens": 52,
"output_tokens_details": {
"reasoning_tokens": 0
},
"total_tokens": 380
},
"user": null,
"metadata": {}
}
- title: Web search
request:
curl: |
curl https://api.openai.com/v1/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4.1",
"tools": [{ "type": "web_search_preview" }],
"input": "What was a positive news story from today?"
}'
javascript: |
import OpenAI from "openai";
const openai = new OpenAI();
const response = await openai.responses.create({
model: "gpt-4.1",
tools: [{ type: "web_search_preview" }],
input: "What was a positive news story from today?",
});
console.log(response);
python: |
from openai import OpenAI
client = OpenAI()
response = client.responses.create(
model="gpt-4.1",
tools=[{ "type": "web_search_preview" }],
input="What was a positive news story from today?",
)
print(response)
csharp: >
using System;
using OpenAI.Responses;
OpenAIResponseClient client = new(
model: "gpt-4.1",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
string userInputText = "What was a positive news story from
today?";
ResponseCreationOptions options = new()
{
Tools =
{
ResponseTool.CreateWebSearchTool()
},
};
OpenAIResponse response = client.CreateResponse(userInputText,
options);
Console.WriteLine(response.GetOutputText());
response: >
{
"id": "resp_67ccf18ef5fc8190b16dbee19bc54e5f087bb177ab789d5c",
"object": "response",
"created_at": 1741484430,
"status": "completed",
"error": null,
"incomplete_details": null,
"instructions": null,
"max_output_tokens": null,
"model": "gpt-4.1-2025-04-14",
"output": [
{
"type": "web_search_call",
"id": "ws_67ccf18f64008190a39b619f4c8455ef087bb177ab789d5c",
"status": "completed"
},
{
"type": "message",
"id": "msg_67ccf190ca3881909d433c50b1f6357e087bb177ab789d5c",
"status": "completed",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "As of today, March 9, 2025, one notable positive news story...",
"annotations": [
{
"type": "url_citation",
"start_index": 442,
"end_index": 557,
"url": "https://.../?utm_source=chatgpt.com",
"title": "..."
},
{
"type": "url_citation",
"start_index": 962,
"end_index": 1077,
"url": "https://.../?utm_source=chatgpt.com",
"title": "..."
},
{
"type": "url_citation",
"start_index": 1336,
"end_index": 1451,
"url": "https://.../?utm_source=chatgpt.com",
"title": "..."
}
]
}
]
}
],
"parallel_tool_calls": true,
"previous_response_id": null,
"reasoning": {
"effort": null,
"summary": null
},
"store": true,
"temperature": 1.0,
"text": {
"format": {
"type": "text"
}
},
"tool_choice": "auto",
"tools": [
{
"type": "web_search_preview",
"domains": [],
"search_context_size": "medium",
"user_location": {
"type": "approximate",
"city": null,
"country": "US",
"region": null,
"timezone": null
}
}
],
"top_p": 1.0,
"truncation": "disabled",
"usage": {
"input_tokens": 328,
"input_tokens_details": {
"cached_tokens": 0
},
"output_tokens": 356,
"output_tokens_details": {
"reasoning_tokens": 0
},
"total_tokens": 684
},
"user": null,
"metadata": {}
}
- title: File search
request:
curl: >
curl https://api.openai.com/v1/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4.1",
"tools": [{
"type": "file_search",
"vector_store_ids": ["vs_1234567890"],
"max_num_results": 20
}],
"input": "What are the attributes of an ancient brown dragon?"
}'
javascript: >
import OpenAI from "openai";
const openai = new OpenAI();
const response = await openai.responses.create({
model: "gpt-4.1",
tools: [{
type: "file_search",
vector_store_ids: ["vs_1234567890"],
max_num_results: 20
}],
input: "What are the attributes of an ancient brown dragon?",
});
console.log(response);
python: |
from openai import OpenAI
client = OpenAI()
response = client.responses.create(
model="gpt-4.1",
tools=[{
"type": "file_search",
"vector_store_ids": ["vs_1234567890"],
"max_num_results": 20
}],
input="What are the attributes of an ancient brown dragon?",
)
print(response)
csharp: >
using System;
using OpenAI.Responses;
OpenAIResponseClient client = new(
model: "gpt-4.1",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
string userInputText = "What are the attributes of an ancient
brown dragon?";
ResponseCreationOptions options = new()
{
Tools =
{
ResponseTool.CreateFileSearchTool(
vectorStoreIds: ["vs_1234567890"],
maxResultCount: 20
)
},
};
OpenAIResponse response = client.CreateResponse(userInputText,
options);
Console.WriteLine(response.GetOutputText());
response: >
{
"id": "resp_67ccf4c55fc48190b71bd0463ad3306d09504fb6872380d7",
"object": "response",
"created_at": 1741485253,
"status": "completed",
"error": null,
"incomplete_details": null,
"instructions": null,
"max_output_tokens": null,
"model": "gpt-4.1-2025-04-14",
"output": [
{
"type": "file_search_call",
"id": "fs_67ccf4c63cd08190887ef6464ba5681609504fb6872380d7",
"status": "completed",
"queries": [
"attributes of an ancient brown dragon"
],
"results": null
},
{
"type": "message",
"id": "msg_67ccf4c93e5c81909d595b369351a9d309504fb6872380d7",
"status": "completed",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "The attributes of an ancient brown dragon include...",
"annotations": [
{
"type": "file_citation",
"index": 320,
"file_id": "file-4wDz5b167pAf72nx1h9eiN",
"filename": "dragons.pdf"
},
{
"type": "file_citation",
"index": 576,
"file_id": "file-4wDz5b167pAf72nx1h9eiN",
"filename": "dragons.pdf"
},
{
"type": "file_citation",
"index": 815,
"file_id": "file-4wDz5b167pAf72nx1h9eiN",
"filename": "dragons.pdf"
},
{
"type": "file_citation",
"index": 815,
"file_id": "file-4wDz5b167pAf72nx1h9eiN",
"filename": "dragons.pdf"
},
{
"type": "file_citation",
"index": 1030,
"file_id": "file-4wDz5b167pAf72nx1h9eiN",
"filename": "dragons.pdf"
},
{
"type": "file_citation",
"index": 1030,
"file_id": "file-4wDz5b167pAf72nx1h9eiN",
"filename": "dragons.pdf"
},
{
"type": "file_citation",
"index": 1156,
"file_id": "file-4wDz5b167pAf72nx1h9eiN",
"filename": "dragons.pdf"
},
{
"type": "file_citation",
"index": 1225,
"file_id": "file-4wDz5b167pAf72nx1h9eiN",
"filename": "dragons.pdf"
}
]
}
]
}
],
"parallel_tool_calls": true,
"previous_response_id": null,
"reasoning": {
"effort": null,
"summary": null
},
"store": true,
"temperature": 1.0,
"text": {
"format": {
"type": "text"
}
},
"tool_choice": "auto",
"tools": [
{
"type": "file_search",
"filters": null,
"max_num_results": 20,
"ranking_options": {
"ranker": "auto",
"score_threshold": 0.0
},
"vector_store_ids": [
"vs_1234567890"
]
}
],
"top_p": 1.0,
"truncation": "disabled",
"usage": {
"input_tokens": 18307,
"input_tokens_details": {
"cached_tokens": 0
},
"output_tokens": 348,
"output_tokens_details": {
"reasoning_tokens": 0
},
"total_tokens": 18655
},
"user": null,
"metadata": {}
}
- title: Streaming
request:
curl: |
curl https://api.openai.com/v1/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4.1",
"instructions": "You are a helpful assistant.",
"input": "Hello!",
"stream": true
}'
python: |
from openai import OpenAI
client = OpenAI()
response = client.responses.create(
model="gpt-4.1",
instructions="You are a helpful assistant.",
input="Hello!",
stream=True
)
for event in response:
print(event)
javascript: |
import OpenAI from "openai";
const openai = new OpenAI();
const response = await openai.responses.create({
model: "gpt-4.1",
instructions: "You are a helpful assistant.",
input: "Hello!",
stream: true,
});
for await (const event of response) {
console.log(event);
}
csharp: >
using System;
using System.ClientModel;
using System.Threading.Tasks;
using OpenAI.Responses;
OpenAIResponseClient client = new(
model: "gpt-4.1",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
string userInputText = "Hello!";
ResponseCreationOptions options = new()
{
Instructions = "You are a helpful assistant.",
};
AsyncCollectionResult<StreamingResponseUpdate> responseUpdates =
client.CreateResponseStreamingAsync(userInputText, options);
await foreach (StreamingResponseUpdate responseUpdate in
responseUpdates)
{
if (responseUpdate is StreamingResponseOutputTextDeltaUpdate outputTextDeltaUpdate)
{
Console.Write(outputTextDeltaUpdate.Delta);
}
}
response: |
event: response.created
data: {"type":"response.created","response":{"id":"resp_67c9fdcecf488190bdd9a0409de3a1ec07b8b0ad4e5eb654","object":"response","created_at":1741290958,"status":"in_progress","error":null,"incomplete_details":null,"instructions":"You are a helpful assistant.","max_output_tokens":null,"model":"gpt-4.1-2025-04-14","output":[],"parallel_tool_calls":true,"previous_response_id":null,"reasoning":{"effort":null,"summary":null},"store":true,"temperature":1.0,"text":{"format":{"type":"text"}},"tool_choice":"auto","tools":[],"top_p":1.0,"truncation":"disabled","usage":null,"user":null,"metadata":{}}}
event: response.in_progress
data: {"type":"response.in_progress","response":{"id":"resp_67c9fdcecf488190bdd9a0409de3a1ec07b8b0ad4e5eb654","object":"response","created_at":1741290958,"status":"in_progress","error":null,"incomplete_details":null,"instructions":"You are a helpful assistant.","max_output_tokens":null,"model":"gpt-4.1-2025-04-14","output":[],"parallel_tool_calls":true,"previous_response_id":null,"reasoning":{"effort":null,"summary":null},"store":true,"temperature":1.0,"text":{"format":{"type":"text"}},"tool_choice":"auto","tools":[],"top_p":1.0,"truncation":"disabled","usage":null,"user":null,"metadata":{}}}
event: response.output_item.added
data: {"type":"response.output_item.added","output_index":0,"item":{"id":"msg_67c9fdcf37fc8190ba82116e33fb28c507b8b0ad4e5eb654","type":"message","status":"in_progress","role":"assistant","content":[]}}
event: response.content_part.added
data: {"type":"response.content_part.added","item_id":"msg_67c9fdcf37fc8190ba82116e33fb28c507b8b0ad4e5eb654","output_index":0,"content_index":0,"part":{"type":"output_text","text":"","annotations":[]}}
event: response.output_text.delta
data: {"type":"response.output_text.delta","item_id":"msg_67c9fdcf37fc8190ba82116e33fb28c507b8b0ad4e5eb654","output_index":0,"content_index":0,"delta":"Hi"}
...
event: response.output_text.done
data: {"type":"response.output_text.done","item_id":"msg_67c9fdcf37fc8190ba82116e33fb28c507b8b0ad4e5eb654","output_index":0,"content_index":0,"text":"Hi there! How can I assist you today?"}
event: response.content_part.done
data: {"type":"response.content_part.done","item_id":"msg_67c9fdcf37fc8190ba82116e33fb28c507b8b0ad4e5eb654","output_index":0,"content_index":0,"part":{"type":"output_text","text":"Hi there! How can I assist you today?","annotations":[]}}
event: response.output_item.done
data: {"type":"response.output_item.done","output_index":0,"item":{"id":"msg_67c9fdcf37fc8190ba82116e33fb28c507b8b0ad4e5eb654","type":"message","status":"completed","role":"assistant","content":[{"type":"output_text","text":"Hi there! How can I assist you today?","annotations":[]}]}}
event: response.completed
data: {"type":"response.completed","response":{"id":"resp_67c9fdcecf488190bdd9a0409de3a1ec07b8b0ad4e5eb654","object":"response","created_at":1741290958,"status":"completed","error":null,"incomplete_details":null,"instructions":"You are a helpful assistant.","max_output_tokens":null,"model":"gpt-4.1-2025-04-14","output":[{"id":"msg_67c9fdcf37fc8190ba82116e33fb28c507b8b0ad4e5eb654","type":"message","status":"completed","role":"assistant","content":[{"type":"output_text","text":"Hi there! How can I assist you today?","annotations":[]}]}],"parallel_tool_calls":true,"previous_response_id":null,"reasoning":{"effort":null,"summary":null},"store":true,"temperature":1.0,"text":{"format":{"type":"text"}},"tool_choice":"auto","tools":[],"top_p":1.0,"truncation":"disabled","usage":{"input_tokens":37,"output_tokens":11,"output_tokens_details":{"reasoning_tokens":0},"total_tokens":48},"user":null,"metadata":{}}}
- title: Functions
request:
curl: >
curl https://api.openai.com/v1/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "gpt-4.1",
"input": "What is the weather like in Boston today?",
"tools": [
{
"type": "function",
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location", "unit"]
}
}
],
"tool_choice": "auto"
}'
python: >
from openai import OpenAI
client = OpenAI()
tools = [
{
"type": "function",
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
},
"required": ["location", "unit"],
}
}
]
response = client.responses.create(
model="gpt-4.1",
tools=tools,
input="What is the weather like in Boston today?",
tool_choice="auto"
)
print(response)
javascript: >
import OpenAI from "openai";
const openai = new OpenAI();
const tools = [
{
type: "function",
name: "get_current_weather",
description: "Get the current weather in a given location",
parameters: {
type: "object",
properties: {
location: {
type: "string",
description: "The city and state, e.g. San Francisco, CA",
},
unit: { type: "string", enum: ["celsius", "fahrenheit"] },
},
required: ["location", "unit"],
},
},
];
const response = await openai.responses.create({
model: "gpt-4.1",
tools: tools,
input: "What is the weather like in Boston today?",
tool_choice: "auto",
});
console.log(response);
csharp: >
using System;
using OpenAI.Responses;
OpenAIResponseClient client = new(
model: "gpt-4.1",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
ResponseTool getCurrentWeatherFunctionTool =
ResponseTool.CreateFunctionTool(
functionName: "get_current_weather",
functionDescription: "Get the current weather in a given location",
functionParameters: BinaryData.FromString("""
{
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
},
"required": ["location", "unit"]
}
"""
)
);
string userInputText = "What is the weather like in Boston
today?";
ResponseCreationOptions options = new()
{
Tools =
{
getCurrentWeatherFunctionTool
},
ToolChoice = ResponseToolChoice.CreateAutoChoice(),
};
OpenAIResponse response = client.CreateResponse(userInputText,
options);
response: >
{
"id": "resp_67ca09c5efe0819096d0511c92b8c890096610f474011cc0",
"object": "response",
"created_at": 1741294021,
"status": "completed",
"error": null,
"incomplete_details": null,
"instructions": null,
"max_output_tokens": null,
"model": "gpt-4.1-2025-04-14",
"output": [
{
"type": "function_call",
"id": "fc_67ca09c6bedc8190a7abfec07b1a1332096610f474011cc0",
"call_id": "call_unLAR8MvFNptuiZK6K6HCy5k",
"name": "get_current_weather",
"arguments": "{\"location\":\"Boston, MA\",\"unit\":\"celsius\"}",
"status": "completed"
}
],
"parallel_tool_calls": true,
"previous_response_id": null,
"reasoning": {
"effort": null,
"summary": null
},
"store": true,
"temperature": 1.0,
"text": {
"format": {
"type": "text"
}
},
"tool_choice": "auto",
"tools": [
{
"type": "function",
"description": "Get the current weather in a given location",
"name": "get_current_weather",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": [
"celsius",
"fahrenheit"
]
}
},
"required": [
"location",
"unit"
]
},
"strict": true
}
],
"top_p": 1.0,
"truncation": "disabled",
"usage": {
"input_tokens": 291,
"output_tokens": 23,
"output_tokens_details": {
"reasoning_tokens": 0
},
"total_tokens": 314
},
"user": null,
"metadata": {}
}
- title: Reasoning
request:
curl: |
curl https://api.openai.com/v1/responses \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"model": "o3-mini",
"input": "How much wood would a woodchuck chuck?",
"reasoning": {
"effort": "high"
}
}'
javascript: |
import OpenAI from "openai";
const openai = new OpenAI();
const response = await openai.responses.create({
model: "o3-mini",
input: "How much wood would a woodchuck chuck?",
reasoning: {
effort: "high"
}
});
console.log(response);
python: |
from openai import OpenAI
client = OpenAI()
response = client.responses.create(
model="o3-mini",
input="How much wood would a woodchuck chuck?",
reasoning={
"effort": "high"
}
)
print(response)
csharp: >
using System;
using OpenAI.Responses;
OpenAIResponseClient client = new(
model: "o3-mini",
apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")
);
string userInputText = "How much wood would a woodchuck chuck?";
ResponseCreationOptions options = new()
{
ReasoningOptions = new()
{
ReasoningEffortLevel = ResponseReasoningEffortLevel.High,
},
};
OpenAIResponse response = client.CreateResponse(userInputText, options);
Console.WriteLine(response.GetOutputText());
response: >
{
"id": "resp_67ccd7eca01881908ff0b5146584e408072912b2993db808",
"object": "response",
"created_at": 1741477868,
"status": "completed",
"error": null,
"incomplete_details": null,
"instructions": null,
"max_output_tokens": null,
"model": "o1-2024-12-17",
"output": [
{
"type": "message",
"id": "msg_67ccd7f7b5848190a6f3e95d809f6b44072912b2993db808",
"status": "completed",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "The classic tongue twister...",
"annotations": []
}
]
}
],
"parallel_tool_calls": true,
"previous_response_id": null,
"reasoning": {
"effort": "high",
"summary": null
},
"store": true,
"temperature": 1.0,
"text": {
"format": {
"type": "text"
}
},
"tool_choice": "auto",
"tools": [],
"top_p": 1.0,
"truncation": "disabled",
"usage": {
"input_tokens": 81,
"input_tokens_details": {
"cached_tokens": 0
},
"output_tokens": 1035,
"output_tokens_details": {
"reasoning_tokens": 832
},
"total_tokens": 1116
},
"user": null,
"metadata": {}
}
/responses/{response_id}:
get:
operationId: getResponse
tags:
- Responses
summary: |
Retrieves a model response with the given ID.
parameters:
- in: path
name: response_id
required: true
schema:
type: string
example: resp_677efb5139a88190b512bc3fef8e535d
description: The ID of the response to retrieve.
- in: query
name: include
schema:
type: array
items:
$ref: "#/components/schemas/Includable"
description: |
Additional fields to include in the response. See the `include`
parameter for Response creation above for more information.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/Response"
x-oaiMeta:
name: Get a model response
group: responses
returns: >
The [Response](/docs/api-reference/responses/object) object matching the
specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/responses/resp_123 \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY"
javascript: |
import OpenAI from "openai";
const client = new OpenAI();
const response = await client.responses.retrieve("resp_123");
console.log(response);
python: |
from openai import OpenAI
client = OpenAI()
response = client.responses.retrieve("resp_123")
print(response)
response: >
{
"id": "resp_67cb71b351908190a308f3859487620d06981a8637e6bc44",
"object": "response",
"created_at": 1741386163,
"status": "completed",
"error": null,
"incomplete_details": null,
"instructions": null,
"max_output_tokens": null,
"model": "gpt-4o-2024-08-06",
"output": [
{
"type": "message",
"id": "msg_67cb71b3c2b0819084d481baaaf148f206981a8637e6bc44",
"status": "completed",
"role": "assistant",
"content": [
{
"type": "output_text",
"text": "Silent circuits hum, \nThoughts emerge in data streams— \nDigital dawn breaks.",
"annotations": []
}
]
}
],
"parallel_tool_calls": true,
"previous_response_id": null,
"reasoning": {
"effort": null,
"summary": null
},
"store": true,
"temperature": 1.0,
"text": {
"format": {
"type": "text"
}
},
"tool_choice": "auto",
"tools": [],
"top_p": 1.0,
"truncation": "disabled",
"usage": {
"input_tokens": 32,
"input_tokens_details": {
"cached_tokens": 0
},
"output_tokens": 18,
"output_tokens_details": {
"reasoning_tokens": 0
},
"total_tokens": 50
},
"user": null,
"metadata": {}
}
delete:
operationId: deleteResponse
tags:
- Responses
summary: |
Deletes a model response with the given ID.
parameters:
- in: path
name: response_id
required: true
schema:
type: string
example: resp_677efb5139a88190b512bc3fef8e535d
description: The ID of the response to delete.
responses:
"200":
description: OK
"404":
description: Not Found
content:
application/json:
schema:
$ref: "#/components/schemas/Error"
x-oaiMeta:
name: Delete a model response
group: responses
returns: |
A success message.
examples:
request:
curl: |
curl -X DELETE https://api.openai.com/v1/responses/resp_123 \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY"
javascript: |
import OpenAI from "openai";
const client = new OpenAI();
const response = await client.responses.delete("resp_123");
console.log(response);
python: |
from openai import OpenAI
client = OpenAI()
response = client.responses.delete("resp_123")
print(response)
response: |
{
"id": "resp_6786a1bec27481909a17d673315b29f6",
"object": "response",
"deleted": true
}
/responses/{response_id}/input_items:
get:
operationId: listInputItems
tags:
- Responses
summary: Returns a list of input items for a given response.
parameters:
- in: path
name: response_id
required: true
schema:
type: string
description: The ID of the response to retrieve input items for.
- name: limit
in: query
description: >
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
required: false
schema:
type: integer
default: 20
- in: query
name: order
schema:
type: string
enum:
- asc
- desc
description: |
The order to return the input items in. Default is `asc`.
- `asc`: Return the input items in ascending order.
- `desc`: Return the input items in descending order.
- in: query
name: after
schema:
type: string
description: |
An item ID to list items after, used in pagination.
- in: query
name: before
schema:
type: string
description: |
An item ID to list items before, used in pagination.
- in: query
name: include
schema:
type: array
items:
$ref: "#/components/schemas/Includable"
description: |
Additional fields to include in the response. See the `include`
parameter for Response creation above for more information.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ResponseItemList"
x-oaiMeta:
name: List input items
group: responses
returns: A list of input item objects.
examples:
request:
curl: |
curl https://api.openai.com/v1/responses/resp_abc123/input_items \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY"
javascript: >
import OpenAI from "openai";
const client = new OpenAI();
const response = await client.responses.inputItems.list("resp_123");
console.log(response.data);
python: |
from openai import OpenAI
client = OpenAI()
response = client.responses.input_items.list("resp_123")
print(response.data)
response: >
{
"object": "list",
"data": [
{
"id": "msg_abc123",
"type": "message",
"role": "user",
"content": [
{
"type": "input_text",
"text": "Tell me a three sentence bedtime story about a unicorn."
}
]
}
],
"first_id": "msg_abc123",
"last_id": "msg_abc123",
"has_more": false
}
/threads:
post:
operationId: createThread
tags:
- Assistants
summary: Create a thread.
requestBody:
content:
application/json:
schema:
$ref: "#/components/schemas/CreateThreadRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ThreadObject"
x-oaiMeta:
name: Create thread
group: threads
beta: true
returns: A [thread](/docs/api-reference/threads) object.
examples:
- title: Empty
request:
curl: |
curl https://api.openai.com/v1/threads \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "OpenAI-Beta: assistants=v2" \
-d ''
python: |
from openai import OpenAI
client = OpenAI()
empty_thread = client.beta.threads.create()
print(empty_thread)
node.js: |-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const emptyThread = await openai.beta.threads.create();
console.log(emptyThread);
}
main();
response: |
{
"id": "thread_abc123",
"object": "thread",
"created_at": 1699012949,
"metadata": {},
"tool_resources": {}
}
- title: Messages
request:
curl: |
curl https://api.openai.com/v1/threads \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"messages": [{
"role": "user",
"content": "Hello, what is AI?"
}, {
"role": "user",
"content": "How does AI work? Explain it in simple terms."
}]
}'
python: |
from openai import OpenAI
client = OpenAI()
message_thread = client.beta.threads.create(
messages=[
{
"role": "user",
"content": "Hello, what is AI?"
},
{
"role": "user",
"content": "How does AI work? Explain it in simple terms."
},
]
)
print(message_thread)
node.js: >-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const messageThread = await openai.beta.threads.create({
messages: [
{
role: "user",
content: "Hello, what is AI?"
},
{
role: "user",
content: "How does AI work? Explain it in simple terms.",
},
],
});
console.log(messageThread);
}
main();
response: |
{
"id": "thread_abc123",
"object": "thread",
"created_at": 1699014083,
"metadata": {},
"tool_resources": {}
}
/threads/runs:
post:
operationId: createThreadAndRun
tags:
- Assistants
summary: Create a thread and run it in one request.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CreateThreadAndRunRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/RunObject"
x-oaiMeta:
name: Create thread and run
group: threads
beta: true
returns: A [run](/docs/api-reference/runs/object) object.
examples:
- title: Default
request:
curl: >
curl https://api.openai.com/v1/threads/runs \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"assistant_id": "asst_abc123",
"thread": {
"messages": [
{"role": "user", "content": "Explain deep learning to a 5 year old."}
]
}
}'
python: >
from openai import OpenAI
client = OpenAI()
run = client.beta.threads.create_and_run(
assistant_id="asst_abc123",
thread={
"messages": [
{"role": "user", "content": "Explain deep learning to a 5 year old."}
]
}
)
print(run)
node.js: >
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const run = await openai.beta.threads.createAndRun({
assistant_id: "asst_abc123",
thread: {
messages: [
{ role: "user", content: "Explain deep learning to a 5 year old." },
],
},
});
console.log(run);
}
main();
response: |
{
"id": "run_abc123",
"object": "thread.run",
"created_at": 1699076792,
"assistant_id": "asst_abc123",
"thread_id": "thread_abc123",
"status": "queued",
"started_at": null,
"expires_at": 1699077392,
"cancelled_at": null,
"failed_at": null,
"completed_at": null,
"required_action": null,
"last_error": null,
"model": "gpt-4o",
"instructions": "You are a helpful assistant.",
"tools": [],
"tool_resources": {},
"metadata": {},
"temperature": 1.0,
"top_p": 1.0,
"max_completion_tokens": null,
"max_prompt_tokens": null,
"truncation_strategy": {
"type": "auto",
"last_messages": null
},
"incomplete_details": null,
"usage": null,
"response_format": "auto",
"tool_choice": "auto",
"parallel_tool_calls": true
}
- title: Streaming
request:
curl: |
curl https://api.openai.com/v1/threads/runs \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"assistant_id": "asst_123",
"thread": {
"messages": [
{"role": "user", "content": "Hello"}
]
},
"stream": true
}'
python: |
from openai import OpenAI
client = OpenAI()
stream = client.beta.threads.create_and_run(
assistant_id="asst_123",
thread={
"messages": [
{"role": "user", "content": "Hello"}
]
},
stream=True
)
for event in stream:
print(event)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const stream = await openai.beta.threads.createAndRun({
assistant_id: "asst_123",
thread: {
messages: [
{ role: "user", content: "Hello" },
],
},
stream: true
});
for await (const event of stream) {
console.log(event);
}
}
main();
response: |
event: thread.created
data: {"id":"thread_123","object":"thread","created_at":1710348075,"metadata":{}}
event: thread.run.created
data: {"id":"run_123","object":"thread.run","created_at":1710348075,"assistant_id":"asst_123","thread_id":"thread_123","status":"queued","started_at":null,"expires_at":1710348675,"cancelled_at":null,"failed_at":null,"completed_at":null,"required_action":null,"last_error":null,"model":"gpt-4o","instructions":null,"tools":[],"tool_resources":{},"metadata":{},"temperature":1.0,"top_p":1.0,"max_completion_tokens":null,"max_prompt_tokens":null,"truncation_strategy":{"type":"auto","last_messages":null},"incomplete_details":null,"usage":null,"response_format":"auto","tool_choice":"auto","parallel_tool_calls":true}
event: thread.run.queued
data: {"id":"run_123","object":"thread.run","created_at":1710348075,"assistant_id":"asst_123","thread_id":"thread_123","status":"queued","started_at":null,"expires_at":1710348675,"cancelled_at":null,"failed_at":null,"completed_at":null,"required_action":null,"last_error":null,"model":"gpt-4o","instructions":null,"tools":[],"tool_resources":{},"metadata":{},"temperature":1.0,"top_p":1.0,"max_completion_tokens":null,"max_prompt_tokens":null,"truncation_strategy":{"type":"auto","last_messages":null},"incomplete_details":null,"usage":null,"response_format":"auto","tool_choice":"auto","parallel_tool_calls":true}
event: thread.run.in_progress
data: {"id":"run_123","object":"thread.run","created_at":1710348075,"assistant_id":"asst_123","thread_id":"thread_123","status":"in_progress","started_at":null,"expires_at":1710348675,"cancelled_at":null,"failed_at":null,"completed_at":null,"required_action":null,"last_error":null,"model":"gpt-4o","instructions":null,"tools":[],"tool_resources":{},"metadata":{},"temperature":1.0,"top_p":1.0,"max_completion_tokens":null,"max_prompt_tokens":null,"truncation_strategy":{"type":"auto","last_messages":null},"incomplete_details":null,"usage":null,"response_format":"auto","tool_choice":"auto","parallel_tool_calls":true}
event: thread.run.step.created
data: {"id":"step_001","object":"thread.run.step","created_at":1710348076,"run_id":"run_123","assistant_id":"asst_123","thread_id":"thread_123","type":"message_creation","status":"in_progress","cancelled_at":null,"completed_at":null,"expires_at":1710348675,"failed_at":null,"last_error":null,"step_details":{"type":"message_creation","message_creation":{"message_id":"msg_001"}},"usage":null}
event: thread.run.step.in_progress
data: {"id":"step_001","object":"thread.run.step","created_at":1710348076,"run_id":"run_123","assistant_id":"asst_123","thread_id":"thread_123","type":"message_creation","status":"in_progress","cancelled_at":null,"completed_at":null,"expires_at":1710348675,"failed_at":null,"last_error":null,"step_details":{"type":"message_creation","message_creation":{"message_id":"msg_001"}},"usage":null}
event: thread.message.created
data: {"id":"msg_001","object":"thread.message","created_at":1710348076,"assistant_id":"asst_123","thread_id":"thread_123","run_id":"run_123","status":"in_progress","incomplete_details":null,"incomplete_at":null,"completed_at":null,"role":"assistant","content":[], "metadata":{}}
event: thread.message.in_progress
data: {"id":"msg_001","object":"thread.message","created_at":1710348076,"assistant_id":"asst_123","thread_id":"thread_123","run_id":"run_123","status":"in_progress","incomplete_details":null,"incomplete_at":null,"completed_at":null,"role":"assistant","content":[], "metadata":{}}
event: thread.message.delta
data: {"id":"msg_001","object":"thread.message.delta","delta":{"content":[{"index":0,"type":"text","text":{"value":"Hello","annotations":[]}}]}}
...
event: thread.message.delta
data: {"id":"msg_001","object":"thread.message.delta","delta":{"content":[{"index":0,"type":"text","text":{"value":" today"}}]}}
event: thread.message.delta
data: {"id":"msg_001","object":"thread.message.delta","delta":{"content":[{"index":0,"type":"text","text":{"value":"?"}}]}}
event: thread.message.completed
data: {"id":"msg_001","object":"thread.message","created_at":1710348076,"assistant_id":"asst_123","thread_id":"thread_123","run_id":"run_123","status":"completed","incomplete_details":null,"incomplete_at":null,"completed_at":1710348077,"role":"assistant","content":[{"type":"text","text":{"value":"Hello! How can I assist you today?","annotations":[]}}], "metadata":{}}
event: thread.run.step.completed
data: {"id":"step_001","object":"thread.run.step","created_at":1710348076,"run_id":"run_123","assistant_id":"asst_123","thread_id":"thread_123","type":"message_creation","status":"completed","cancelled_at":null,"completed_at":1710348077,"expires_at":1710348675,"failed_at":null,"last_error":null,"step_details":{"type":"message_creation","message_creation":{"message_id":"msg_001"}},"usage":{"prompt_tokens":20,"completion_tokens":11,"total_tokens":31}}
event: thread.run.completed
{"id":"run_123","object":"thread.run","created_at":1710348076,"assistant_id":"asst_123","thread_id":"thread_123","status":"completed","started_at":1713226836,"expires_at":null,"cancelled_at":null,"failed_at":null,"completed_at":1713226837,"required_action":null,"last_error":null,"model":"gpt-4o","instructions":null,"tools":[],"metadata":{},"temperature":1.0,"top_p":1.0,"max_completion_tokens":null,"max_prompt_tokens":null,"truncation_strategy":{"type":"auto","last_messages":null},"incomplete_details":null,"usage":{"prompt_tokens":345,"completion_tokens":11,"total_tokens":356},"response_format":"auto","tool_choice":"auto","parallel_tool_calls":true}
event: done
data: [DONE]
- title: Streaming with Functions
request:
curl: >
curl https://api.openai.com/v1/threads/runs \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"assistant_id": "asst_abc123",
"thread": {
"messages": [
{"role": "user", "content": "What is the weather like in San Francisco?"}
]
},
"tools": [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location"]
}
}
}
],
"stream": true
}'
python: >
from openai import OpenAI
client = OpenAI()
tools = [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
},
"required": ["location"],
},
}
}
]
stream = client.beta.threads.create_and_run(
thread={
"messages": [
{"role": "user", "content": "What is the weather like in San Francisco?"}
]
},
assistant_id="asst_abc123",
tools=tools,
stream=True
)
for event in stream:
print(event)
node.js: >
import OpenAI from "openai";
const openai = new OpenAI();
const tools = [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
},
"required": ["location"],
},
}
}
];
async function main() {
const stream = await openai.beta.threads.createAndRun({
assistant_id: "asst_123",
thread: {
messages: [
{ role: "user", content: "What is the weather like in San Francisco?" },
],
},
tools: tools,
stream: true
});
for await (const event of stream) {
console.log(event);
}
}
main();
response: |
event: thread.created
data: {"id":"thread_123","object":"thread","created_at":1710351818,"metadata":{}}
event: thread.run.created
data: {"id":"run_123","object":"thread.run","created_at":1710351818,"assistant_id":"asst_123","thread_id":"thread_123","status":"queued","started_at":null,"expires_at":1710352418,"cancelled_at":null,"failed_at":null,"completed_at":null,"required_action":null,"last_error":null,"model":"gpt-4o","instructions":null,"tools":[{"type":"function","function":{"name":"get_current_weather","description":"Get the current weather in a given location","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The city and state, e.g. San Francisco, CA"},"unit":{"type":"string","enum":["celsius","fahrenheit"]}},"required":["location"]}}}],"metadata":{},"temperature":1.0,"top_p":1.0,"max_completion_tokens":null,"max_prompt_tokens":null,"truncation_strategy":{"type":"auto","last_messages":null},"incomplete_details":null,"usage":null,"response_format":"auto","tool_choice":"auto","parallel_tool_calls":true}}
event: thread.run.queued
data: {"id":"run_123","object":"thread.run","created_at":1710351818,"assistant_id":"asst_123","thread_id":"thread_123","status":"queued","started_at":null,"expires_at":1710352418,"cancelled_at":null,"failed_at":null,"completed_at":null,"required_action":null,"last_error":null,"model":"gpt-4o","instructions":null,"tools":[{"type":"function","function":{"name":"get_current_weather","description":"Get the current weather in a given location","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The city and state, e.g. San Francisco, CA"},"unit":{"type":"string","enum":["celsius","fahrenheit"]}},"required":["location"]}}}],"metadata":{},"temperature":1.0,"top_p":1.0,"max_completion_tokens":null,"max_prompt_tokens":null,"truncation_strategy":{"type":"auto","last_messages":null},"incomplete_details":null,"usage":null,"response_format":"auto","tool_choice":"auto","parallel_tool_calls":true}}
event: thread.run.in_progress
data: {"id":"run_123","object":"thread.run","created_at":1710351818,"assistant_id":"asst_123","thread_id":"thread_123","status":"in_progress","started_at":1710351818,"expires_at":1710352418,"cancelled_at":null,"failed_at":null,"completed_at":null,"required_action":null,"last_error":null,"model":"gpt-4o","instructions":null,"tools":[{"type":"function","function":{"name":"get_current_weather","description":"Get the current weather in a given location","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The city and state, e.g. San Francisco, CA"},"unit":{"type":"string","enum":["celsius","fahrenheit"]}},"required":["location"]}}}],"metadata":{},"temperature":1.0,"top_p":1.0,"max_completion_tokens":null,"max_prompt_tokens":null,"truncation_strategy":{"type":"auto","last_messages":null},"incomplete_details":null,"usage":null,"response_format":"auto","tool_choice":"auto","parallel_tool_calls":true}}
event: thread.run.step.created
data: {"id":"step_001","object":"thread.run.step","created_at":1710351819,"run_id":"run_123","assistant_id":"asst_123","thread_id":"thread_123","type":"tool_calls","status":"in_progress","cancelled_at":null,"completed_at":null,"expires_at":1710352418,"failed_at":null,"last_error":null,"step_details":{"type":"tool_calls","tool_calls":[]},"usage":null}
event: thread.run.step.in_progress
data: {"id":"step_001","object":"thread.run.step","created_at":1710351819,"run_id":"run_123","assistant_id":"asst_123","thread_id":"thread_123","type":"tool_calls","status":"in_progress","cancelled_at":null,"completed_at":null,"expires_at":1710352418,"failed_at":null,"last_error":null,"step_details":{"type":"tool_calls","tool_calls":[]},"usage":null}
event: thread.run.step.delta
data: {"id":"step_001","object":"thread.run.step.delta","delta":{"step_details":{"type":"tool_calls","tool_calls":[{"index":0,"id":"call_XXNp8YGaFrjrSjgqxtC8JJ1B","type":"function","function":{"name":"get_current_weather","arguments":"","output":null}}]}}}
event: thread.run.step.delta
data: {"id":"step_001","object":"thread.run.step.delta","delta":{"step_details":{"type":"tool_calls","tool_calls":[{"index":0,"type":"function","function":{"arguments":"{\""}}]}}}
event: thread.run.step.delta
data: {"id":"step_001","object":"thread.run.step.delta","delta":{"step_details":{"type":"tool_calls","tool_calls":[{"index":0,"type":"function","function":{"arguments":"location"}}]}}}
...
event: thread.run.step.delta
data: {"id":"step_001","object":"thread.run.step.delta","delta":{"step_details":{"type":"tool_calls","tool_calls":[{"index":0,"type":"function","function":{"arguments":"ahrenheit"}}]}}}
event: thread.run.step.delta
data: {"id":"step_001","object":"thread.run.step.delta","delta":{"step_details":{"type":"tool_calls","tool_calls":[{"index":0,"type":"function","function":{"arguments":"\"}"}}]}}}
event: thread.run.requires_action
data: {"id":"run_123","object":"thread.run","created_at":1710351818,"assistant_id":"asst_123","thread_id":"thread_123","status":"requires_action","started_at":1710351818,"expires_at":1710352418,"cancelled_at":null,"failed_at":null,"completed_at":null,"required_action":{"type":"submit_tool_outputs","submit_tool_outputs":{"tool_calls":[{"id":"call_XXNp8YGaFrjrSjgqxtC8JJ1B","type":"function","function":{"name":"get_current_weather","arguments":"{\"location\":\"San Francisco, CA\",\"unit\":\"fahrenheit\"}"}}]}},"last_error":null,"model":"gpt-4o","instructions":null,"tools":[{"type":"function","function":{"name":"get_current_weather","description":"Get the current weather in a given location","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The city and state, e.g. San Francisco, CA"},"unit":{"type":"string","enum":["celsius","fahrenheit"]}},"required":["location"]}}}],"metadata":{},"temperature":1.0,"top_p":1.0,"max_completion_tokens":null,"max_prompt_tokens":null,"truncation_strategy":{"type":"auto","last_messages":null},"incomplete_details":null,"usage":{"prompt_tokens":345,"completion_tokens":11,"total_tokens":356},"response_format":"auto","tool_choice":"auto","parallel_tool_calls":true}}
event: done
data: [DONE]
/threads/{thread_id}:
get:
operationId: getThread
tags:
- Assistants
summary: Retrieves a thread.
parameters:
- in: path
name: thread_id
required: true
schema:
type: string
description: The ID of the thread to retrieve.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ThreadObject"
x-oaiMeta:
name: Retrieve thread
group: threads
beta: true
returns: The [thread](/docs/api-reference/threads/object) object matching the
specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/threads/thread_abc123 \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "OpenAI-Beta: assistants=v2"
python: |
from openai import OpenAI
client = OpenAI()
my_thread = client.beta.threads.retrieve("thread_abc123")
print(my_thread)
node.js: |-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const myThread = await openai.beta.threads.retrieve(
"thread_abc123"
);
console.log(myThread);
}
main();
response: |
{
"id": "thread_abc123",
"object": "thread",
"created_at": 1699014083,
"metadata": {},
"tool_resources": {
"code_interpreter": {
"file_ids": []
}
}
}
post:
operationId: modifyThread
tags:
- Assistants
summary: Modifies a thread.
parameters:
- in: path
name: thread_id
required: true
schema:
type: string
description: The ID of the thread to modify. Only the `metadata` can be modified.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/ModifyThreadRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ThreadObject"
x-oaiMeta:
name: Modify thread
group: threads
beta: true
returns: The modified [thread](/docs/api-reference/threads/object) object
matching the specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/threads/thread_abc123 \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"metadata": {
"modified": "true",
"user": "abc123"
}
}'
python: |
from openai import OpenAI
client = OpenAI()
my_updated_thread = client.beta.threads.update(
"thread_abc123",
metadata={
"modified": "true",
"user": "abc123"
}
)
print(my_updated_thread)
node.js: |-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const updatedThread = await openai.beta.threads.update(
"thread_abc123",
{
metadata: { modified: "true", user: "abc123" },
}
);
console.log(updatedThread);
}
main();
response: |
{
"id": "thread_abc123",
"object": "thread",
"created_at": 1699014083,
"metadata": {
"modified": "true",
"user": "abc123"
},
"tool_resources": {}
}
delete:
operationId: deleteThread
tags:
- Assistants
summary: Delete a thread.
parameters:
- in: path
name: thread_id
required: true
schema:
type: string
description: The ID of the thread to delete.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/DeleteThreadResponse"
x-oaiMeta:
name: Delete thread
group: threads
beta: true
returns: Deletion status
examples:
request:
curl: |
curl https://api.openai.com/v1/threads/thread_abc123 \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "OpenAI-Beta: assistants=v2" \
-X DELETE
python: |
from openai import OpenAI
client = OpenAI()
response = client.beta.threads.delete("thread_abc123")
print(response)
node.js: |-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const response = await openai.beta.threads.del("thread_abc123");
console.log(response);
}
main();
response: |
{
"id": "thread_abc123",
"object": "thread.deleted",
"deleted": true
}
/threads/{thread_id}/messages:
get:
operationId: listMessages
tags:
- Assistants
summary: Returns a list of messages for a given thread.
parameters:
- in: path
name: thread_id
required: true
schema:
type: string
description: The ID of the [thread](/docs/api-reference/threads) the messages
belong to.
- name: limit
in: query
description: >
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
required: false
schema:
type: integer
default: 20
- name: order
in: query
description: >
Sort order by the `created_at` timestamp of the objects. `asc` for
ascending order and `desc` for descending order.
schema:
type: string
default: desc
enum:
- asc
- desc
- name: after
in: query
description: >
A cursor for use in pagination. `after` is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
schema:
type: string
- name: before
in: query
description: >
A cursor for use in pagination. `before` is an object ID that
defines your place in the list. For instance, if you make a list
request and receive 100 objects, starting with obj_foo, your
subsequent call can include before=obj_foo in order to fetch the
previous page of the list.
schema:
type: string
- name: run_id
in: query
description: |
Filter messages by the run ID that generated them.
schema:
type: string
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ListMessagesResponse"
x-oaiMeta:
name: List messages
group: threads
beta: true
returns: A list of [message](/docs/api-reference/messages) objects.
examples:
request:
curl: |
curl https://api.openai.com/v1/threads/thread_abc123/messages \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "OpenAI-Beta: assistants=v2"
          python: |
            from openai import OpenAI
            client = OpenAI()
            thread_messages = client.beta.threads.messages.list("thread_abc123")
            print(thread_messages.data)
node.js: |-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const threadMessages = await openai.beta.threads.messages.list(
"thread_abc123"
);
console.log(threadMessages.data);
}
main();
response: >
{
"object": "list",
"data": [
{
"id": "msg_abc123",
"object": "thread.message",
"created_at": 1699016383,
"assistant_id": null,
"thread_id": "thread_abc123",
"run_id": null,
"role": "user",
"content": [
{
"type": "text",
"text": {
"value": "How does AI work? Explain it in simple terms.",
"annotations": []
}
}
],
"attachments": [],
"metadata": {}
},
{
"id": "msg_abc456",
"object": "thread.message",
"created_at": 1699016383,
"assistant_id": null,
"thread_id": "thread_abc123",
"run_id": null,
"role": "user",
"content": [
{
"type": "text",
"text": {
"value": "Hello, what is AI?",
"annotations": []
}
}
],
"attachments": [],
"metadata": {}
}
],
"first_id": "msg_abc123",
"last_id": "msg_abc456",
"has_more": false
}
post:
operationId: createMessage
tags:
- Assistants
summary: Create a message.
parameters:
- in: path
name: thread_id
required: true
schema:
type: string
description: The ID of the [thread](/docs/api-reference/threads) to create a
message for.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CreateMessageRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/MessageObject"
x-oaiMeta:
name: Create message
group: threads
beta: true
returns: A [message](/docs/api-reference/messages/object) object.
examples:
request:
curl: |
curl https://api.openai.com/v1/threads/thread_abc123/messages \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"role": "user",
"content": "How does AI work? Explain it in simple terms."
}'
python: |
from openai import OpenAI
client = OpenAI()
thread_message = client.beta.threads.messages.create(
"thread_abc123",
role="user",
content="How does AI work? Explain it in simple terms.",
)
print(thread_message)
          node.js: |-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const threadMessages = await openai.beta.threads.messages.create(
"thread_abc123",
{ role: "user", content: "How does AI work? Explain it in simple terms." }
);
console.log(threadMessages);
}
main();
response: |
{
"id": "msg_abc123",
"object": "thread.message",
"created_at": 1713226573,
"assistant_id": null,
"thread_id": "thread_abc123",
"run_id": null,
"role": "user",
"content": [
{
"type": "text",
"text": {
"value": "How does AI work? Explain it in simple terms.",
"annotations": []
}
}
],
"attachments": [],
"metadata": {}
}
/threads/{thread_id}/messages/{message_id}:
get:
operationId: getMessage
tags:
- Assistants
summary: Retrieve a message.
parameters:
- in: path
name: thread_id
required: true
schema:
type: string
description: The ID of the [thread](/docs/api-reference/threads) to which this
message belongs.
- in: path
name: message_id
required: true
schema:
type: string
description: The ID of the message to retrieve.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/MessageObject"
x-oaiMeta:
name: Retrieve message
group: threads
beta: true
returns: The [message](/docs/api-reference/messages/object) object matching the
specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/threads/thread_abc123/messages/msg_abc123 \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "OpenAI-Beta: assistants=v2"
python: |
from openai import OpenAI
client = OpenAI()
message = client.beta.threads.messages.retrieve(
message_id="msg_abc123",
thread_id="thread_abc123",
)
print(message)
node.js: |-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const message = await openai.beta.threads.messages.retrieve(
"thread_abc123",
"msg_abc123"
);
console.log(message);
}
main();
response: |
{
"id": "msg_abc123",
"object": "thread.message",
"created_at": 1699017614,
"assistant_id": null,
"thread_id": "thread_abc123",
"run_id": null,
"role": "user",
"content": [
{
"type": "text",
"text": {
"value": "How does AI work? Explain it in simple terms.",
"annotations": []
}
}
],
"attachments": [],
"metadata": {}
}
post:
operationId: modifyMessage
tags:
- Assistants
summary: Modifies a message.
parameters:
- in: path
name: thread_id
required: true
schema:
type: string
description: The ID of the thread to which this message belongs.
- in: path
name: message_id
required: true
schema:
type: string
description: The ID of the message to modify.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/ModifyMessageRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/MessageObject"
x-oaiMeta:
name: Modify message
group: threads
beta: true
returns: The modified [message](/docs/api-reference/messages/object) object.
examples:
request:
curl: |
curl https://api.openai.com/v1/threads/thread_abc123/messages/msg_abc123 \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"metadata": {
"modified": "true",
"user": "abc123"
}
}'
python: |
from openai import OpenAI
client = OpenAI()
message = client.beta.threads.messages.update(
                message_id="msg_abc123",
thread_id="thread_abc123",
metadata={
"modified": "true",
"user": "abc123",
},
)
print(message)
node.js: |-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const message = await openai.beta.threads.messages.update(
"thread_abc123",
"msg_abc123",
{
metadata: {
modified: "true",
user: "abc123",
},
}
              );
              console.log(message);
            }
            main();
response: |
{
"id": "msg_abc123",
"object": "thread.message",
"created_at": 1699017614,
"assistant_id": null,
"thread_id": "thread_abc123",
"run_id": null,
"role": "user",
"content": [
{
"type": "text",
"text": {
"value": "How does AI work? Explain it in simple terms.",
"annotations": []
}
}
],
              "attachments": [],
"metadata": {
"modified": "true",
"user": "abc123"
}
}
delete:
operationId: deleteMessage
tags:
- Assistants
summary: Deletes a message.
parameters:
- in: path
name: thread_id
required: true
schema:
type: string
description: The ID of the thread to which this message belongs.
- in: path
name: message_id
required: true
schema:
type: string
description: The ID of the message to delete.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/DeleteMessageResponse"
x-oaiMeta:
name: Delete message
group: threads
beta: true
returns: Deletion status
examples:
request:
curl: |
curl -X DELETE https://api.openai.com/v1/threads/thread_abc123/messages/msg_abc123 \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "OpenAI-Beta: assistants=v2"
python: |
from openai import OpenAI
client = OpenAI()
deleted_message = client.beta.threads.messages.delete(
                message_id="msg_abc123",
thread_id="thread_abc123",
)
print(deleted_message)
node.js: |-
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const deletedMessage = await openai.beta.threads.messages.del(
"thread_abc123",
"msg_abc123"
);
console.log(deletedMessage);
            }
            main();
response: |
{
"id": "msg_abc123",
"object": "thread.message.deleted",
"deleted": true
}
/threads/{thread_id}/runs:
get:
operationId: listRuns
tags:
- Assistants
summary: Returns a list of runs belonging to a thread.
parameters:
- name: thread_id
in: path
required: true
schema:
type: string
description: The ID of the thread the run belongs to.
- name: limit
in: query
description: >
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
required: false
schema:
type: integer
default: 20
- name: order
in: query
description: >
Sort order by the `created_at` timestamp of the objects. `asc` for
ascending order and `desc` for descending order.
schema:
type: string
default: desc
enum:
- asc
- desc
- name: after
in: query
description: >
A cursor for use in pagination. `after` is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
schema:
type: string
- name: before
in: query
description: >
A cursor for use in pagination. `before` is an object ID that
defines your place in the list. For instance, if you make a list
request and receive 100 objects, starting with obj_foo, your
subsequent call can include before=obj_foo in order to fetch the
previous page of the list.
schema:
type: string
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ListRunsResponse"
x-oaiMeta:
name: List runs
group: threads
beta: true
returns: A list of [run](/docs/api-reference/runs/object) objects.
examples:
request:
curl: |
curl https://api.openai.com/v1/threads/thread_abc123/runs \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2"
python: |
from openai import OpenAI
client = OpenAI()
runs = client.beta.threads.runs.list(
"thread_abc123"
)
print(runs)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const runs = await openai.beta.threads.runs.list(
"thread_abc123"
);
console.log(runs);
}
main();
response: |
{
"object": "list",
"data": [
{
"id": "run_abc123",
"object": "thread.run",
"created_at": 1699075072,
"assistant_id": "asst_abc123",
"thread_id": "thread_abc123",
"status": "completed",
"started_at": 1699075072,
"expires_at": null,
"cancelled_at": null,
"failed_at": null,
"completed_at": 1699075073,
"last_error": null,
"model": "gpt-4o",
"instructions": null,
"incomplete_details": null,
"tools": [
{
"type": "code_interpreter"
}
],
"tool_resources": {
"code_interpreter": {
"file_ids": [
"file-abc123",
"file-abc456"
]
}
},
"metadata": {},
"usage": {
"prompt_tokens": 123,
"completion_tokens": 456,
"total_tokens": 579
},
"temperature": 1.0,
"top_p": 1.0,
"max_prompt_tokens": 1000,
"max_completion_tokens": 1000,
"truncation_strategy": {
"type": "auto",
"last_messages": null
},
"response_format": "auto",
"tool_choice": "auto",
"parallel_tool_calls": true
},
{
"id": "run_abc456",
"object": "thread.run",
"created_at": 1699063290,
"assistant_id": "asst_abc123",
"thread_id": "thread_abc123",
"status": "completed",
"started_at": 1699063290,
"expires_at": null,
"cancelled_at": null,
"failed_at": null,
"completed_at": 1699063291,
"last_error": null,
"model": "gpt-4o",
"instructions": null,
"incomplete_details": null,
"tools": [
{
"type": "code_interpreter"
}
],
"tool_resources": {
"code_interpreter": {
"file_ids": [
"file-abc123",
"file-abc456"
]
}
},
"metadata": {},
"usage": {
"prompt_tokens": 123,
"completion_tokens": 456,
"total_tokens": 579
},
"temperature": 1.0,
"top_p": 1.0,
"max_prompt_tokens": 1000,
"max_completion_tokens": 1000,
"truncation_strategy": {
"type": "auto",
"last_messages": null
},
"response_format": "auto",
"tool_choice": "auto",
"parallel_tool_calls": true
}
],
"first_id": "run_abc123",
"last_id": "run_abc456",
"has_more": false
}
post:
operationId: createRun
tags:
- Assistants
summary: Create a run.
parameters:
- in: path
name: thread_id
required: true
schema:
type: string
description: The ID of the thread to run.
- name: include[]
in: query
description: |
A list of additional fields to include in the response. Currently the only supported value is `step_details.tool_calls[*].file_search.results[*].content` to fetch the file search result content.
See the [file search tool documentation](/docs/assistants/tools/file-search#customizing-file-search-settings) for more information.
schema:
type: array
items:
type: string
enum:
- step_details.tool_calls[*].file_search.results[*].content
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CreateRunRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/RunObject"
x-oaiMeta:
name: Create run
group: threads
beta: true
returns: A [run](/docs/api-reference/runs/object) object.
examples:
- title: Default
request:
curl: |
curl https://api.openai.com/v1/threads/thread_abc123/runs \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"assistant_id": "asst_abc123"
}'
python: |
from openai import OpenAI
client = OpenAI()
run = client.beta.threads.runs.create(
thread_id="thread_abc123",
assistant_id="asst_abc123"
)
print(run)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const run = await openai.beta.threads.runs.create(
"thread_abc123",
{ assistant_id: "asst_abc123" }
);
console.log(run);
}
main();
response: |
{
"id": "run_abc123",
"object": "thread.run",
"created_at": 1699063290,
"assistant_id": "asst_abc123",
"thread_id": "thread_abc123",
"status": "queued",
"started_at": 1699063290,
"expires_at": null,
"cancelled_at": null,
"failed_at": null,
"completed_at": 1699063291,
"last_error": null,
"model": "gpt-4o",
"instructions": null,
"incomplete_details": null,
"tools": [
{
"type": "code_interpreter"
}
],
"metadata": {},
"usage": null,
"temperature": 1.0,
"top_p": 1.0,
"max_prompt_tokens": 1000,
"max_completion_tokens": 1000,
"truncation_strategy": {
"type": "auto",
"last_messages": null
},
"response_format": "auto",
"tool_choice": "auto",
"parallel_tool_calls": true
}
- title: Streaming
request:
curl: |
curl https://api.openai.com/v1/threads/thread_123/runs \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"assistant_id": "asst_123",
"stream": true
}'
python: |
from openai import OpenAI
client = OpenAI()
stream = client.beta.threads.runs.create(
thread_id="thread_123",
assistant_id="asst_123",
stream=True
)
for event in stream:
print(event)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const stream = await openai.beta.threads.runs.create(
"thread_123",
{ assistant_id: "asst_123", stream: true }
);
for await (const event of stream) {
console.log(event);
}
}
main();
response: |
event: thread.run.created
              data: {"id":"run_123","object":"thread.run","created_at":1710330640,"assistant_id":"asst_123","thread_id":"thread_123","status":"queued","started_at":null,"expires_at":1710331240,"cancelled_at":null,"failed_at":null,"completed_at":null,"required_action":null,"last_error":null,"model":"gpt-4o","instructions":null,"tools":[],"metadata":{},"temperature":1.0,"top_p":1.0,"max_completion_tokens":null,"max_prompt_tokens":null,"truncation_strategy":{"type":"auto","last_messages":null},"incomplete_details":null,"usage":null,"response_format":"auto","tool_choice":"auto","parallel_tool_calls":true}
event: thread.run.queued
              data: {"id":"run_123","object":"thread.run","created_at":1710330640,"assistant_id":"asst_123","thread_id":"thread_123","status":"queued","started_at":null,"expires_at":1710331240,"cancelled_at":null,"failed_at":null,"completed_at":null,"required_action":null,"last_error":null,"model":"gpt-4o","instructions":null,"tools":[],"metadata":{},"temperature":1.0,"top_p":1.0,"max_completion_tokens":null,"max_prompt_tokens":null,"truncation_strategy":{"type":"auto","last_messages":null},"incomplete_details":null,"usage":null,"response_format":"auto","tool_choice":"auto","parallel_tool_calls":true}
event: thread.run.in_progress
              data: {"id":"run_123","object":"thread.run","created_at":1710330640,"assistant_id":"asst_123","thread_id":"thread_123","status":"in_progress","started_at":1710330641,"expires_at":1710331240,"cancelled_at":null,"failed_at":null,"completed_at":null,"required_action":null,"last_error":null,"model":"gpt-4o","instructions":null,"tools":[],"metadata":{},"temperature":1.0,"top_p":1.0,"max_completion_tokens":null,"max_prompt_tokens":null,"truncation_strategy":{"type":"auto","last_messages":null},"incomplete_details":null,"usage":null,"response_format":"auto","tool_choice":"auto","parallel_tool_calls":true}
event: thread.run.step.created
data: {"id":"step_001","object":"thread.run.step","created_at":1710330641,"run_id":"run_123","assistant_id":"asst_123","thread_id":"thread_123","type":"message_creation","status":"in_progress","cancelled_at":null,"completed_at":null,"expires_at":1710331240,"failed_at":null,"last_error":null,"step_details":{"type":"message_creation","message_creation":{"message_id":"msg_001"}},"usage":null}
event: thread.run.step.in_progress
data: {"id":"step_001","object":"thread.run.step","created_at":1710330641,"run_id":"run_123","assistant_id":"asst_123","thread_id":"thread_123","type":"message_creation","status":"in_progress","cancelled_at":null,"completed_at":null,"expires_at":1710331240,"failed_at":null,"last_error":null,"step_details":{"type":"message_creation","message_creation":{"message_id":"msg_001"}},"usage":null}
event: thread.message.created
data: {"id":"msg_001","object":"thread.message","created_at":1710330641,"assistant_id":"asst_123","thread_id":"thread_123","run_id":"run_123","status":"in_progress","incomplete_details":null,"incomplete_at":null,"completed_at":null,"role":"assistant","content":[],"metadata":{}}
event: thread.message.in_progress
data: {"id":"msg_001","object":"thread.message","created_at":1710330641,"assistant_id":"asst_123","thread_id":"thread_123","run_id":"run_123","status":"in_progress","incomplete_details":null,"incomplete_at":null,"completed_at":null,"role":"assistant","content":[],"metadata":{}}
event: thread.message.delta
data: {"id":"msg_001","object":"thread.message.delta","delta":{"content":[{"index":0,"type":"text","text":{"value":"Hello","annotations":[]}}]}}
...
event: thread.message.delta
data: {"id":"msg_001","object":"thread.message.delta","delta":{"content":[{"index":0,"type":"text","text":{"value":" today"}}]}}
event: thread.message.delta
data: {"id":"msg_001","object":"thread.message.delta","delta":{"content":[{"index":0,"type":"text","text":{"value":"?"}}]}}
event: thread.message.completed
data: {"id":"msg_001","object":"thread.message","created_at":1710330641,"assistant_id":"asst_123","thread_id":"thread_123","run_id":"run_123","status":"completed","incomplete_details":null,"incomplete_at":null,"completed_at":1710330642,"role":"assistant","content":[{"type":"text","text":{"value":"Hello! How can I assist you today?","annotations":[]}}],"metadata":{}}
event: thread.run.step.completed
data: {"id":"step_001","object":"thread.run.step","created_at":1710330641,"run_id":"run_123","assistant_id":"asst_123","thread_id":"thread_123","type":"message_creation","status":"completed","cancelled_at":null,"completed_at":1710330642,"expires_at":1710331240,"failed_at":null,"last_error":null,"step_details":{"type":"message_creation","message_creation":{"message_id":"msg_001"}},"usage":{"prompt_tokens":20,"completion_tokens":11,"total_tokens":31}}
event: thread.run.completed
              data: {"id":"run_123","object":"thread.run","created_at":1710330640,"assistant_id":"asst_123","thread_id":"thread_123","status":"completed","started_at":1710330641,"expires_at":null,"cancelled_at":null,"failed_at":null,"completed_at":1710330642,"required_action":null,"last_error":null,"model":"gpt-4o","instructions":null,"tools":[],"metadata":{},"temperature":1.0,"top_p":1.0,"max_completion_tokens":null,"max_prompt_tokens":null,"truncation_strategy":{"type":"auto","last_messages":null},"incomplete_details":null,"usage":{"prompt_tokens":20,"completion_tokens":11,"total_tokens":31},"response_format":"auto","tool_choice":"auto","parallel_tool_calls":true}
event: done
data: [DONE]
- title: Streaming with Functions
request:
curl: >
curl https://api.openai.com/v1/threads/thread_abc123/runs \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"assistant_id": "asst_abc123",
"tools": [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location"]
}
}
}
],
"stream": true
}'
              python: |
from openai import OpenAI
client = OpenAI()
tools = [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
},
"required": ["location"],
},
}
}
]
stream = client.beta.threads.runs.create(
thread_id="thread_abc123",
assistant_id="asst_abc123",
tools=tools,
stream=True
)
for event in stream:
print(event)
              node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
const tools = [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
},
"required": ["location"],
},
}
}
];
async function main() {
const stream = await openai.beta.threads.runs.create(
"thread_abc123",
{
assistant_id: "asst_abc123",
tools: tools,
stream: true
}
);
for await (const event of stream) {
console.log(event);
}
}
main();
response: |
event: thread.run.created
              data: {"id":"run_123","object":"thread.run","created_at":1710348075,"assistant_id":"asst_123","thread_id":"thread_123","status":"queued","started_at":null,"expires_at":1710348675,"cancelled_at":null,"failed_at":null,"completed_at":null,"required_action":null,"last_error":null,"model":"gpt-4o","instructions":null,"tools":[],"metadata":{},"temperature":1.0,"top_p":1.0,"max_completion_tokens":null,"max_prompt_tokens":null,"truncation_strategy":{"type":"auto","last_messages":null},"incomplete_details":null,"usage":null,"response_format":"auto","tool_choice":"auto","parallel_tool_calls":true}
event: thread.run.queued
              data: {"id":"run_123","object":"thread.run","created_at":1710348075,"assistant_id":"asst_123","thread_id":"thread_123","status":"queued","started_at":null,"expires_at":1710348675,"cancelled_at":null,"failed_at":null,"completed_at":null,"required_action":null,"last_error":null,"model":"gpt-4o","instructions":null,"tools":[],"metadata":{},"temperature":1.0,"top_p":1.0,"max_completion_tokens":null,"max_prompt_tokens":null,"truncation_strategy":{"type":"auto","last_messages":null},"incomplete_details":null,"usage":null,"response_format":"auto","tool_choice":"auto","parallel_tool_calls":true}
event: thread.run.in_progress
              data: {"id":"run_123","object":"thread.run","created_at":1710348075,"assistant_id":"asst_123","thread_id":"thread_123","status":"in_progress","started_at":1710348075,"expires_at":1710348675,"cancelled_at":null,"failed_at":null,"completed_at":null,"required_action":null,"last_error":null,"model":"gpt-4o","instructions":null,"tools":[],"metadata":{},"temperature":1.0,"top_p":1.0,"max_completion_tokens":null,"max_prompt_tokens":null,"truncation_strategy":{"type":"auto","last_messages":null},"incomplete_details":null,"usage":null,"response_format":"auto","tool_choice":"auto","parallel_tool_calls":true}
event: thread.run.step.created
data: {"id":"step_001","object":"thread.run.step","created_at":1710348076,"run_id":"run_123","assistant_id":"asst_123","thread_id":"thread_123","type":"message_creation","status":"in_progress","cancelled_at":null,"completed_at":null,"expires_at":1710348675,"failed_at":null,"last_error":null,"step_details":{"type":"message_creation","message_creation":{"message_id":"msg_001"}},"usage":null}
event: thread.run.step.in_progress
data: {"id":"step_001","object":"thread.run.step","created_at":1710348076,"run_id":"run_123","assistant_id":"asst_123","thread_id":"thread_123","type":"message_creation","status":"in_progress","cancelled_at":null,"completed_at":null,"expires_at":1710348675,"failed_at":null,"last_error":null,"step_details":{"type":"message_creation","message_creation":{"message_id":"msg_001"}},"usage":null}
event: thread.message.created
data: {"id":"msg_001","object":"thread.message","created_at":1710348076,"assistant_id":"asst_123","thread_id":"thread_123","run_id":"run_123","status":"in_progress","incomplete_details":null,"incomplete_at":null,"completed_at":null,"role":"assistant","content":[],"metadata":{}}
event: thread.message.in_progress
data: {"id":"msg_001","object":"thread.message","created_at":1710348076,"assistant_id":"asst_123","thread_id":"thread_123","run_id":"run_123","status":"in_progress","incomplete_details":null,"incomplete_at":null,"completed_at":null,"role":"assistant","content":[],"metadata":{}}
event: thread.message.delta
data: {"id":"msg_001","object":"thread.message.delta","delta":{"content":[{"index":0,"type":"text","text":{"value":"Hello","annotations":[]}}]}}
...
event: thread.message.delta
data: {"id":"msg_001","object":"thread.message.delta","delta":{"content":[{"index":0,"type":"text","text":{"value":" today"}}]}}
event: thread.message.delta
data: {"id":"msg_001","object":"thread.message.delta","delta":{"content":[{"index":0,"type":"text","text":{"value":"?"}}]}}
event: thread.message.completed
data: {"id":"msg_001","object":"thread.message","created_at":1710348076,"assistant_id":"asst_123","thread_id":"thread_123","run_id":"run_123","status":"completed","incomplete_details":null,"incomplete_at":null,"completed_at":1710348077,"role":"assistant","content":[{"type":"text","text":{"value":"Hello! How can I assist you today?","annotations":[]}}],"metadata":{}}
event: thread.run.step.completed
data: {"id":"step_001","object":"thread.run.step","created_at":1710348076,"run_id":"run_123","assistant_id":"asst_123","thread_id":"thread_123","type":"message_creation","status":"completed","cancelled_at":null,"completed_at":1710348077,"expires_at":1710348675,"failed_at":null,"last_error":null,"step_details":{"type":"message_creation","message_creation":{"message_id":"msg_001"}},"usage":{"prompt_tokens":20,"completion_tokens":11,"total_tokens":31}}
event: thread.run.completed
              data: {"id":"run_123","object":"thread.run","created_at":1710348075,"assistant_id":"asst_123","thread_id":"thread_123","status":"completed","started_at":1710348075,"expires_at":null,"cancelled_at":null,"failed_at":null,"completed_at":1710348077,"required_action":null,"last_error":null,"model":"gpt-4o","instructions":null,"tools":[],"metadata":{},"temperature":1.0,"top_p":1.0,"max_completion_tokens":null,"max_prompt_tokens":null,"truncation_strategy":{"type":"auto","last_messages":null},"incomplete_details":null,"usage":{"prompt_tokens":20,"completion_tokens":11,"total_tokens":31},"response_format":"auto","tool_choice":"auto","parallel_tool_calls":true}
event: done
data: [DONE]
/threads/{thread_id}/runs/{run_id}:
get:
operationId: getRun
tags:
- Assistants
summary: Retrieves a run.
parameters:
- in: path
name: thread_id
required: true
schema:
type: string
description: The ID of the [thread](/docs/api-reference/threads) that was run.
- in: path
name: run_id
required: true
schema:
type: string
description: The ID of the run to retrieve.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/RunObject"
x-oaiMeta:
name: Retrieve run
group: threads
beta: true
returns: The [run](/docs/api-reference/runs/object) object matching the
specified ID.
examples:
request:
curl: >
curl
https://api.openai.com/v1/threads/thread_abc123/runs/run_abc123 \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "OpenAI-Beta: assistants=v2"
python: |
from openai import OpenAI
client = OpenAI()
run = client.beta.threads.runs.retrieve(
thread_id="thread_abc123",
run_id="run_abc123"
)
print(run)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const run = await openai.beta.threads.runs.retrieve(
"thread_abc123",
"run_abc123"
);
console.log(run);
}
main();
response: |
{
"id": "run_abc123",
"object": "thread.run",
"created_at": 1699075072,
"assistant_id": "asst_abc123",
"thread_id": "thread_abc123",
"status": "completed",
"started_at": 1699075072,
"expires_at": null,
"cancelled_at": null,
"failed_at": null,
"completed_at": 1699075073,
"last_error": null,
"model": "gpt-4o",
"instructions": null,
"incomplete_details": null,
"tools": [
{
"type": "code_interpreter"
}
],
"metadata": {},
"usage": {
"prompt_tokens": 123,
"completion_tokens": 456,
"total_tokens": 579
},
"temperature": 1.0,
"top_p": 1.0,
"max_prompt_tokens": 1000,
"max_completion_tokens": 1000,
"truncation_strategy": {
"type": "auto",
"last_messages": null
},
"response_format": "auto",
"tool_choice": "auto",
"parallel_tool_calls": true
}
post:
operationId: modifyRun
tags:
- Assistants
summary: Modifies a run.
parameters:
- in: path
name: thread_id
required: true
schema:
type: string
description: The ID of the [thread](/docs/api-reference/threads) that was run.
- in: path
name: run_id
required: true
schema:
type: string
description: The ID of the run to modify.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/ModifyRunRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/RunObject"
x-oaiMeta:
name: Modify run
group: threads
beta: true
returns: The modified [run](/docs/api-reference/runs/object) object matching the
specified ID.
examples:
request:
curl: >
curl
https://api.openai.com/v1/threads/thread_abc123/runs/run_abc123 \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"metadata": {
"user_id": "user_abc123"
}
}'
python: |
from openai import OpenAI
client = OpenAI()
run = client.beta.threads.runs.update(
thread_id="thread_abc123",
run_id="run_abc123",
metadata={"user_id": "user_abc123"},
)
print(run)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const run = await openai.beta.threads.runs.update(
"thread_abc123",
"run_abc123",
{
metadata: {
user_id: "user_abc123",
},
}
);
console.log(run);
}
main();
response: |
{
"id": "run_abc123",
"object": "thread.run",
"created_at": 1699075072,
"assistant_id": "asst_abc123",
"thread_id": "thread_abc123",
"status": "completed",
"started_at": 1699075072,
"expires_at": null,
"cancelled_at": null,
"failed_at": null,
"completed_at": 1699075073,
"last_error": null,
"model": "gpt-4o",
"instructions": null,
"incomplete_details": null,
"tools": [
{
"type": "code_interpreter"
}
],
"tool_resources": {
"code_interpreter": {
"file_ids": [
"file-abc123",
"file-abc456"
]
}
},
"metadata": {
"user_id": "user_abc123"
},
"usage": {
"prompt_tokens": 123,
"completion_tokens": 456,
"total_tokens": 579
},
"temperature": 1.0,
"top_p": 1.0,
"max_prompt_tokens": 1000,
"max_completion_tokens": 1000,
"truncation_strategy": {
"type": "auto",
"last_messages": null
},
"response_format": "auto",
"tool_choice": "auto",
"parallel_tool_calls": true
}
/threads/{thread_id}/runs/{run_id}/cancel:
post:
operationId: cancelRun
tags:
- Assistants
summary: Cancels a run that is `in_progress`.
parameters:
- in: path
name: thread_id
required: true
schema:
type: string
description: The ID of the thread to which this run belongs.
- in: path
name: run_id
required: true
schema:
type: string
description: The ID of the run to cancel.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/RunObject"
x-oaiMeta:
name: Cancel a run
group: threads
beta: true
returns: The modified [run](/docs/api-reference/runs/object) object matching the
specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/threads/thread_abc123/runs/run_abc123/cancel \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "OpenAI-Beta: assistants=v2" \
-X POST
python: |
from openai import OpenAI
client = OpenAI()
run = client.beta.threads.runs.cancel(
thread_id="thread_abc123",
run_id="run_abc123"
)
print(run)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const run = await openai.beta.threads.runs.cancel(
"thread_abc123",
"run_abc123"
);
console.log(run);
}
main();
response: |
{
"id": "run_abc123",
"object": "thread.run",
"created_at": 1699076126,
"assistant_id": "asst_abc123",
"thread_id": "thread_abc123",
"status": "cancelling",
"started_at": 1699076126,
"expires_at": 1699076726,
"cancelled_at": null,
"failed_at": null,
"completed_at": null,
"last_error": null,
"model": "gpt-4o",
"instructions": "You summarize books.",
"tools": [
{
"type": "file_search"
}
],
"tool_resources": {
"file_search": {
"vector_store_ids": ["vs_123"]
}
},
"metadata": {},
"usage": null,
"temperature": 1.0,
"top_p": 1.0,
"response_format": "auto",
"tool_choice": "auto",
"parallel_tool_calls": true
}
/threads/{thread_id}/runs/{run_id}/steps:
get:
operationId: listRunSteps
tags:
- Assistants
summary: Returns a list of run steps belonging to a run.
parameters:
- name: thread_id
in: path
required: true
schema:
type: string
description: The ID of the thread the run and run steps belong to.
- name: run_id
in: path
required: true
schema:
type: string
description: The ID of the run the run steps belong to.
- name: limit
in: query
description: >
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
required: false
schema:
type: integer
default: 20
- name: order
in: query
description: >
Sort order by the `created_at` timestamp of the objects. `asc` for
ascending order and `desc` for descending order.
schema:
type: string
default: desc
enum:
- asc
- desc
- name: after
in: query
description: >
A cursor for use in pagination. `after` is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
schema:
type: string
- name: before
in: query
description: >
A cursor for use in pagination. `before` is an object ID that
defines your place in the list. For instance, if you make a list
request and receive 100 objects, starting with obj_foo, your
subsequent call can include before=obj_foo in order to fetch the
previous page of the list.
schema:
type: string
- name: include[]
in: query
description: |
A list of additional fields to include in the response. Currently the only supported value is `step_details.tool_calls[*].file_search.results[*].content` to fetch the file search result content.
See the [file search tool documentation](/docs/assistants/tools/file-search#customizing-file-search-settings) for more information.
schema:
type: array
items:
type: string
enum:
- step_details.tool_calls[*].file_search.results[*].content
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ListRunStepsResponse"
x-oaiMeta:
name: List run steps
group: threads
beta: true
returns: A list of [run step](/docs/api-reference/run-steps/step-object)
objects.
examples:
request:
curl: |
curl https://api.openai.com/v1/threads/thread_abc123/runs/run_abc123/steps \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2"
python: |
from openai import OpenAI
client = OpenAI()
run_steps = client.beta.threads.runs.steps.list(
thread_id="thread_abc123",
run_id="run_abc123"
)
print(run_steps)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const runStep = await openai.beta.threads.runs.steps.list(
"thread_abc123",
"run_abc123"
);
console.log(runStep);
}
main();
response: |
{
"object": "list",
"data": [
{
"id": "step_abc123",
"object": "thread.run.step",
"created_at": 1699063291,
"run_id": "run_abc123",
"assistant_id": "asst_abc123",
"thread_id": "thread_abc123",
"type": "message_creation",
"status": "completed",
"cancelled_at": null,
"completed_at": 1699063291,
"expired_at": null,
"failed_at": null,
"last_error": null,
"step_details": {
"type": "message_creation",
"message_creation": {
"message_id": "msg_abc123"
}
},
"usage": {
"prompt_tokens": 123,
"completion_tokens": 456,
"total_tokens": 579
}
}
],
"first_id": "step_abc123",
"last_id": "step_abc456",
"has_more": false
}
/threads/{thread_id}/runs/{run_id}/steps/{step_id}:
get:
operationId: getRunStep
tags:
- Assistants
summary: Retrieves a run step.
parameters:
- in: path
name: thread_id
required: true
schema:
type: string
        description: The ID of the thread to which the run and run step belong.
- in: path
name: run_id
required: true
schema:
type: string
description: The ID of the run to which the run step belongs.
- in: path
name: step_id
required: true
schema:
type: string
description: The ID of the run step to retrieve.
- name: include[]
in: query
description: |
A list of additional fields to include in the response. Currently the only supported value is `step_details.tool_calls[*].file_search.results[*].content` to fetch the file search result content.
See the [file search tool documentation](/docs/assistants/tools/file-search#customizing-file-search-settings) for more information.
schema:
type: array
items:
type: string
enum:
- step_details.tool_calls[*].file_search.results[*].content
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/RunStepObject"
x-oaiMeta:
name: Retrieve run step
group: threads
beta: true
returns: The [run step](/docs/api-reference/run-steps/step-object) object
matching the specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/threads/thread_abc123/runs/run_abc123/steps/step_abc123 \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2"
python: |
from openai import OpenAI
client = OpenAI()
run_step = client.beta.threads.runs.steps.retrieve(
thread_id="thread_abc123",
run_id="run_abc123",
step_id="step_abc123"
)
print(run_step)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const runStep = await openai.beta.threads.runs.steps.retrieve(
"thread_abc123",
"run_abc123",
"step_abc123"
);
console.log(runStep);
}
main();
response: |
{
"id": "step_abc123",
"object": "thread.run.step",
"created_at": 1699063291,
"run_id": "run_abc123",
"assistant_id": "asst_abc123",
"thread_id": "thread_abc123",
"type": "message_creation",
"status": "completed",
"cancelled_at": null,
"completed_at": 1699063291,
"expired_at": null,
"failed_at": null,
"last_error": null,
"step_details": {
"type": "message_creation",
"message_creation": {
"message_id": "msg_abc123"
}
},
"usage": {
"prompt_tokens": 123,
"completion_tokens": 456,
"total_tokens": 579
}
}
/threads/{thread_id}/runs/{run_id}/submit_tool_outputs:
post:
      operationId: submitToolOutputsToRun
tags:
- Assistants
summary: >
When a run has the `status: "requires_action"` and
`required_action.type` is `submit_tool_outputs`, this endpoint can be
used to submit the outputs from the tool calls once they're all
completed. All outputs must be submitted in a single request.
parameters:
- in: path
name: thread_id
required: true
schema:
type: string
description: The ID of the [thread](/docs/api-reference/threads) to which this
run belongs.
- in: path
name: run_id
required: true
schema:
type: string
description: The ID of the run that requires the tool output submission.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/SubmitToolOutputsRunRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/RunObject"
x-oaiMeta:
name: Submit tool outputs to run
group: threads
beta: true
returns: The modified [run](/docs/api-reference/runs/object) object matching the
specified ID.
examples:
- title: Default
request:
curl: |
curl https://api.openai.com/v1/threads/thread_123/runs/run_123/submit_tool_outputs \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"tool_outputs": [
{
"tool_call_id": "call_001",
"output": "70 degrees and sunny."
}
]
}'
python: |
from openai import OpenAI
client = OpenAI()
run = client.beta.threads.runs.submit_tool_outputs(
thread_id="thread_123",
run_id="run_123",
tool_outputs=[
{
"tool_call_id": "call_001",
"output": "70 degrees and sunny."
}
]
)
print(run)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const run = await openai.beta.threads.runs.submitToolOutputs(
"thread_123",
"run_123",
{
tool_outputs: [
{
tool_call_id: "call_001",
output: "70 degrees and sunny.",
},
],
}
);
console.log(run);
}
main();
response: >
{
"id": "run_123",
"object": "thread.run",
"created_at": 1699075592,
"assistant_id": "asst_123",
"thread_id": "thread_123",
"status": "queued",
"started_at": 1699075592,
"expires_at": 1699076192,
"cancelled_at": null,
"failed_at": null,
"completed_at": null,
"last_error": null,
"model": "gpt-4o",
"instructions": null,
"tools": [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA"
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"]
}
},
"required": ["location"]
}
}
}
],
"metadata": {},
"usage": null,
"temperature": 1.0,
"top_p": 1.0,
"max_prompt_tokens": 1000,
"max_completion_tokens": 1000,
"truncation_strategy": {
"type": "auto",
"last_messages": null
},
"response_format": "auto",
"tool_choice": "auto",
"parallel_tool_calls": true
}
- title: Streaming
request:
curl: |
curl https://api.openai.com/v1/threads/thread_123/runs/run_123/submit_tool_outputs \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"tool_outputs": [
{
"tool_call_id": "call_001",
"output": "70 degrees and sunny."
}
],
"stream": true
}'
python: |
from openai import OpenAI
client = OpenAI()
stream = client.beta.threads.runs.submit_tool_outputs(
thread_id="thread_123",
run_id="run_123",
tool_outputs=[
{
"tool_call_id": "call_001",
"output": "70 degrees and sunny."
}
],
stream=True
)
for event in stream:
print(event)
              node.js: |
                import OpenAI from "openai";
                const openai = new OpenAI();
                async function main() {
                  const stream = await openai.beta.threads.runs.submitToolOutputs(
                    "thread_123",
                    "run_123",
                    {
                      tool_outputs: [
                        {
                          tool_call_id: "call_001",
                          output: "70 degrees and sunny.",
                        },
                      ],
                      stream: true,
                    }
                  );
                  for await (const event of stream) {
                    console.log(event);
                  }
                }
                main();
response: |
event: thread.run.step.completed
data: {"id":"step_001","object":"thread.run.step","created_at":1710352449,"run_id":"run_123","assistant_id":"asst_123","thread_id":"thread_123","type":"tool_calls","status":"completed","cancelled_at":null,"completed_at":1710352475,"expires_at":1710353047,"failed_at":null,"last_error":null,"step_details":{"type":"tool_calls","tool_calls":[{"id":"call_iWr0kQ2EaYMaxNdl0v3KYkx7","type":"function","function":{"name":"get_current_weather","arguments":"{\"location\":\"San Francisco, CA\",\"unit\":\"fahrenheit\"}","output":"70 degrees and sunny."}}]},"usage":{"prompt_tokens":291,"completion_tokens":24,"total_tokens":315}}
event: thread.run.queued
data: {"id":"run_123","object":"thread.run","created_at":1710352447,"assistant_id":"asst_123","thread_id":"thread_123","status":"queued","started_at":1710352448,"expires_at":1710353047,"cancelled_at":null,"failed_at":null,"completed_at":null,"required_action":null,"last_error":null,"model":"gpt-4o","instructions":null,"tools":[{"type":"function","function":{"name":"get_current_weather","description":"Get the current weather in a given location","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The city and state, e.g. San Francisco, CA"},"unit":{"type":"string","enum":["celsius","fahrenheit"]}},"required":["location"]}}}],"metadata":{},"temperature":1.0,"top_p":1.0,"max_completion_tokens":null,"max_prompt_tokens":null,"truncation_strategy":{"type":"auto","last_messages":null},"incomplete_details":null,"usage":null,"response_format":"auto","tool_choice":"auto","parallel_tool_calls":true}}
event: thread.run.in_progress
data: {"id":"run_123","object":"thread.run","created_at":1710352447,"assistant_id":"asst_123","thread_id":"thread_123","status":"in_progress","started_at":1710352475,"expires_at":1710353047,"cancelled_at":null,"failed_at":null,"completed_at":null,"required_action":null,"last_error":null,"model":"gpt-4o","instructions":null,"tools":[{"type":"function","function":{"name":"get_current_weather","description":"Get the current weather in a given location","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The city and state, e.g. San Francisco, CA"},"unit":{"type":"string","enum":["celsius","fahrenheit"]}},"required":["location"]}}}],"metadata":{},"temperature":1.0,"top_p":1.0,"max_completion_tokens":null,"max_prompt_tokens":null,"truncation_strategy":{"type":"auto","last_messages":null},"incomplete_details":null,"usage":null,"response_format":"auto","tool_choice":"auto","parallel_tool_calls":true}}
event: thread.run.step.created
data: {"id":"step_002","object":"thread.run.step","created_at":1710352476,"run_id":"run_123","assistant_id":"asst_123","thread_id":"thread_123","type":"message_creation","status":"in_progress","cancelled_at":null,"completed_at":null,"expires_at":1710353047,"failed_at":null,"last_error":null,"step_details":{"type":"message_creation","message_creation":{"message_id":"msg_002"}},"usage":null}
event: thread.run.step.in_progress
data: {"id":"step_002","object":"thread.run.step","created_at":1710352476,"run_id":"run_123","assistant_id":"asst_123","thread_id":"thread_123","type":"message_creation","status":"in_progress","cancelled_at":null,"completed_at":null,"expires_at":1710353047,"failed_at":null,"last_error":null,"step_details":{"type":"message_creation","message_creation":{"message_id":"msg_002"}},"usage":null}
event: thread.message.created
data: {"id":"msg_002","object":"thread.message","created_at":1710352476,"assistant_id":"asst_123","thread_id":"thread_123","run_id":"run_123","status":"in_progress","incomplete_details":null,"incomplete_at":null,"completed_at":null,"role":"assistant","content":[],"metadata":{}}
event: thread.message.in_progress
data: {"id":"msg_002","object":"thread.message","created_at":1710352476,"assistant_id":"asst_123","thread_id":"thread_123","run_id":"run_123","status":"in_progress","incomplete_details":null,"incomplete_at":null,"completed_at":null,"role":"assistant","content":[],"metadata":{}}
event: thread.message.delta
data: {"id":"msg_002","object":"thread.message.delta","delta":{"content":[{"index":0,"type":"text","text":{"value":"The","annotations":[]}}]}}
event: thread.message.delta
data: {"id":"msg_002","object":"thread.message.delta","delta":{"content":[{"index":0,"type":"text","text":{"value":" current"}}]}}
event: thread.message.delta
data: {"id":"msg_002","object":"thread.message.delta","delta":{"content":[{"index":0,"type":"text","text":{"value":" weather"}}]}}
...
event: thread.message.delta
data: {"id":"msg_002","object":"thread.message.delta","delta":{"content":[{"index":0,"type":"text","text":{"value":" sunny"}}]}}
event: thread.message.delta
data: {"id":"msg_002","object":"thread.message.delta","delta":{"content":[{"index":0,"type":"text","text":{"value":"."}}]}}
event: thread.message.completed
data: {"id":"msg_002","object":"thread.message","created_at":1710352476,"assistant_id":"asst_123","thread_id":"thread_123","run_id":"run_123","status":"completed","incomplete_details":null,"incomplete_at":null,"completed_at":1710352477,"role":"assistant","content":[{"type":"text","text":{"value":"The current weather in San Francisco, CA is 70 degrees Fahrenheit and sunny.","annotations":[]}}],"metadata":{}}
event: thread.run.step.completed
data: {"id":"step_002","object":"thread.run.step","created_at":1710352476,"run_id":"run_123","assistant_id":"asst_123","thread_id":"thread_123","type":"message_creation","status":"completed","cancelled_at":null,"completed_at":1710352477,"expires_at":1710353047,"failed_at":null,"last_error":null,"step_details":{"type":"message_creation","message_creation":{"message_id":"msg_002"}},"usage":{"prompt_tokens":329,"completion_tokens":18,"total_tokens":347}}
event: thread.run.completed
data: {"id":"run_123","object":"thread.run","created_at":1710352447,"assistant_id":"asst_123","thread_id":"thread_123","status":"completed","started_at":1710352475,"expires_at":null,"cancelled_at":null,"failed_at":null,"completed_at":1710352477,"required_action":null,"last_error":null,"model":"gpt-4o","instructions":null,"tools":[{"type":"function","function":{"name":"get_current_weather","description":"Get the current weather in a given location","parameters":{"type":"object","properties":{"location":{"type":"string","description":"The city and state, e.g. San Francisco, CA"},"unit":{"type":"string","enum":["celsius","fahrenheit"]}},"required":["location"]}}}],"metadata":{},"temperature":1.0,"top_p":1.0,"max_completion_tokens":null,"max_prompt_tokens":null,"truncation_strategy":{"type":"auto","last_messages":null},"incomplete_details":null,"usage":{"prompt_tokens":20,"completion_tokens":11,"total_tokens":31},"response_format":"auto","tool_choice":"auto","parallel_tool_calls":true}}
event: done
data: [DONE]
/uploads:
post:
operationId: createUpload
tags:
- Uploads
summary: >
Creates an intermediate [Upload](/docs/api-reference/uploads/object)
object
that you can add [Parts](/docs/api-reference/uploads/part-object) to.
      Currently, an Upload can accept at most 8 GB in total and expires an
      hour after you create it.
Once you complete the Upload, we will create a
[File](/docs/api-reference/files/object) object that contains all the
parts
you uploaded. This File is usable in the rest of our platform as a
regular
File object.
For certain `purpose` values, the correct `mime_type` must be
specified.
Please refer to documentation for the
[supported MIME types for your use
case](/docs/assistants/tools/file-search#supported-files).
For guidance on the proper filename extensions for each purpose, please
follow the documentation on [creating a
File](/docs/api-reference/files/create).
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CreateUploadRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/Upload"
x-oaiMeta:
name: Create upload
group: uploads
returns: The [Upload](/docs/api-reference/uploads/object) object with status
`pending`.
examples:
request:
curl: |
curl https://api.openai.com/v1/uploads \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-d '{
"purpose": "fine-tune",
"filename": "training_examples.jsonl",
"bytes": 2147483648,
"mime_type": "text/jsonl"
}'
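          # Editor's addition: Python example for parity with other endpoints.
          # A sketch using the openai-python SDK's `client.uploads.create` helper;
          # verify parameter names against the current SDK before use.
          python: |
            from openai import OpenAI
            client = OpenAI()
            upload = client.uploads.create(
              purpose="fine-tune",
              filename="training_examples.jsonl",
              bytes=2147483648,
              mime_type="text/jsonl"
            )
            print(upload)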
response: |
{
"id": "upload_abc123",
"object": "upload",
"bytes": 2147483648,
"created_at": 1719184911,
"filename": "training_examples.jsonl",
"purpose": "fine-tune",
"status": "pending",
"expires_at": 1719127296
}
/uploads/{upload_id}/cancel:
post:
operationId: cancelUpload
tags:
- Uploads
summary: |
Cancels the Upload. No Parts may be added after an Upload is cancelled.
parameters:
- in: path
name: upload_id
required: true
schema:
type: string
example: upload_abc123
description: |
The ID of the Upload.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/Upload"
x-oaiMeta:
name: Cancel upload
group: uploads
returns: The [Upload](/docs/api-reference/uploads/object) object with status
`cancelled`.
examples:
request:
curl: |
            curl https://api.openai.com/v1/uploads/upload_abc123/cancel \
              -H "Authorization: Bearer $OPENAI_API_KEY" \
              -X POST
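          # Editor's addition: a Python sketch for parity with other endpoints,
          # using the openai-python SDK's `client.uploads.cancel` helper;
          # verify against the current SDK before use.
          python: |
            from openai import OpenAI
            client = OpenAI()
            upload = client.uploads.cancel("upload_abc123")
            print(upload)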
response: |
{
"id": "upload_abc123",
"object": "upload",
"bytes": 2147483648,
"created_at": 1719184911,
"filename": "training_examples.jsonl",
"purpose": "fine-tune",
"status": "cancelled",
"expires_at": 1719127296
}
/uploads/{upload_id}/complete:
post:
operationId: completeUpload
tags:
- Uploads
summary: >
Completes the [Upload](/docs/api-reference/uploads/object).
Within the returned Upload object, there is a nested
[File](/docs/api-reference/files/object) object that is ready to use in
the rest of the platform.
You can specify the order of the Parts by passing in an ordered list of
the Part IDs.
The number of bytes uploaded upon completion must match the number of
bytes initially specified when creating the Upload object. No Parts may
be added after an Upload is completed.
parameters:
- in: path
name: upload_id
required: true
schema:
type: string
example: upload_abc123
description: |
The ID of the Upload.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CompleteUploadRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/Upload"
x-oaiMeta:
name: Complete upload
group: uploads
returns: The [Upload](/docs/api-reference/uploads/object) object with status
`completed` with an additional `file` property containing the created
usable File object.
examples:
request:
curl: |
            curl https://api.openai.com/v1/uploads/upload_abc123/complete \
              -H "Authorization: Bearer $OPENAI_API_KEY" \
              -d '{
                "part_ids": ["part_def456", "part_ghi789"]
              }'
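          # Editor's addition: a Python sketch for parity with other endpoints,
          # using the openai-python SDK's `client.uploads.complete` helper;
          # verify against the current SDK before use.
          python: |
            from openai import OpenAI
            client = OpenAI()
            upload = client.uploads.complete(
              "upload_abc123",
              part_ids=["part_def456", "part_ghi789"]
            )
            print(upload)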
response: |
{
"id": "upload_abc123",
"object": "upload",
"bytes": 2147483648,
"created_at": 1719184911,
"filename": "training_examples.jsonl",
"purpose": "fine-tune",
"status": "completed",
"expires_at": 1719127296,
"file": {
"id": "file-xyz321",
"object": "file",
"bytes": 2147483648,
"created_at": 1719186911,
"filename": "training_examples.jsonl",
"purpose": "fine-tune",
}
}
/uploads/{upload_id}/parts:
post:
operationId: addUploadPart
tags:
- Uploads
summary: >
Adds a [Part](/docs/api-reference/uploads/part-object) to an
[Upload](/docs/api-reference/uploads/object) object. A Part represents a
chunk of bytes from the file you are trying to upload.
Each Part can be at most 64 MB, and you can add Parts until you hit the
Upload maximum of 8 GB.
It is possible to add multiple Parts in parallel. You can decide the
intended order of the Parts when you [complete the
Upload](/docs/api-reference/uploads/complete).
parameters:
- in: path
name: upload_id
required: true
schema:
type: string
example: upload_abc123
description: |
The ID of the Upload.
requestBody:
required: true
content:
multipart/form-data:
schema:
$ref: "#/components/schemas/AddUploadPartRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/UploadPart"
x-oaiMeta:
name: Add upload part
group: uploads
returns: The upload [Part](/docs/api-reference/uploads/part-object) object.
examples:
request:
curl: |
            curl https://api.openai.com/v1/uploads/upload_abc123/parts \
              -H "Authorization: Bearer $OPENAI_API_KEY" \
              -F data="aHR0cHM6Ly9hcGkub3BlbmFpLmNvbS92MS91cGxvYWRz..."
response: |
{
"id": "part_def456",
"object": "upload.part",
"created_at": 1719185911,
"upload_id": "upload_abc123"
}
/vector_stores:
get:
operationId: listVectorStores
tags:
- Vector stores
summary: Returns a list of vector stores.
parameters:
- name: limit
in: query
description: >
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
required: false
schema:
type: integer
default: 20
- name: order
in: query
description: >
Sort order by the `created_at` timestamp of the objects. `asc` for
ascending order and `desc` for descending order.
schema:
type: string
default: desc
enum:
- asc
- desc
- name: after
in: query
description: >
A cursor for use in pagination. `after` is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
schema:
type: string
- name: before
in: query
description: >
A cursor for use in pagination. `before` is an object ID that
defines your place in the list. For instance, if you make a list
request and receive 100 objects, starting with obj_foo, your
subsequent call can include before=obj_foo in order to fetch the
previous page of the list.
schema:
type: string
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ListVectorStoresResponse"
x-oaiMeta:
name: List vector stores
group: vector_stores
returns: A list of [vector store](/docs/api-reference/vector-stores/object)
objects.
examples:
request:
curl: |
curl https://api.openai.com/v1/vector_stores \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2"
python: |
from openai import OpenAI
client = OpenAI()
vector_stores = client.vector_stores.list()
print(vector_stores)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const vectorStores = await openai.vectorStores.list();
console.log(vectorStores);
}
main();
response: |
{
"object": "list",
"data": [
{
"id": "vs_abc123",
"object": "vector_store",
"created_at": 1699061776,
"name": "Support FAQ",
"bytes": 139920,
"file_counts": {
"in_progress": 0,
"completed": 3,
"failed": 0,
"cancelled": 0,
"total": 3
}
},
{
"id": "vs_abc456",
"object": "vector_store",
"created_at": 1699061776,
"name": "Support FAQ v2",
"bytes": 139920,
"file_counts": {
"in_progress": 0,
"completed": 3,
"failed": 0,
"cancelled": 0,
"total": 3
}
}
],
"first_id": "vs_abc123",
"last_id": "vs_abc456",
"has_more": false
}
post:
operationId: createVectorStore
tags:
- Vector stores
summary: Create a vector store.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CreateVectorStoreRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/VectorStoreObject"
x-oaiMeta:
name: Create vector store
group: vector_stores
returns: A [vector store](/docs/api-reference/vector-stores/object) object.
examples:
request:
curl: |
curl https://api.openai.com/v1/vector_stores \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"name": "Support FAQ"
}'
python: |
from openai import OpenAI
client = OpenAI()
vector_store = client.vector_stores.create(
name="Support FAQ"
)
print(vector_store)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const vectorStore = await openai.vectorStores.create({
name: "Support FAQ"
});
console.log(vectorStore);
}
main();
response: |
{
"id": "vs_abc123",
"object": "vector_store",
"created_at": 1699061776,
"name": "Support FAQ",
"bytes": 139920,
"file_counts": {
"in_progress": 0,
"completed": 3,
"failed": 0,
"cancelled": 0,
"total": 3
}
}
/vector_stores/{vector_store_id}:
get:
operationId: getVectorStore
tags:
- Vector stores
summary: Retrieves a vector store.
parameters:
- in: path
name: vector_store_id
required: true
schema:
type: string
description: The ID of the vector store to retrieve.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/VectorStoreObject"
x-oaiMeta:
name: Retrieve vector store
group: vector_stores
returns: The [vector store](/docs/api-reference/vector-stores/object) object
matching the specified ID.
examples:
request:
curl: |
curl https://api.openai.com/v1/vector_stores/vs_abc123 \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2"
python: |
from openai import OpenAI
client = OpenAI()
vector_store = client.vector_stores.retrieve(
vector_store_id="vs_abc123"
)
print(vector_store)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const vectorStore = await openai.vectorStores.retrieve(
"vs_abc123"
);
console.log(vectorStore);
}
main();
response: |
{
"id": "vs_abc123",
"object": "vector_store",
"created_at": 1699061776
}
post:
operationId: modifyVectorStore
tags:
- Vector stores
summary: Modifies a vector store.
parameters:
- in: path
name: vector_store_id
required: true
schema:
type: string
description: The ID of the vector store to modify.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/UpdateVectorStoreRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/VectorStoreObject"
x-oaiMeta:
name: Modify vector store
group: vector_stores
returns: The modified [vector store](/docs/api-reference/vector-stores/object)
object.
examples:
request:
curl: |
curl https://api.openai.com/v1/vector_stores/vs_abc123 \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2"
-d '{
"name": "Support FAQ"
}'
python: |
from openai import OpenAI
client = OpenAI()
vector_store = client.vector_stores.update(
vector_store_id="vs_abc123",
name="Support FAQ"
)
print(vector_store)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const vectorStore = await openai.vectorStores.update(
"vs_abc123",
{
name: "Support FAQ"
}
);
console.log(vectorStore);
}
main();
response: |
{
"id": "vs_abc123",
"object": "vector_store",
"created_at": 1699061776,
"name": "Support FAQ",
"bytes": 139920,
"file_counts": {
"in_progress": 0,
"completed": 3,
"failed": 0,
"cancelled": 0,
"total": 3
}
}
delete:
operationId: deleteVectorStore
tags:
- Vector stores
summary: Delete a vector store.
parameters:
- in: path
name: vector_store_id
required: true
schema:
type: string
description: The ID of the vector store to delete.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/DeleteVectorStoreResponse"
x-oaiMeta:
name: Delete vector store
group: vector_stores
returns: Deletion status
examples:
request:
curl: |
curl https://api.openai.com/v1/vector_stores/vs_abc123 \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2" \
-X DELETE
python: |
from openai import OpenAI
client = OpenAI()
deleted_vector_store = client.vector_stores.delete(
vector_store_id="vs_abc123"
)
print(deleted_vector_store)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const deletedVectorStore = await openai.vectorStores.del(
"vs_abc123"
);
console.log(deletedVectorStore);
}
main();
response: |
{
              "id": "vs_abc123",
              "object": "vector_store.deleted",
              "deleted": true
}
/vector_stores/{vector_store_id}/file_batches:
post:
operationId: createVectorStoreFileBatch
tags:
- Vector stores
summary: Create a vector store file batch.
parameters:
- in: path
name: vector_store_id
required: true
schema:
type: string
example: vs_abc123
description: |
The ID of the vector store for which to create a File Batch.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CreateVectorStoreFileBatchRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/VectorStoreFileBatchObject"
x-oaiMeta:
name: Create vector store file batch
group: vector_stores
returns: A [vector store file
batch](/docs/api-reference/vector-stores-file-batches/batch-object)
object.
examples:
request:
curl: >
curl
https://api.openai.com/v1/vector_stores/vs_abc123/file_batches \
-H "Authorization: Bearer $OPENAI_API_KEY" \
                -H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"file_ids": ["file-abc123", "file-abc456"]
}'
            python: |
              from openai import OpenAI
              client = OpenAI()
              vector_store_file_batch = client.vector_stores.file_batches.create(
                  vector_store_id="vs_abc123",
                  file_ids=["file-abc123", "file-abc456"]
              )
              print(vector_store_file_batch)
node.js: >
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const myVectorStoreFileBatch = await openai.vectorStores.fileBatches.create(
"vs_abc123",
{
file_ids: ["file-abc123", "file-abc456"]
}
);
console.log(myVectorStoreFileBatch);
}
main();
response: |
{
"id": "vsfb_abc123",
"object": "vector_store.file_batch",
"created_at": 1699061776,
"vector_store_id": "vs_abc123",
"status": "in_progress",
"file_counts": {
"in_progress": 1,
"completed": 1,
"failed": 0,
"cancelled": 0,
                "total": 2
}
}
/vector_stores/{vector_store_id}/file_batches/{batch_id}:
get:
operationId: getVectorStoreFileBatch
tags:
- Vector stores
summary: Retrieves a vector store file batch.
parameters:
- in: path
name: vector_store_id
required: true
schema:
type: string
example: vs_abc123
description: The ID of the vector store that the file batch belongs to.
- in: path
name: batch_id
required: true
schema:
type: string
example: vsfb_abc123
description: The ID of the file batch being retrieved.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/VectorStoreFileBatchObject"
x-oaiMeta:
name: Retrieve vector store file batch
group: vector_stores
returns: The [vector store file
batch](/docs/api-reference/vector-stores-file-batches/batch-object)
object.
examples:
request:
curl: |
              curl https://api.openai.com/v1/vector_stores/vs_abc123/file_batches/vsfb_abc123 \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2"
            python: |
              from openai import OpenAI
              client = OpenAI()
              vector_store_file_batch = client.vector_stores.file_batches.retrieve(
                  vector_store_id="vs_abc123",
                  batch_id="vsfb_abc123"
              )
              print(vector_store_file_batch)
node.js: >
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const vectorStoreFileBatch = await openai.vectorStores.fileBatches.retrieve(
"vs_abc123",
"vsfb_abc123"
);
console.log(vectorStoreFileBatch);
}
main();
response: |
{
"id": "vsfb_abc123",
"object": "vector_store.file_batch",
"created_at": 1699061776,
"vector_store_id": "vs_abc123",
"status": "in_progress",
"file_counts": {
"in_progress": 1,
"completed": 1,
"failed": 0,
"cancelled": 0,
                "total": 2
}
}
/vector_stores/{vector_store_id}/file_batches/{batch_id}/cancel:
post:
operationId: cancelVectorStoreFileBatch
tags:
- Vector stores
summary: Cancel a vector store file batch. This attempts to cancel the
processing of files in this batch as soon as possible.
parameters:
- in: path
name: vector_store_id
required: true
schema:
type: string
description: The ID of the vector store that the file batch belongs to.
- in: path
name: batch_id
required: true
schema:
type: string
description: The ID of the file batch to cancel.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/VectorStoreFileBatchObject"
x-oaiMeta:
name: Cancel vector store file batch
group: vector_stores
returns: The modified vector store file batch object.
examples:
request:
curl: |
              curl https://api.openai.com/v1/vector_stores/vs_abc123/file_batches/vsfb_abc123/cancel \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2" \
-X POST
            python: |
              from openai import OpenAI
              client = OpenAI()
              deleted_vector_store_file_batch = client.vector_stores.file_batches.cancel(
                  vector_store_id="vs_abc123",
                  batch_id="vsfb_abc123"
              )
              print(deleted_vector_store_file_batch)
node.js: >
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const deletedVectorStoreFileBatch = await openai.vectorStores.fileBatches.cancel(
"vs_abc123",
"vsfb_abc123"
);
console.log(deletedVectorStoreFileBatch);
}
main();
response: |
{
"id": "vsfb_abc123",
"object": "vector_store.file_batch",
"created_at": 1699061776,
"vector_store_id": "vs_abc123",
"status": "in_progress",
"file_counts": {
"in_progress": 12,
"completed": 3,
"failed": 0,
"cancelled": 0,
                "total": 15
}
}
/vector_stores/{vector_store_id}/file_batches/{batch_id}/files:
get:
operationId: listFilesInVectorStoreBatch
tags:
- Vector stores
summary: Returns a list of vector store files in a batch.
parameters:
- name: vector_store_id
in: path
description: The ID of the vector store that the files belong to.
required: true
schema:
type: string
- name: batch_id
in: path
description: The ID of the file batch that the files belong to.
required: true
schema:
type: string
- name: limit
in: query
description: >
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
required: false
schema:
type: integer
default: 20
- name: order
in: query
description: >
Sort order by the `created_at` timestamp of the objects. `asc` for
ascending order and `desc` for descending order.
schema:
type: string
default: desc
enum:
- asc
- desc
- name: after
in: query
description: >
A cursor for use in pagination. `after` is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
schema:
type: string
- name: before
in: query
description: >
A cursor for use in pagination. `before` is an object ID that
defines your place in the list. For instance, if you make a list
request and receive 100 objects, starting with obj_foo, your
subsequent call can include before=obj_foo in order to fetch the
previous page of the list.
schema:
type: string
- name: filter
in: query
description: Filter by file status. One of `in_progress`, `completed`, `failed`,
`cancelled`.
schema:
type: string
enum:
- in_progress
- completed
- failed
- cancelled
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ListVectorStoreFilesResponse"
x-oaiMeta:
name: List vector store files in a batch
group: vector_stores
returns: A list of [vector store
file](/docs/api-reference/vector-stores-files/file-object) objects.
examples:
request:
curl: |
              curl https://api.openai.com/v1/vector_stores/vs_abc123/file_batches/vsfb_abc123/files \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2"
python: |
from openai import OpenAI
client = OpenAI()
vector_store_files = client.vector_stores.file_batches.list_files(
vector_store_id="vs_abc123",
batch_id="vsfb_abc123"
)
print(vector_store_files)
node.js: >
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const vectorStoreFiles = await openai.vectorStores.fileBatches.listFiles(
"vs_abc123",
"vsfb_abc123"
);
console.log(vectorStoreFiles);
}
main();
response: |
{
"object": "list",
"data": [
{
"id": "file-abc123",
"object": "vector_store.file",
"created_at": 1699061776,
"vector_store_id": "vs_abc123"
},
{
"id": "file-abc456",
"object": "vector_store.file",
"created_at": 1699061776,
"vector_store_id": "vs_abc123"
}
],
"first_id": "file-abc123",
"last_id": "file-abc456",
"has_more": false
}
/vector_stores/{vector_store_id}/files:
get:
operationId: listVectorStoreFiles
tags:
- Vector stores
summary: Returns a list of vector store files.
parameters:
- name: vector_store_id
in: path
description: The ID of the vector store that the files belong to.
required: true
schema:
type: string
- name: limit
in: query
description: >
A limit on the number of objects to be returned. Limit can range
between 1 and 100, and the default is 20.
required: false
schema:
type: integer
default: 20
- name: order
in: query
description: >
Sort order by the `created_at` timestamp of the objects. `asc` for
ascending order and `desc` for descending order.
schema:
type: string
default: desc
enum:
- asc
- desc
- name: after
in: query
description: >
A cursor for use in pagination. `after` is an object ID that defines
your place in the list. For instance, if you make a list request and
receive 100 objects, ending with obj_foo, your subsequent call can
include after=obj_foo in order to fetch the next page of the list.
schema:
type: string
- name: before
in: query
description: >
A cursor for use in pagination. `before` is an object ID that
defines your place in the list. For instance, if you make a list
request and receive 100 objects, starting with obj_foo, your
subsequent call can include before=obj_foo in order to fetch the
previous page of the list.
schema:
type: string
- name: filter
in: query
description: Filter by file status. One of `in_progress`, `completed`, `failed`,
`cancelled`.
schema:
type: string
enum:
- in_progress
- completed
- failed
- cancelled
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/ListVectorStoreFilesResponse"
x-oaiMeta:
name: List vector store files
group: vector_stores
returns: A list of [vector store
file](/docs/api-reference/vector-stores-files/file-object) objects.
examples:
request:
curl: |
curl https://api.openai.com/v1/vector_stores/vs_abc123/files \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2"
python: |
from openai import OpenAI
client = OpenAI()
vector_store_files = client.vector_stores.files.list(
vector_store_id="vs_abc123"
)
print(vector_store_files)
node.js: |
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const vectorStoreFiles = await openai.vectorStores.files.list(
"vs_abc123"
);
console.log(vectorStoreFiles);
}
main();
response: |
{
"object": "list",
"data": [
{
"id": "file-abc123",
"object": "vector_store.file",
"created_at": 1699061776,
"vector_store_id": "vs_abc123"
},
{
"id": "file-abc456",
"object": "vector_store.file",
"created_at": 1699061776,
"vector_store_id": "vs_abc123"
}
],
"first_id": "file-abc123",
"last_id": "file-abc456",
"has_more": false
}
post:
operationId: createVectorStoreFile
tags:
- Vector stores
summary: Create a vector store file by attaching a
[File](/docs/api-reference/files) to a [vector
store](/docs/api-reference/vector-stores/object).
parameters:
- in: path
name: vector_store_id
required: true
schema:
type: string
example: vs_abc123
description: |
The ID of the vector store for which to create a File.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/CreateVectorStoreFileRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/VectorStoreFileObject"
x-oaiMeta:
name: Create vector store file
group: vector_stores
returns: A [vector store
file](/docs/api-reference/vector-stores-files/file-object) object.
examples:
request:
curl: |
curl https://api.openai.com/v1/vector_stores/vs_abc123/files \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2" \
-d '{
"file_id": "file-abc123"
}'
python: |
from openai import OpenAI
client = OpenAI()
vector_store_file = client.vector_stores.files.create(
vector_store_id="vs_abc123",
file_id="file-abc123"
)
print(vector_store_file)
node.js: >
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const myVectorStoreFile = await openai.vectorStores.files.create(
"vs_abc123",
{
file_id: "file-abc123"
}
);
console.log(myVectorStoreFile);
}
main();
response: |
{
"id": "file-abc123",
"object": "vector_store.file",
"created_at": 1699061776,
"usage_bytes": 1234,
"vector_store_id": "vs_abcd",
"status": "completed",
"last_error": null
}
/vector_stores/{vector_store_id}/files/{file_id}:
get:
operationId: getVectorStoreFile
tags:
- Vector stores
summary: Retrieves a vector store file.
parameters:
- in: path
name: vector_store_id
required: true
schema:
type: string
example: vs_abc123
description: The ID of the vector store that the file belongs to.
- in: path
name: file_id
required: true
schema:
type: string
example: file-abc123
description: The ID of the file being retrieved.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/VectorStoreFileObject"
x-oaiMeta:
name: Retrieve vector store file
group: vector_stores
returns: The [vector store
file](/docs/api-reference/vector-stores-files/file-object) object.
examples:
request:
curl: |
curl https://api.openai.com/v1/vector_stores/vs_abc123/files/file-abc123 \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2"
python: |
from openai import OpenAI
client = OpenAI()
vector_store_file = client.vector_stores.files.retrieve(
vector_store_id="vs_abc123",
file_id="file-abc123"
)
print(vector_store_file)
node.js: >
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const vectorStoreFile = await openai.vectorStores.files.retrieve(
"vs_abc123",
"file-abc123"
);
console.log(vectorStoreFile);
}
main();
response: |
{
"id": "file-abc123",
"object": "vector_store.file",
"created_at": 1699061776,
"vector_store_id": "vs_abcd",
"status": "completed",
"last_error": null
}
delete:
operationId: deleteVectorStoreFile
tags:
- Vector stores
summary: Delete a vector store file. This will remove the file from the vector
store but the file itself will not be deleted. To delete the file, use
the [delete file](/docs/api-reference/files/delete) endpoint.
parameters:
- in: path
name: vector_store_id
required: true
schema:
type: string
description: The ID of the vector store that the file belongs to.
- in: path
name: file_id
required: true
schema:
type: string
description: The ID of the file to delete.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/DeleteVectorStoreFileResponse"
x-oaiMeta:
name: Delete vector store file
group: vector_stores
returns: Deletion status
examples:
request:
curl: |
curl https://api.openai.com/v1/vector_stores/vs_abc123/files/file-abc123 \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-H "OpenAI-Beta: assistants=v2" \
-X DELETE
python: |
from openai import OpenAI
client = OpenAI()
deleted_vector_store_file = client.vector_stores.files.delete(
vector_store_id="vs_abc123",
file_id="file-abc123"
)
print(deleted_vector_store_file)
node.js: >
import OpenAI from "openai";
const openai = new OpenAI();
async function main() {
const deletedVectorStoreFile = await openai.vectorStores.files.del(
"vs_abc123",
"file-abc123"
);
console.log(deletedVectorStoreFile);
}
main();
response: |
{
              "id": "file-abc123",
              "object": "vector_store.file.deleted",
              "deleted": true
}
post:
operationId: updateVectorStoreFileAttributes
tags:
- Vector stores
summary: Update attributes on a vector store file.
parameters:
- in: path
name: vector_store_id
required: true
schema:
type: string
example: vs_abc123
description: The ID of the vector store the file belongs to.
- in: path
name: file_id
required: true
schema:
type: string
example: file-abc123
description: The ID of the file to update attributes.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/UpdateVectorStoreFileAttributesRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/VectorStoreFileObject"
x-oaiMeta:
name: Update vector store file attributes
group: vector_stores
returns: The updated [vector store
file](/docs/api-reference/vector-stores-files/file-object) object.
examples:
request:
curl: |
curl https://api.openai.com/v1/vector_stores/{vector_store_id}/files/{file_id} \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{"attributes": {"key1": "value1", "key2": 2}}'
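            python: |
              # NOTE: illustrative sketch, not an official example; assumes the
              # Python SDK exposes a `files.update` method accepting
              # `attributes` for this endpoint.
              from openai import OpenAI
              client = OpenAI()
              vector_store_file = client.vector_stores.files.update(
                  vector_store_id="vs_abc123",
                  file_id="file-abc123",
                  attributes={"key1": "value1", "key2": 2}
              )
              print(vector_store_file)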
response: |
{
"id": "file-abc123",
"object": "vector_store.file",
"usage_bytes": 1234,
"created_at": 1699061776,
"vector_store_id": "vs_abcd",
"status": "completed",
"last_error": null,
"chunking_strategy": {...},
"attributes": {"key1": "value1", "key2": 2}
}
/vector_stores/{vector_store_id}/files/{file_id}/content:
get:
operationId: retrieveVectorStoreFileContent
tags:
- Vector stores
summary: Retrieve the parsed contents of a vector store file.
parameters:
- in: path
name: vector_store_id
required: true
schema:
type: string
example: vs_abc123
description: The ID of the vector store.
- in: path
name: file_id
required: true
schema:
type: string
example: file-abc123
description: The ID of the file within the vector store.
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/VectorStoreFileContentResponse"
x-oaiMeta:
name: Retrieve vector store file content
group: vector_stores
returns: The parsed contents of the specified vector store file.
examples:
request:
curl: |
curl \
https://api.openai.com/v1/vector_stores/vs_abc123/files/file-abc123/content \
-H "Authorization: Bearer $OPENAI_API_KEY"
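            python: |
              # NOTE: illustrative sketch, not an official example; assumes the
              # Python SDK exposes a `files.content` method for this endpoint.
              from openai import OpenAI
              client = OpenAI()
              content = client.vector_stores.files.content(
                  vector_store_id="vs_abc123",
                  file_id="file-abc123"
              )
              print(content)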
response: |
{
"file_id": "file-abc123",
"filename": "example.txt",
"attributes": {"key": "value"},
"content": [
{"type": "text", "text": "..."},
...
]
}
/vector_stores/{vector_store_id}/search:
post:
operationId: searchVectorStore
tags:
- Vector stores
summary: Search a vector store for relevant chunks based on a query and file
attributes filter.
parameters:
- in: path
name: vector_store_id
required: true
schema:
type: string
example: vs_abc123
description: The ID of the vector store to search.
requestBody:
required: true
content:
application/json:
schema:
$ref: "#/components/schemas/VectorStoreSearchRequest"
responses:
"200":
description: OK
content:
application/json:
schema:
$ref: "#/components/schemas/VectorStoreSearchResultsPage"
x-oaiMeta:
name: Search vector store
group: vector_stores
returns: A page of search results from the vector store.
examples:
request:
curl: |
curl -X POST \
https://api.openai.com/v1/vector_stores/vs_abc123/search \
-H "Authorization: Bearer $OPENAI_API_KEY" \
-H "Content-Type: application/json" \
-d '{"query": "What is the return policy?", "filters": {...}}'
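            python: |
              # NOTE: illustrative sketch, not an official example; assumes the
              # Python SDK exposes a `vector_stores.search` method. The
              # `filters` object from the curl example is elided here.
              from openai import OpenAI
              client = OpenAI()
              results = client.vector_stores.search(
                  vector_store_id="vs_abc123",
                  query="What is the return policy?"
              )
              print(results)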
response: |
{
"object": "vector_store.search_results.page",
"search_query": "What is the return policy?",
"data": [
{
"file_id": "file_123",
"filename": "document.pdf",
"score": 0.95,
"attributes": {
"author": "John Doe",
"date": "2023-01-01"
},
"content": [
{
"type": "text",
"text": "Relevant chunk"
}
]
},
{
"file_id": "file_456",
"filename": "notes.txt",
"score": 0.89,
"attributes": {
"author": "Jane Smith",
"date": "2023-01-02"
},
"content": [
{
"type": "text",
"text": "Sample text content from the vector store."
}
]
}
],
"has_more": false,
"next_page": null
}
components:
schemas:
AddUploadPartRequest:
type: object
additionalProperties: false
properties:
data:
description: |
The chunk of bytes for this Part.
type: string
format: binary
required:
- data
AdminApiKey:
type: object
description: Represents an individual Admin API key in an org.
properties:
object:
type: string
example: organization.admin_api_key
description: The object type, which is always `organization.admin_api_key`
x-stainless-const: true
id:
type: string
example: key_abc
description: The identifier, which can be referenced in API endpoints
name:
type: string
example: Administration Key
description: The name of the API key
redacted_value:
type: string
example: sk-admin...def
description: The redacted value of the API key
value:
type: string
example: sk-admin-1234abcd
description: The value of the API key. Only shown on create.
created_at:
type: integer
format: int64
example: 1711471533
description: The Unix timestamp (in seconds) of when the API key was created
last_used_at:
type: integer
format: int64
nullable: true
example: 1711471534
description: The Unix timestamp (in seconds) of when the API key was last used
owner:
type: object
properties:
type:
type: string
example: user
description: Always `user`
object:
type: string
example: organization.user
description: The object type, which is always organization.user
id:
type: string
example: sa_456
description: The identifier, which can be referenced in API endpoints
name:
type: string
example: My Service Account
description: The name of the user
created_at:
type: integer
format: int64
example: 1711471533
description: The Unix timestamp (in seconds) of when the user was created
role:
type: string
example: owner
description: Always `owner`
required:
- object
- redacted_value
- name
- created_at
- last_used_at
- id
- owner
x-oaiMeta:
name: The admin API key object
example: |
{
"object": "organization.admin_api_key",
"id": "key_abc",
"name": "Main Admin Key",
"redacted_value": "sk-admin...xyz",
"created_at": 1711471533,
"last_used_at": 1711471534,
"owner": {
"type": "user",
"object": "organization.user",
"id": "user_123",
"name": "John Doe",
"created_at": 1711471533,
"role": "owner"
}
}
ApiKeyList:
type: object
properties:
object:
type: string
example: list
data:
type: array
items:
$ref: "#/components/schemas/AdminApiKey"
has_more:
type: boolean
example: false
first_id:
type: string
example: key_abc
last_id:
type: string
example: key_xyz
AssistantObject:
type: object
title: Assistant
description: Represents an `assistant` that can call the model and use tools.
properties:
id:
description: The identifier, which can be referenced in API endpoints.
type: string
object:
description: The object type, which is always `assistant`.
type: string
enum:
- assistant
x-stainless-const: true
created_at:
description: The Unix timestamp (in seconds) for when the assistant was created.
type: integer
name:
description: |
The name of the assistant. The maximum length is 256 characters.
type: string
maxLength: 256
nullable: true
description:
description: >
The description of the assistant. The maximum length is 512
characters.
type: string
maxLength: 512
nullable: true
model:
description: >
ID of the model to use. You can use the [List
models](/docs/api-reference/models/list) API to see all of your
available models, or see our [Model overview](/docs/models) for
descriptions of them.
type: string
instructions:
description: >
The system instructions that the assistant uses. The maximum length
is 256,000 characters.
type: string
maxLength: 256000
nullable: true
tools:
description: >
            A list of tools enabled on the assistant. There can be a maximum of
128 tools per assistant. Tools can be of types `code_interpreter`,
`file_search`, or `function`.
default: []
type: array
maxItems: 128
items:
oneOf:
- $ref: "#/components/schemas/AssistantToolsCode"
- $ref: "#/components/schemas/AssistantToolsFileSearch"
- $ref: "#/components/schemas/AssistantToolsFunction"
tool_resources:
type: object
description: >
A set of resources that are used by the assistant's tools. The
resources are specific to the type of tool. For example, the
`code_interpreter` tool requires a list of file IDs, while the
`file_search` tool requires a list of vector store IDs.
properties:
code_interpreter:
type: object
properties:
file_ids:
type: array
description: >
A list of [file](/docs/api-reference/files) IDs made
                    available to the `code_interpreter` tool. There can be a
maximum of 20 files associated with the tool.
default: []
maxItems: 20
items:
type: string
file_search:
type: object
properties:
vector_store_ids:
type: array
description: >
The ID of the [vector
store](/docs/api-reference/vector-stores/object) attached to
this assistant. There can be a maximum of 1 vector store
attached to the assistant.
maxItems: 1
items:
type: string
nullable: true
metadata:
$ref: "#/components/schemas/Metadata"
temperature:
description: >
What sampling temperature to use, between 0 and 2. Higher values
like 0.8 will make the output more random, while lower values like
0.2 will make it more focused and deterministic.
type: number
minimum: 0
maximum: 2
default: 1
example: 1
nullable: true
top_p:
type: number
minimum: 0
maximum: 1
default: 1
example: 1
nullable: true
description: >
An alternative to sampling with temperature, called nucleus
sampling, where the model considers the results of the tokens with
top_p probability mass. So 0.1 means only the tokens comprising the
top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
response_format:
$ref: "#/components/schemas/AssistantsApiResponseFormatOption"
nullable: true
required:
- id
- object
- created_at
- name
- description
- model
- instructions
- tools
- metadata
x-oaiMeta:
name: The assistant object
beta: true
example: >
{
"id": "asst_abc123",
"object": "assistant",
"created_at": 1698984975,
"name": "Math Tutor",
"description": null,
"model": "gpt-4o",
"instructions": "You are a personal math tutor. When asked a question, write and run Python code to answer the question.",
"tools": [
{
"type": "code_interpreter"
}
],
"metadata": {},
"top_p": 1.0,
"temperature": 1.0,
"response_format": "auto"
}
AssistantStreamEvent:
description: >
Represents an event emitted when streaming a Run.
Each event in a server-sent events stream has an `event` and `data`
property:
```
event: thread.created
data: {"id": "thread_123", "object": "thread", ...}
```
We emit events whenever a new object is created, transitions to a new
state, or is being
streamed in parts (deltas). For example, we emit `thread.run.created`
when a new run
is created, `thread.run.completed` when a run completes, and so on. When
an Assistant chooses
to create a message during a run, we emit a `thread.message.created
event`, a
`thread.message.in_progress` event, many `thread.message.delta` events,
and finally a
`thread.message.completed` event.
We may add additional events over time, so we recommend handling unknown
events gracefully
in your code. See the [Assistants API
quickstart](/docs/assistants/overview) to learn how to
integrate the Assistants API with streaming.
oneOf:
- $ref: "#/components/schemas/ThreadStreamEvent"
- $ref: "#/components/schemas/RunStreamEvent"
- $ref: "#/components/schemas/RunStepStreamEvent"
- $ref: "#/components/schemas/MessageStreamEvent"
- $ref: "#/components/schemas/ErrorEvent"
- $ref: "#/components/schemas/DoneEvent"
x-oaiMeta:
name: Assistant stream events
beta: true
AssistantSupportedModels:
type: string
enum:
- gpt-4.1
- gpt-4.1-mini
- gpt-4.1-nano
- gpt-4.1-2025-04-14
- gpt-4.1-mini-2025-04-14
- gpt-4.1-nano-2025-04-14
- o3-mini
- o3-mini-2025-01-31
- o1
- o1-2024-12-17
- gpt-4o
- gpt-4o-2024-11-20
- gpt-4o-2024-08-06
- gpt-4o-2024-05-13
- gpt-4o-mini
- gpt-4o-mini-2024-07-18
- gpt-4.5-preview
- gpt-4.5-preview-2025-02-27
- gpt-4-turbo
- gpt-4-turbo-2024-04-09
- gpt-4-0125-preview
- gpt-4-turbo-preview
- gpt-4-1106-preview
- gpt-4-vision-preview
- gpt-4
- gpt-4-0314
- gpt-4-0613
- gpt-4-32k
- gpt-4-32k-0314
- gpt-4-32k-0613
- gpt-3.5-turbo
- gpt-3.5-turbo-16k
- gpt-3.5-turbo-0613
- gpt-3.5-turbo-1106
- gpt-3.5-turbo-0125
- gpt-3.5-turbo-16k-0613
AssistantToolsCode:
type: object
title: Code interpreter tool
properties:
type:
type: string
description: "The type of tool being defined: `code_interpreter`"
enum:
- code_interpreter
x-stainless-const: true
required:
- type
AssistantToolsFileSearch:
type: object
title: FileSearch tool
properties:
type:
type: string
description: "The type of tool being defined: `file_search`"
enum:
- file_search
x-stainless-const: true
file_search:
type: object
description: Overrides for the file search tool.
properties:
max_num_results:
type: integer
minimum: 1
maximum: 50
description: |
The maximum number of results the file search tool should output. The default is 20 for `gpt-4*` models and 5 for `gpt-3.5-turbo`. This number should be between 1 and 50 inclusive.
Note that the file search tool may output fewer than `max_num_results` results. See the [file search tool documentation](/docs/assistants/tools/file-search#customizing-file-search-settings) for more information.
ranking_options:
$ref: "#/components/schemas/FileSearchRankingOptions"
required:
- type
AssistantToolsFileSearchTypeOnly:
type: object
title: FileSearch tool
properties:
type:
type: string
description: "The type of tool being defined: `file_search`"
enum:
- file_search
x-stainless-const: true
required:
- type
AssistantToolsFunction:
type: object
title: Function tool
properties:
type:
type: string
description: "The type of tool being defined: `function`"
enum:
- function
x-stainless-const: true
function:
$ref: "#/components/schemas/FunctionObject"
required:
- type
- function
AssistantsApiResponseFormatOption:
description: >
Specifies the format that the model must output. Compatible with
[GPT-4o](/docs/models#gpt-4o), [GPT-4
Turbo](/docs/models#gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models
since `gpt-3.5-turbo-1106`.
Setting to `{ "type": "json_schema", "json_schema": {...} }` enables
Structured Outputs which ensures the model will match your supplied JSON
schema. Learn more in the [Structured Outputs
guide](/docs/guides/structured-outputs).
Setting to `{ "type": "json_object" }` enables JSON mode, which ensures
the message the model generates is valid JSON.
**Important:** when using JSON mode, you **must** also instruct the
model to produce JSON yourself via a system or user message. Without
this, the model may generate an unending stream of whitespace until the
generation reaches the token limit, resulting in a long-running and
seemingly "stuck" request. Also note that the message content may be
partially cut off if `finish_reason="length"`, which indicates the
generation exceeded `max_tokens` or the conversation exceeded the max
context length.
oneOf:
- type: string
description: |
`auto` is the default value
enum:
- auto
x-stainless-const: true
- $ref: "#/components/schemas/ResponseFormatText"
- $ref: "#/components/schemas/ResponseFormatJsonObject"
- $ref: "#/components/schemas/ResponseFormatJsonSchema"
AssistantsApiToolChoiceOption:
description: >
Controls which (if any) tool is called by the model.
`none` means the model will not call any tools and instead generates a
message.
`auto` is the default value and means the model can pick between
generating a message or calling one or more tools.
`required` means the model must call one or more tools before responding
to the user.
Specifying a particular tool like `{"type": "file_search"}` or `{"type":
"function", "function": {"name": "my_function"}}` forces the model to
call that tool.
oneOf:
- type: string
description: >
`none` means the model will not call any tools and instead generates
a message. `auto` means the model can pick between generating a
message or calling one or more tools. `required` means the model
must call one or more tools before responding to the user.
enum:
- none
- auto
- required
- $ref: "#/components/schemas/AssistantsNamedToolChoice"
AssistantsNamedToolChoice:
type: object
description: Specifies a tool the model should use. Use to force the model to
call a specific tool.
properties:
type:
type: string
enum:
- function
- code_interpreter
- file_search
          description: The type of the tool. If type is `function`, the function
            name must be set.
function:
type: object
properties:
name:
type: string
description: The name of the function to call.
required:
- name
required:
- type
AudioResponseFormat:
description: >
The format of the output, in one of these options: `json`, `text`,
`srt`, `verbose_json`, or `vtt`. For `gpt-4o-transcribe` and
`gpt-4o-mini-transcribe`, the only supported format is `json`.
type: string
enum:
- json
- text
- srt
- verbose_json
- vtt
default: json
AuditLog:
type: object
description: A log of a user action or configuration change within this organization.
properties:
id:
type: string
description: The ID of this log.
type:
$ref: "#/components/schemas/AuditLogEventType"
effective_at:
type: integer
description: The Unix timestamp (in seconds) of the event.
project:
type: object
description: The project that the action was scoped to. Absent for actions not
scoped to projects.
properties:
id:
type: string
description: The project ID.
name:
type: string
description: The project title.
actor:
$ref: "#/components/schemas/AuditLogActor"
api_key.created:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
description: The tracking ID of the API key.
data:
type: object
description: The payload used to create the API key.
properties:
scopes:
type: array
items:
type: string
description: A list of scopes allowed for the API key, e.g.
`["api.model.request"]`
api_key.updated:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
description: The tracking ID of the API key.
changes_requested:
type: object
description: The payload used to update the API key.
properties:
scopes:
type: array
items:
type: string
description: A list of scopes allowed for the API key, e.g.
`["api.model.request"]`
api_key.deleted:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
description: The tracking ID of the API key.
checkpoint_permission.created:
type: object
description: The project and fine-tuned model checkpoint that the checkpoint
permission was created for.
properties:
id:
type: string
description: The ID of the checkpoint permission.
data:
type: object
description: The payload used to create the checkpoint permission.
properties:
project_id:
type: string
description: The ID of the project that the checkpoint permission was created
for.
fine_tuned_model_checkpoint:
type: string
description: The ID of the fine-tuned model checkpoint.
checkpoint_permission.deleted:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
description: The ID of the checkpoint permission.
invite.sent:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
description: The ID of the invite.
data:
type: object
description: The payload used to create the invite.
properties:
email:
type: string
description: The email invited to the organization.
role:
type: string
                  description: The role the invited user will have. Is either
                    `owner` or `member`.
invite.accepted:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
description: The ID of the invite.
invite.deleted:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
description: The ID of the invite.
login.failed:
type: object
description: The details for events with this `type`.
properties:
error_code:
type: string
description: The error code of the failure.
error_message:
type: string
description: The error message of the failure.
logout.failed:
type: object
description: The details for events with this `type`.
properties:
error_code:
type: string
description: The error code of the failure.
error_message:
type: string
description: The error message of the failure.
organization.updated:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
description: The organization ID.
changes_requested:
type: object
description: The payload used to update the organization settings.
properties:
title:
type: string
description: The organization title.
description:
type: string
description: The organization description.
name:
type: string
description: The organization name.
settings:
type: object
properties:
threads_ui_visibility:
type: string
description: Visibility of the threads page which shows messages created with
the Assistants API and Playground. One of `ANY_ROLE`,
`OWNERS`, or `NONE`.
usage_dashboard_visibility:
type: string
description: Visibility of the usage dashboard which shows activity and costs
for your organization. One of `ANY_ROLE` or `OWNERS`.
project.created:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
description: The project ID.
data:
type: object
description: The payload used to create the project.
properties:
name:
type: string
description: The project name.
title:
type: string
description: The title of the project as seen on the dashboard.
project.updated:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
description: The project ID.
changes_requested:
type: object
description: The payload used to update the project.
properties:
title:
type: string
description: The title of the project as seen on the dashboard.
project.archived:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
description: The project ID.
rate_limit.updated:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
              description: The rate limit ID.
changes_requested:
type: object
description: The payload used to update the rate limits.
properties:
max_requests_per_1_minute:
type: integer
description: The maximum requests per minute.
max_tokens_per_1_minute:
type: integer
description: The maximum tokens per minute.
max_images_per_1_minute:
type: integer
description: The maximum images per minute. Only relevant for certain models.
max_audio_megabytes_per_1_minute:
type: integer
description: The maximum audio megabytes per minute. Only relevant for certain
models.
max_requests_per_1_day:
type: integer
description: The maximum requests per day. Only relevant for certain models.
batch_1_day_max_input_tokens:
type: integer
description: The maximum batch input tokens per day. Only relevant for certain
models.
rate_limit.deleted:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
              description: The rate limit ID.
service_account.created:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
description: The service account ID.
data:
type: object
description: The payload used to create the service account.
properties:
role:
type: string
description: The role of the service account. Is either `owner` or `member`.
service_account.updated:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
description: The service account ID.
changes_requested:
type: object
            description: The payload used to update the service account.
properties:
role:
type: string
description: The role of the service account. Is either `owner` or `member`.
service_account.deleted:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
description: The service account ID.
user.added:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
description: The user ID.
data:
type: object
description: The payload used to add the user to the project.
properties:
role:
type: string
description: The role of the user. Is either `owner` or `member`.
user.updated:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
              description: The user ID.
changes_requested:
type: object
description: The payload used to update the user.
properties:
role:
type: string
description: The role of the user. Is either `owner` or `member`.
user.deleted:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
description: The user ID.
certificate.created:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
description: The certificate ID.
name:
type: string
description: The name of the certificate.
certificate.updated:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
description: The certificate ID.
name:
type: string
description: The name of the certificate.
certificate.deleted:
type: object
description: The details for events with this `type`.
properties:
id:
type: string
description: The certificate ID.
name:
type: string
description: The name of the certificate.
certificate:
type: string
description: The certificate content in PEM format.
certificates.activated:
type: object
description: The details for events with this `type`.
properties:
certificates:
type: array
items:
type: object
properties:
id:
type: string
description: The certificate ID.
name:
type: string
description: The name of the certificate.
certificates.deactivated:
type: object
description: The details for events with this `type`.
properties:
certificates:
type: array
items:
type: object
properties:
id:
type: string
description: The certificate ID.
name:
type: string
description: The name of the certificate.
required:
- id
- type
- effective_at
- actor
x-oaiMeta:
name: The audit log object
example: >
{
"id": "req_xxx_20240101",
"type": "api_key.created",
"effective_at": 1720804090,
"actor": {
"type": "session",
"session": {
"user": {
"id": "user-xxx",
"email": "[email protected]"
},
"ip_address": "127.0.0.1",
"user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36"
}
},
"api_key.created": {
"id": "key_xxxx",
"data": {
"scopes": ["resource.operation"]
}
}
}
AuditLogActor:
type: object
description: The actor who performed the audit logged action.
properties:
type:
type: string
description: The type of actor. Is either `session` or `api_key`.
enum:
- session
- api_key
session:
$ref: "#/components/schemas/AuditLogActorSession"
api_key:
$ref: "#/components/schemas/AuditLogActorApiKey"
AuditLogActorApiKey:
type: object
description: The API Key used to perform the audit logged action.
properties:
id:
type: string
          description: The tracking ID of the API key.
type:
type: string
description: The type of API key. Can be either `user` or `service_account`.
enum:
- user
- service_account
user:
$ref: "#/components/schemas/AuditLogActorUser"
service_account:
$ref: "#/components/schemas/AuditLogActorServiceAccount"
AuditLogActorServiceAccount:
type: object
description: The service account that performed the audit logged action.
properties:
id:
type: string
          description: The service account ID.
AuditLogActorSession:
type: object
description: The session in which the audit logged action was performed.
properties:
user:
$ref: "#/components/schemas/AuditLogActorUser"
ip_address:
type: string
description: The IP address from which the action was performed.
AuditLogActorUser:
type: object
description: The user who performed the audit logged action.
properties:
id:
type: string
          description: The user ID.
email:
type: string
description: The user email.
AuditLogEventType:
type: string
description: The event type.
enum:
- api_key.created
- api_key.updated
- api_key.deleted
- checkpoint_permission.created
- checkpoint_permission.deleted
- invite.sent
- invite.accepted
- invite.deleted
- login.succeeded
- login.failed
- logout.succeeded
- logout.failed
- organization.updated
- project.created
- project.updated
- project.archived
- service_account.created
- service_account.updated
- service_account.deleted
- rate_limit.updated
- rate_limit.deleted
- user.added
- user.updated
- user.deleted
AutoChunkingStrategyRequestParam:
type: object
title: Auto Chunking Strategy
description: The default strategy. This strategy currently uses a
`max_chunk_size_tokens` of `800` and `chunk_overlap_tokens` of `400`.
additionalProperties: false
properties:
type:
type: string
description: Always `auto`.
enum:
- auto
x-stainless-const: true
required:
- type
Batch:
type: object
properties:
id:
type: string
object:
type: string
enum:
- batch
description: The object type, which is always `batch`.
x-stainless-const: true
endpoint:
type: string
description: The OpenAI API endpoint used by the batch.
errors:
type: object
properties:
object:
type: string
description: The object type, which is always `list`.
data:
type: array
items:
type: object
properties:
code:
type: string
description: An error code identifying the error type.
message:
type: string
description: A human-readable message providing more details about the error.
param:
type: string
description: The name of the parameter that caused the error, if applicable.
nullable: true
line:
type: integer
description: The line number of the input file where the error occurred, if
applicable.
nullable: true
input_file_id:
type: string
description: The ID of the input file for the batch.
completion_window:
type: string
description: The time frame within which the batch should be processed.
status:
type: string
description: The current status of the batch.
enum:
- validating
- failed
- in_progress
- finalizing
- completed
- expired
- cancelling
- cancelled
output_file_id:
type: string
description: The ID of the file containing the outputs of successfully executed
requests.
error_file_id:
type: string
description: The ID of the file containing the outputs of requests with errors.
created_at:
type: integer
description: The Unix timestamp (in seconds) for when the batch was created.
in_progress_at:
type: integer
description: The Unix timestamp (in seconds) for when the batch started
processing.
expires_at:
type: integer
description: The Unix timestamp (in seconds) for when the batch will expire.
finalizing_at:
type: integer
description: The Unix timestamp (in seconds) for when the batch started
finalizing.
completed_at:
type: integer
description: The Unix timestamp (in seconds) for when the batch was completed.
failed_at:
type: integer
description: The Unix timestamp (in seconds) for when the batch failed.
expired_at:
type: integer
description: The Unix timestamp (in seconds) for when the batch expired.
cancelling_at:
type: integer
description: The Unix timestamp (in seconds) for when the batch started
cancelling.
cancelled_at:
type: integer
description: The Unix timestamp (in seconds) for when the batch was cancelled.
request_counts:
type: object
properties:
total:
type: integer
description: Total number of requests in the batch.
completed:
type: integer
description: Number of requests that have been completed successfully.
failed:
type: integer
description: Number of requests that have failed.
required:
- total
- completed
- failed
description: The request counts for different statuses within the batch.
metadata:
$ref: "#/components/schemas/Metadata"
required:
- id
- object
- endpoint
- input_file_id
- completion_window
- status
- created_at
x-oaiMeta:
name: The batch object
example: |
{
"id": "batch_abc123",
"object": "batch",
"endpoint": "/v1/completions",
"errors": null,
"input_file_id": "file-abc123",
"completion_window": "24h",
"status": "completed",
"output_file_id": "file-cvaTdG",
"error_file_id": "file-HOWS94",
"created_at": 1711471533,
"in_progress_at": 1711471538,
"expires_at": 1711557933,
"finalizing_at": 1711493133,
"completed_at": 1711493163,
"failed_at": null,
"expired_at": null,
"cancelling_at": null,
"cancelled_at": null,
"request_counts": {
"total": 100,
"completed": 95,
"failed": 5
},
"metadata": {
"customer_id": "user_123456789",
"batch_description": "Nightly eval job",
}
}
BatchRequestInput:
type: object
description: The per-line object of the batch input file
properties:
custom_id:
type: string
description: A developer-provided per-request id that will be used to match
outputs to inputs. Must be unique for each request in a batch.
method:
type: string
enum:
- POST
description: The HTTP method to be used for the request. Currently only `POST`
is supported.
x-stainless-const: true
url:
type: string
description: The OpenAI API relative URL to be used for the request. Currently
`/v1/chat/completions`, `/v1/embeddings`, and `/v1/completions` are
supported.
x-oaiMeta:
name: The request input object
example: >
{"custom_id": "request-1", "method": "POST", "url":
"/v1/chat/completions", "body": {"model": "gpt-4o-mini", "messages":
[{"role": "system", "content": "You are a helpful assistant."},
{"role": "user", "content": "What is 2+2?"}]}}
BatchRequestOutput:
type: object
description: The per-line object of the batch output and error files
properties:
id:
type: string
custom_id:
type: string
description: A developer-provided per-request id that will be used to match
outputs to inputs.
response:
type: object
nullable: true
properties:
            status_code:
              type: integer
              description: The HTTP status code of the response.
            request_id:
              type: string
              description: A unique identifier for the OpenAI API request. Please
                include this request ID when contacting support.
            body:
              type: object
              x-oaiTypeLabel: map
              description: The JSON body of the response.
error:
type: object
nullable: true
description: For requests that failed with a non-HTTP error, this will contain
more information on the cause of the failure.
properties:
code:
type: string
description: A machine-readable error code.
message:
type: string
description: A human-readable error message.
x-oaiMeta:
name: The request output object
example: >
{"id": "batch_req_wnaDys", "custom_id": "request-2", "response":
{"status_code": 200, "request_id": "req_c187b3", "body": {"id":
"chatcmpl-9758Iw", "object": "chat.completion", "created": 1711475054,
"model": "gpt-4o-mini", "choices": [{"index": 0, "message": {"role":
"assistant", "content": "2 + 2 equals 4."}, "finish_reason": "stop"}],
"usage": {"prompt_tokens": 24, "completion_tokens": 15,
"total_tokens": 39}, "system_fingerprint": null}}, "error": null}
Certificate:
type: object
description: Represents an individual `certificate` uploaded to the organization.
properties:
object:
type: string
enum:
- certificate
- organization.certificate
- organization.project.certificate
description: >
The object type.
- If creating, updating, or getting a specific certificate, the
object type is `certificate`.
- If listing, activating, or deactivating certificates for the
organization, the object type is `organization.certificate`.
- If listing, activating, or deactivating certificates for a
project, the object type is `organization.project.certificate`.
x-stainless-const: true
id:
type: string
          description: The identifier, which can be referenced in API endpoints.
name:
type: string
description: The name of the certificate.
created_at:
type: integer
description: The Unix timestamp (in seconds) of when the certificate was uploaded.
certificate_details:
type: object
properties:
valid_at:
type: integer
description: The Unix timestamp (in seconds) of when the certificate becomes
valid.
expires_at:
type: integer
description: The Unix timestamp (in seconds) of when the certificate expires.
content:
type: string
description: The content of the certificate in PEM format.
active:
type: boolean
description: Whether the certificate is currently active at the specified scope.
Not returned when getting details for a specific certificate.
required:
- object
- id
- name
- created_at
- certificate_details
x-oaiMeta:
name: The certificate object
example: >
{
"object": "certificate",
"id": "cert_abc",
"name": "My Certificate",
"created_at": 1234567,
"certificate_details": {
"valid_at": 1234567,
"expires_at": 12345678,
"content": "-----BEGIN CERTIFICATE----- MIIGAjCCA...6znFlOW+ -----END CERTIFICATE-----"
}
}
ChatCompletionDeleted:
type: object
properties:
object:
type: string
description: The type of object being deleted.
enum:
- chat.completion.deleted
x-stainless-const: true
id:
type: string
description: The ID of the chat completion that was deleted.
deleted:
type: boolean
description: Whether the chat completion was deleted.
required:
- object
- id
- deleted
ChatCompletionFunctionCallOption:
type: object
description: >
Specifying a particular function via `{"name": "my_function"}` forces
the model to call that function.
properties:
name:
type: string
description: The name of the function to call.
required:
- name
ChatCompletionFunctions:
type: object
deprecated: true
properties:
description:
type: string
description: A description of what the function does, used by the model to
choose when and how to call the function.
name:
type: string
description: The name of the function to be called. Must be a-z, A-Z, 0-9, or
contain underscores and dashes, with a maximum length of 64.
parameters:
$ref: "#/components/schemas/FunctionParameters"
required:
- name
ChatCompletionList:
type: object
title: ChatCompletionList
description: |
An object representing a list of Chat Completions.
properties:
object:
type: string
enum:
- list
default: list
description: |
The type of this object. It is always set to "list".
x-stainless-const: true
data:
type: array
description: |
An array of chat completion objects.
items:
$ref: "#/components/schemas/CreateChatCompletionResponse"
first_id:
type: string
description: The identifier of the first chat completion in the data array.
last_id:
type: string
description: The identifier of the last chat completion in the data array.
has_more:
type: boolean
description: Indicates whether there are more Chat Completions available.
required:
- object
- data
- first_id
- last_id
- has_more
x-oaiMeta:
name: The chat completion list object
group: chat
example: >
{
"object": "list",
"data": [
{
"object": "chat.completion",
"id": "chatcmpl-AyPNinnUqUDYo9SAdA52NobMflmj2",
"model": "gpt-4o-2024-08-06",
"created": 1738960610,
"request_id": "req_ded8ab984ec4bf840f37566c1011c417",
"tool_choice": null,
"usage": {
"total_tokens": 31,
"completion_tokens": 18,
"prompt_tokens": 13
},
"seed": 4944116822809979520,
"top_p": 1.0,
"temperature": 1.0,
"presence_penalty": 0.0,
"frequency_penalty": 0.0,
"system_fingerprint": "fp_50cad350e4",
"input_user": null,
"service_tier": "default",
"tools": null,
"metadata": {},
"choices": [
{
"index": 0,
"message": {
"content": "Mind of circuits hum, \nLearning patterns in silence— \nFuture's quiet spark.",
"role": "assistant",
"tool_calls": null,
"function_call": null
},
"finish_reason": "stop",
"logprobs": null
}
],
"response_format": null
}
],
"first_id": "chatcmpl-AyPNinnUqUDYo9SAdA52NobMflmj2",
"last_id": "chatcmpl-AyPNinnUqUDYo9SAdA52NobMflmj2",
"has_more": false
}
ChatCompletionMessageList:
type: object
title: ChatCompletionMessageList
description: |
An object representing a list of chat completion messages.
properties:
object:
type: string
enum:
- list
default: list
description: |
The type of this object. It is always set to "list".
x-stainless-const: true
data:
type: array
description: |
An array of chat completion message objects.
items:
allOf:
- $ref: "#/components/schemas/ChatCompletionResponseMessage"
- type: object
required:
- id
properties:
id:
type: string
description: The identifier of the chat message.
first_id:
type: string
description: The identifier of the first chat message in the data array.
last_id:
type: string
description: The identifier of the last chat message in the data array.
has_more:
type: boolean
description: Indicates whether there are more chat messages available.
required:
- object
- data
- first_id
- last_id
- has_more
x-oaiMeta:
name: The chat completion message list object
group: chat
example: |
{
"object": "list",
"data": [
{
"id": "chatcmpl-AyPNinnUqUDYo9SAdA52NobMflmj2-0",
"role": "user",
"content": "write a haiku about ai",
"name": null,
"content_parts": null
}
],
"first_id": "chatcmpl-AyPNinnUqUDYo9SAdA52NobMflmj2-0",
"last_id": "chatcmpl-AyPNinnUqUDYo9SAdA52NobMflmj2-0",
"has_more": false
}
ChatCompletionMessageToolCall:
type: object
properties:
id:
type: string
description: The ID of the tool call.
type:
type: string
enum:
- function
description: The type of the tool. Currently, only `function` is supported.
x-stainless-const: true
function:
type: object
description: The function that the model called.
properties:
name:
type: string
description: The name of the function to call.
arguments:
type: string
description: The arguments to call the function with, as generated by the model
in JSON format. Note that the model does not always generate
valid JSON, and may hallucinate parameters not defined by your
function schema. Validate the arguments in your code before
calling your function.
required:
- name
- arguments
required:
- id
- type
- function
ChatCompletionMessageToolCallChunk:
type: object
properties:
index:
type: integer
id:
type: string
description: The ID of the tool call.
type:
type: string
enum:
- function
description: The type of the tool. Currently, only `function` is supported.
x-stainless-const: true
function:
type: object
properties:
name:
type: string
description: The name of the function to call.
arguments:
type: string
description: The arguments to call the function with, as generated by the model
in JSON format. Note that the model does not always generate
valid JSON, and may hallucinate parameters not defined by your
function schema. Validate the arguments in your code before
calling your function.
required:
- index
ChatCompletionMessageToolCalls:
type: array
description: The tool calls generated by the model, such as function calls.
items:
$ref: "#/components/schemas/ChatCompletionMessageToolCall"
ChatCompletionModalities:
type: array
nullable: true
description: >
Output types that you would like the model to generate for this request.
Most models are capable of generating text, which is the default:
`["text"]`
The `gpt-4o-audio-preview` model can also be used to [generate
audio](/docs/guides/audio). To
request that this model generate both text and audio responses, you can
use:
`["text", "audio"]`
items:
type: string
enum:
- text
- audio
ChatCompletionNamedToolChoice:
type: object
description: Specifies a tool the model should use. Use to force the model to
call a specific function.
properties:
type:
type: string
enum:
- function
description: The type of the tool. Currently, only `function` is supported.
x-stainless-const: true
function:
type: object
properties:
name:
type: string
description: The name of the function to call.
required:
- name
required:
- type
- function
ChatCompletionRequestAssistantMessage:
type: object
title: Assistant message
description: |
Messages sent by the model in response to user messages.
properties:
content:
nullable: true
oneOf:
- type: string
description: The contents of the assistant message.
title: Text content
- type: array
description: An array of content parts with a defined type. Can be one or more
of type `text`, or exactly one of type `refusal`.
title: Array of content parts
items:
$ref: "#/components/schemas/ChatCompletionRequestAssistantMessageContentPart"
minItems: 1
description: >
The contents of the assistant message. Required unless `tool_calls`
or `function_call` is specified.
refusal:
nullable: true
type: string
description: The refusal message by the assistant.
role:
type: string
enum:
- assistant
description: The role of the messages author, in this case `assistant`.
x-stainless-const: true
name:
type: string
description: An optional name for the participant. Provides the model
information to differentiate between participants of the same role.
audio:
type: object
nullable: true
description: |
Data about a previous audio response from the model.
[Learn more](/docs/guides/audio).
required:
- id
properties:
id:
type: string
description: |
Unique identifier for a previous audio response from the model.
tool_calls:
$ref: "#/components/schemas/ChatCompletionMessageToolCalls"
function_call:
type: object
deprecated: true
description: Deprecated and replaced by `tool_calls`. The name and arguments of
a function that should be called, as generated by the model.
nullable: true
properties:
arguments:
type: string
description: The arguments to call the function with, as generated by the model
in JSON format. Note that the model does not always generate
valid JSON, and may hallucinate parameters not defined by your
function schema. Validate the arguments in your code before
calling your function.
name:
type: string
description: The name of the function to call.
required:
- arguments
- name
required:
- role
ChatCompletionRequestAssistantMessageContentPart:
oneOf:
- $ref: "#/components/schemas/ChatCompletionRequestMessageContentPartText"
- $ref: "#/components/schemas/ChatCompletionRequestMessageContentPartRefusal"
ChatCompletionRequestDeveloperMessage:
type: object
title: Developer message
description: >
Developer-provided instructions that the model should follow, regardless
of
messages sent by the user. With o1 models and newer, `developer`
messages
replace the previous `system` messages.
properties:
content:
description: The contents of the developer message.
oneOf:
- type: string
description: The contents of the developer message.
title: Text content
- type: array
description: An array of content parts with a defined type. For developer
messages, only type `text` is supported.
title: Array of content parts
items:
$ref: "#/components/schemas/ChatCompletionRequestMessageContentPartText"
minItems: 1
role:
type: string
enum:
- developer
description: The role of the messages author, in this case `developer`.
x-stainless-const: true
name:
type: string
description: An optional name for the participant. Provides the model
information to differentiate between participants of the same role.
required:
- content
- role
ChatCompletionRequestFunctionMessage:
type: object
title: Function message
deprecated: true
properties:
role:
type: string
enum:
- function
description: The role of the messages author, in this case `function`.
x-stainless-const: true
content:
nullable: true
type: string
description: The contents of the function message.
name:
type: string
description: The name of the function to call.
required:
- role
- content
- name
ChatCompletionRequestMessage:
oneOf:
- $ref: "#/components/schemas/ChatCompletionRequestDeveloperMessage"
- $ref: "#/components/schemas/ChatCompletionRequestSystemMessage"
- $ref: "#/components/schemas/ChatCompletionRequestUserMessage"
- $ref: "#/components/schemas/ChatCompletionRequestAssistantMessage"
- $ref: "#/components/schemas/ChatCompletionRequestToolMessage"
- $ref: "#/components/schemas/ChatCompletionRequestFunctionMessage"
ChatCompletionRequestMessageContentPartAudio:
type: object
title: Audio content part
description: |
Learn about [audio inputs](/docs/guides/audio).
properties:
type:
type: string
enum:
- input_audio
description: The type of the content part. Always `input_audio`.
x-stainless-const: true
input_audio:
type: object
properties:
data:
type: string
description: Base64 encoded audio data.
format:
type: string
enum:
- wav
- mp3
description: >
The format of the encoded audio data. Currently supports "wav"
and "mp3".
required:
- data
- format
required:
- type
- input_audio
ChatCompletionRequestMessageContentPartFile:
type: object
title: File content part
description: |
Learn about [file inputs](/docs/guides/text) for text generation.
properties:
type:
type: string
enum:
- file
description: The type of the content part. Always `file`.
x-stainless-const: true
file:
type: object
properties:
filename:
type: string
description: >
The name of the file, used when passing the file to the model as
a
string.
file_data:
type: string
description: >
The base64 encoded file data, used when passing the file to the
model
as a string.
file_id:
type: string
description: |
The ID of an uploaded file to use as input.
required:
- type
- file
ChatCompletionRequestMessageContentPartImage:
type: object
title: Image content part
description: |
Learn about [image inputs](/docs/guides/vision).
properties:
type:
type: string
enum:
- image_url
          description: The type of the content part. Always `image_url`.
x-stainless-const: true
image_url:
type: object
properties:
url:
type: string
description: Either a URL of the image or the base64 encoded image data.
format: uri
detail:
type: string
description: Specifies the detail level of the image. Learn more in the [Vision
guide](/docs/guides/vision#low-or-high-fidelity-image-understanding).
enum:
- auto
- low
- high
required:
- url
required:
- type
- image_url
ChatCompletionRequestMessageContentPartRefusal:
type: object
title: Refusal content part
properties:
type:
type: string
enum:
- refusal
          description: The type of the content part. Always `refusal`.
x-stainless-const: true
refusal:
type: string
description: The refusal message generated by the model.
required:
- type
- refusal
ChatCompletionRequestMessageContentPartText:
type: object
title: Text content part
description: |
Learn about [text inputs](/docs/guides/text-generation).
properties:
type:
type: string
enum:
- text
          description: The type of the content part. Always `text`.
x-stainless-const: true
text:
type: string
description: The text content.
required:
- type
- text
ChatCompletionRequestSystemMessage:
type: object
title: System message
      description: >
        Developer-provided instructions that the model should follow,
        regardless of messages sent by the user. With o1 models and newer,
        use `developer` messages for this purpose instead.
properties:
content:
description: The contents of the system message.
oneOf:
- type: string
description: The contents of the system message.
title: Text content
- type: array
description: An array of content parts with a defined type. For system messages,
only type `text` is supported.
title: Array of content parts
items:
$ref: "#/components/schemas/ChatCompletionRequestSystemMessageContentPart"
minItems: 1
role:
type: string
enum:
- system
          description: The role of the message author, in this case `system`.
x-stainless-const: true
name:
type: string
description: An optional name for the participant. Provides the model
information to differentiate between participants of the same role.
required:
- content
- role
ChatCompletionRequestSystemMessageContentPart:
oneOf:
- $ref: "#/components/schemas/ChatCompletionRequestMessageContentPartText"
ChatCompletionRequestToolMessage:
type: object
title: Tool message
properties:
role:
type: string
enum:
- tool
          description: The role of the message author, in this case `tool`.
x-stainless-const: true
content:
oneOf:
- type: string
description: The contents of the tool message.
title: Text content
- type: array
description: An array of content parts with a defined type. For tool messages,
only type `text` is supported.
title: Array of content parts
items:
$ref: "#/components/schemas/ChatCompletionRequestToolMessageContentPart"
minItems: 1
description: The contents of the tool message.
tool_call_id:
type: string
description: Tool call that this message is responding to.
required:
- role
- content
- tool_call_id
ChatCompletionRequestToolMessageContentPart:
oneOf:
- $ref: "#/components/schemas/ChatCompletionRequestMessageContentPartText"
ChatCompletionRequestUserMessage:
type: object
title: User message
description: |
Messages sent by an end user, containing prompts or additional context
information.
properties:
content:
description: |
The contents of the user message.
oneOf:
- type: string
description: The text contents of the message.
title: Text content
- type: array
description: An array of content parts with a defined type. Supported options
differ based on the [model](/docs/models) being used to generate
the response. Can contain text, image, or audio inputs.
title: Array of content parts
items:
$ref: "#/components/schemas/ChatCompletionRequestUserMessageContentPart"
minItems: 1
role:
type: string
enum:
- user
          description: The role of the message author, in this case `user`.
x-stainless-const: true
name:
type: string
description: An optional name for the participant. Provides the model
information to differentiate between participants of the same role.
required:
- content
- role
ChatCompletionRequestUserMessageContentPart:
oneOf:
- $ref: "#/components/schemas/ChatCompletionRequestMessageContentPartText"
- $ref: "#/components/schemas/ChatCompletionRequestMessageContentPartImage"
- $ref: "#/components/schemas/ChatCompletionRequestMessageContentPartAudio"
- $ref: "#/components/schemas/ChatCompletionRequestMessageContentPartFile"
ChatCompletionResponseMessage:
type: object
description: A chat completion message generated by the model.
properties:
content:
type: string
description: The contents of the message.
nullable: true
refusal:
type: string
description: The refusal message generated by the model.
nullable: true
tool_calls:
$ref: "#/components/schemas/ChatCompletionMessageToolCalls"
annotations:
type: array
description: |
Annotations for the message, when applicable, as when using the
[web search tool](/docs/guides/tools-web-search?api-mode=chat).
items:
type: object
description: |
A URL citation when using web search.
required:
- type
- url_citation
properties:
type:
type: string
description: The type of the URL citation. Always `url_citation`.
enum:
- url_citation
x-stainless-const: true
url_citation:
type: object
description: A URL citation when using web search.
required:
- end_index
- start_index
- url
- title
properties:
end_index:
type: integer
description: The index of the last character of the URL citation in the message.
start_index:
type: integer
description: The index of the first character of the URL citation in the
message.
url:
type: string
description: The URL of the web resource.
title:
type: string
description: The title of the web resource.
role:
type: string
enum:
- assistant
description: The role of the author of this message.
x-stainless-const: true
function_call:
type: object
deprecated: true
description: Deprecated and replaced by `tool_calls`. The name and arguments of
a function that should be called, as generated by the model.
properties:
arguments:
type: string
description: The arguments to call the function with, as generated by the model
in JSON format. Note that the model does not always generate
valid JSON, and may hallucinate parameters not defined by your
function schema. Validate the arguments in your code before
calling your function.
name:
type: string
description: The name of the function to call.
required:
- name
- arguments
audio:
type: object
nullable: true
description: >
If the audio output modality is requested, this object contains data
about the audio response from the model. [Learn
more](/docs/guides/audio).
required:
- id
- expires_at
- data
- transcript
properties:
id:
type: string
description: Unique identifier for this audio response.
expires_at:
type: integer
              description: >
                The Unix timestamp (in seconds) for when this audio response
                will no longer be accessible on the server for use in
                multi-turn conversations.
data:
type: string
description: |
Base64 encoded audio bytes generated by the model, in the format
specified in the request.
transcript:
type: string
description: Transcript of the audio generated by the model.
required:
- role
- content
- refusal
ChatCompletionRole:
type: string
      description: The role of the author of a message.
enum:
- developer
- system
- user
- assistant
- tool
- function
ChatCompletionStreamOptions:
      description: >
        Options for streaming response. Only set this when you set
        `stream: true`.
type: object
nullable: true
default: null
properties:
include_usage:
type: boolean
          description: >
            If set, an additional chunk will be streamed before the
            `data: [DONE]` message. The `usage` field on this chunk shows the
            token usage statistics for the entire request, and the `choices`
            field will always be an empty array. All other chunks will also
            include a `usage` field, but with a null value. **NOTE:** If the
            stream is interrupted, you may not receive the final usage chunk
            which contains the total token usage for the request.
ChatCompletionStreamResponseDelta:
type: object
description: A chat completion delta generated by streamed model responses.
properties:
content:
type: string
description: The contents of the chunk message.
nullable: true
function_call:
deprecated: true
type: object
description: Deprecated and replaced by `tool_calls`. The name and arguments of
a function that should be called, as generated by the model.
properties:
arguments:
type: string
description: The arguments to call the function with, as generated by the model
in JSON format. Note that the model does not always generate
valid JSON, and may hallucinate parameters not defined by your
function schema. Validate the arguments in your code before
calling your function.
name:
type: string
description: The name of the function to call.
tool_calls:
type: array
items:
$ref: "#/components/schemas/ChatCompletionMessageToolCallChunk"
role:
type: string
enum:
- developer
- system
- user
- assistant
- tool
description: The role of the author of this message.
refusal:
type: string
description: The refusal message generated by the model.
nullable: true
ChatCompletionTokenLogprob:
type: object
properties:
token:
description: The token.
type: string
logprob:
description: The log probability of this token, if it is within the top 20 most
likely tokens. Otherwise, the value `-9999.0` is used to signify
that the token is very unlikely.
type: number
bytes:
description: A list of integers representing the UTF-8 bytes representation of
the token. Useful in instances where characters are represented by
multiple tokens and their byte representations must be combined to
generate the correct text representation. Can be `null` if there is
no bytes representation for the token.
type: array
items:
type: integer
nullable: true
top_logprobs:
description: List of the most likely tokens and their log probability, at this
token position. In rare cases, there may be fewer than the number of
requested `top_logprobs` returned.
type: array
items:
type: object
properties:
token:
description: The token.
type: string
logprob:
description: The log probability of this token, if it is within the top 20 most
likely tokens. Otherwise, the value `-9999.0` is used to
signify that the token is very unlikely.
type: number
bytes:
description: A list of integers representing the UTF-8 bytes representation of
the token. Useful in instances where characters are
represented by multiple tokens and their byte representations
must be combined to generate the correct text representation.
Can be `null` if there is no bytes representation for the
token.
type: array
items:
type: integer
nullable: true
required:
- token
- logprob
- bytes
required:
- token
- logprob
- bytes
- top_logprobs
ChatCompletionTool:
type: object
properties:
type:
type: string
enum:
- function
description: The type of the tool. Currently, only `function` is supported.
x-stainless-const: true
function:
$ref: "#/components/schemas/FunctionObject"
required:
- type
- function
ChatCompletionToolChoiceOption:
description: >
Controls which (if any) tool is called by the model.
`none` means the model will not call any tool and instead generates a
message.
`auto` means the model can pick between generating a message or calling
one or more tools.
`required` means the model must call one or more tools.
Specifying a particular tool via `{"type": "function", "function":
{"name": "my_function"}}` forces the model to call that tool.
`none` is the default when no tools are present. `auto` is the default
if tools are present.
oneOf:
- type: string
description: >
`none` means the model will not call any tool and instead generates
a message. `auto` means the model can pick between generating a
message or calling one or more tools. `required` means the model
must call one or more tools.
enum:
- none
- auto
- required
- $ref: "#/components/schemas/ChatCompletionNamedToolChoice"
ChunkingStrategyRequestParam:
type: object
      description: The chunking strategy used to chunk the file(s). If not
        set, the `auto` strategy is used.
oneOf:
- $ref: "#/components/schemas/AutoChunkingStrategyRequestParam"
- $ref: "#/components/schemas/StaticChunkingStrategyRequestParam"
Click:
type: object
title: Click
description: |
A click action.
properties:
type:
type: string
enum:
- click
default: click
description: |
Specifies the event type. For a click action, this property is
always set to `click`.
x-stainless-const: true
button:
type: string
enum:
- left
- right
- wheel
- back
- forward
description: >
Indicates which mouse button was pressed during the click. One of
`left`, `right`, `wheel`, `back`, or `forward`.
x:
type: integer
description: |
The x-coordinate where the click occurred.
y:
type: integer
description: |
The y-coordinate where the click occurred.
required:
- type
- button
- x
- y
CodeInterpreterFileOutput:
type: object
title: Code interpreter file output
description: |
The output of a code interpreter tool call that is a file.
properties:
type:
type: string
enum:
- files
description: |
The type of the code interpreter file output. Always `files`.
x-stainless-const: true
files:
type: array
items:
type: object
properties:
mime_type:
type: string
description: |
The MIME type of the file.
file_id:
type: string
description: |
The ID of the file.
required:
- mime_type
- file_id
required:
- type
- files
CodeInterpreterTextOutput:
type: object
title: Code interpreter text output
description: |
The output of a code interpreter tool call that is text.
properties:
type:
type: string
enum:
- logs
description: |
The type of the code interpreter text output. Always `logs`.
x-stainless-const: true
logs:
type: string
description: |
The logs of the code interpreter tool call.
required:
- type
- logs
CodeInterpreterToolCall:
type: object
title: Code interpreter tool call
description: |
A tool call to run code.
properties:
id:
type: string
description: |
The unique ID of the code interpreter tool call.
type:
type: string
enum:
- code_interpreter_call
description: >
The type of the code interpreter tool call. Always
`code_interpreter_call`.
x-stainless-const: true
code:
type: string
description: |
The code to run.
status:
type: string
enum:
- in_progress
- interpreting
- completed
description: |
The status of the code interpreter tool call.
results:
type: array
items:
$ref: "#/components/schemas/CodeInterpreterToolOutput"
description: |
The results of the code interpreter tool call.
required:
- id
- type
- code
- status
- results
CodeInterpreterToolOutput:
oneOf:
- $ref: "#/components/schemas/CodeInterpreterTextOutput"
- $ref: "#/components/schemas/CodeInterpreterFileOutput"
ComparisonFilter:
type: object
additionalProperties: false
title: Comparison Filter
description: >
A filter used to compare a specified attribute key to a given value
using a defined comparison operation.
properties:
type:
type: string
default: eq
enum:
- eq
- ne
- gt
- gte
- lt
- lte
          description: >
            Specifies the comparison operator: `eq`, `ne`, `gt`, `gte`, `lt`, `lte`.

            - `eq`: equals

            - `ne`: not equal

            - `gt`: greater than

            - `gte`: greater than or equal

            - `lt`: less than

            - `lte`: less than or equal
key:
type: string
description: The key to compare against the value.
value:
oneOf:
- type: string
- type: number
- type: boolean
description: The value to compare against the attribute key; supports string,
number, or boolean types.
required:
- type
- key
- value
x-oaiMeta:
name: ComparisonFilter
CompleteUploadRequest:
type: object
additionalProperties: false
properties:
part_ids:
type: array
description: |
The ordered list of Part IDs.
items:
type: string
md5:
description: >
The optional md5 checksum for the file contents to verify if the
bytes uploaded matches what you expect.
type: string
required:
- part_ids
CompletionUsage:
type: object
description: Usage statistics for the completion request.
properties:
completion_tokens:
type: integer
default: 0
description: Number of tokens in the generated completion.
prompt_tokens:
type: integer
default: 0
description: Number of tokens in the prompt.
total_tokens:
type: integer
default: 0
description: Total number of tokens used in the request (prompt + completion).
completion_tokens_details:
type: object
description: Breakdown of tokens used in a completion.
properties:
accepted_prediction_tokens:
type: integer
default: 0
description: |
When using Predicted Outputs, the number of tokens in the
prediction that appeared in the completion.
audio_tokens:
type: integer
default: 0
              description: Audio output tokens generated by the model.
reasoning_tokens:
type: integer
default: 0
description: Tokens generated by the model for reasoning.
rejected_prediction_tokens:
type: integer
default: 0
description: >
When using Predicted Outputs, the number of tokens in the
prediction that did not appear in the completion. However, like
reasoning tokens, these tokens are still counted in the total
completion tokens for purposes of billing, output, and context
window
limits.
prompt_tokens_details:
type: object
description: Breakdown of tokens used in the prompt.
properties:
audio_tokens:
type: integer
default: 0
description: Audio input tokens present in the prompt.
cached_tokens:
type: integer
default: 0
description: Cached tokens present in the prompt.
required:
- prompt_tokens
- completion_tokens
- total_tokens
CompoundFilter:
$recursiveAnchor: true
type: object
additionalProperties: false
title: Compound Filter
description: Combine multiple filters using `and` or `or`.
properties:
type:
type: string
description: "Type of operation: `and` or `or`."
enum:
- and
- or
filters:
type: array
description: Array of filters to combine. Items can be `ComparisonFilter` or
`CompoundFilter`.
items:
oneOf:
- $ref: "#/components/schemas/ComparisonFilter"
- $recursiveRef: "#"
required:
- type
- filters
x-oaiMeta:
name: CompoundFilter
ComputerAction:
oneOf:
- $ref: "#/components/schemas/Click"
- $ref: "#/components/schemas/DoubleClick"
- $ref: "#/components/schemas/Drag"
- $ref: "#/components/schemas/KeyPress"
- $ref: "#/components/schemas/Move"
- $ref: "#/components/schemas/Screenshot"
- $ref: "#/components/schemas/Scroll"
- $ref: "#/components/schemas/ActionType"
- $ref: "#/components/schemas/Wait"
ComputerScreenshotImage:
type: object
description: |
A computer screenshot image used with the computer use tool.
properties:
type:
type: string
enum:
- computer_screenshot
default: computer_screenshot
          description: >
            Specifies the event type. For a computer screenshot, this
            property is always set to `computer_screenshot`.
x-stainless-const: true
image_url:
type: string
description: The URL of the screenshot image.
file_id:
type: string
description: The identifier of an uploaded file that contains the screenshot.
required:
- type
ComputerToolCall:
type: object
title: Computer tool call
description: >
A tool call to a computer use tool. See the
[computer use guide](/docs/guides/tools-computer-use) for more
information.
properties:
type:
type: string
description: The type of the computer call. Always `computer_call`.
enum:
- computer_call
default: computer_call
id:
type: string
description: The unique ID of the computer call.
call_id:
type: string
description: |
An identifier used when responding to the tool call with output.
action:
$ref: "#/components/schemas/ComputerAction"
pending_safety_checks:
type: array
items:
$ref: "#/components/schemas/ComputerToolCallSafetyCheck"
description: |
The pending safety checks for the computer call.
status:
type: string
description: |
The status of the item. One of `in_progress`, `completed`, or
`incomplete`. Populated when items are returned via API.
enum:
- in_progress
- completed
- incomplete
required:
- type
- id
- action
- call_id
- pending_safety_checks
- status
ComputerToolCallOutput:
type: object
title: Computer tool call output
description: |
The output of a computer tool call.
properties:
type:
type: string
description: >
The type of the computer tool call output. Always
`computer_call_output`.
enum:
- computer_call_output
default: computer_call_output
x-stainless-const: true
id:
type: string
description: |
The ID of the computer tool call output.
call_id:
type: string
description: |
The ID of the computer tool call that produced the output.
acknowledged_safety_checks:
type: array
          description: >
            The safety checks reported by the API that have been acknowledged
            by the developer.
items:
$ref: "#/components/schemas/ComputerToolCallSafetyCheck"
output:
$ref: "#/components/schemas/ComputerScreenshotImage"
status:
type: string
          description: >
            The status of the message input. One of `in_progress`,
            `completed`, or `incomplete`. Populated when input items are
            returned via API.
enum:
- in_progress
- completed
- incomplete
required:
- type
- call_id
- output
ComputerToolCallOutputResource:
allOf:
- $ref: "#/components/schemas/ComputerToolCallOutput"
- type: object
properties:
id:
type: string
description: |
The unique ID of the computer call tool output.
required:
- id
ComputerToolCallSafetyCheck:
type: object
description: |
A pending safety check for the computer call.
properties:
id:
type: string
description: The ID of the pending safety check.
code:
type: string
description: The type of the pending safety check.
message:
type: string
description: Details about the pending safety check.
required:
- id
- code
- message
Content:
description: |
Multi-modal input and output contents.
oneOf:
- title: Input content types
$ref: "#/components/schemas/InputContent"
- title: Output content types
$ref: "#/components/schemas/OutputContent"
Coordinate:
type: object
title: Coordinate
description: |
An x/y coordinate pair, e.g. `{ x: 100, y: 200 }`.
properties:
x:
type: integer
description: |
The x-coordinate.
y:
type: integer
description: |
The y-coordinate.
required:
- x
- y
CostsResult:
type: object
description: The aggregated costs details of the specific time bucket.
properties:
object:
type: string
enum:
- organization.costs.result
x-stainless-const: true
amount:
type: object
description: The monetary value in its associated currency.
properties:
value:
type: number
description: The numeric value of the cost.
currency:
type: string
              description: Lowercase ISO-4217 currency code, e.g. "usd".
line_item:
type: string
nullable: true
description: When `group_by=line_item`, this field provides the line item of the
grouped costs result.
project_id:
type: string
nullable: true
description: When `group_by=project_id`, this field provides the project ID of
the grouped costs result.
required:
- object
x-oaiMeta:
name: Costs object
example: |
{
"object": "organization.costs.result",
"amount": {
"value": 0.06,
"currency": "usd"
},
"line_item": "Image models",
"project_id": "proj_abc"
}
CreateAssistantRequest:
type: object
additionalProperties: false
properties:
model:
description: >
ID of the model to use. You can use the [List
models](/docs/api-reference/models/list) API to see all of your
available models, or see our [Model overview](/docs/models) for
descriptions of them.
example: gpt-4o
anyOf:
- type: string
- $ref: "#/components/schemas/AssistantSupportedModels"
x-oaiTypeLabel: string
name:
description: |
The name of the assistant. The maximum length is 256 characters.
type: string
nullable: true
maxLength: 256
description:
description: >
The description of the assistant. The maximum length is 512
characters.
type: string
nullable: true
maxLength: 512
instructions:
description: >
The system instructions that the assistant uses. The maximum length
is 256,000 characters.
type: string
nullable: true
maxLength: 256000
reasoning_effort:
$ref: "#/components/schemas/ReasoningEffort"
tools:
description: >
            A list of tools enabled on the assistant. There can be a maximum
            of 128 tools per assistant. Tools can be of types
            `code_interpreter`, `file_search`, or `function`.
default: []
type: array
maxItems: 128
items:
oneOf:
- $ref: "#/components/schemas/AssistantToolsCode"
- $ref: "#/components/schemas/AssistantToolsFileSearch"
- $ref: "#/components/schemas/AssistantToolsFunction"
tool_resources:
type: object
description: >
A set of resources that are used by the assistant's tools. The
resources are specific to the type of tool. For example, the
`code_interpreter` tool requires a list of file IDs, while the
`file_search` tool requires a list of vector store IDs.
properties:
code_interpreter:
type: object
properties:
file_ids:
type: array
description: >
A list of [file](/docs/api-reference/files) IDs made
available to the `code_interpreter` tool. There can be a
maximum of 20 files associated with the tool.
default: []
maxItems: 20
items:
type: string
file_search:
type: object
properties:
vector_store_ids:
type: array
description: >
The [vector store](/docs/api-reference/vector-stores/object)
attached to this assistant. There can be a maximum of 1
vector store attached to the assistant.
maxItems: 1
items:
type: string
vector_stores:
type: array
description: >
A helper to create a [vector
store](/docs/api-reference/vector-stores/object) with
file_ids and attach it to this assistant. There can be a
maximum of 1 vector store attached to the assistant.
maxItems: 1
items:
type: object
properties:
file_ids:
type: array
description: >
A list of [file](/docs/api-reference/files) IDs to add
to the vector store. There can be a maximum of 10000
files in a vector store.
maxItems: 10000
items:
type: string
chunking_strategy:
type: object
description: The chunking strategy used to chunk the file(s). If not set, will
use the `auto` strategy.
oneOf:
- type: object
title: Auto Chunking Strategy
description: The default strategy. This strategy currently uses a
`max_chunk_size_tokens` of `800` and
`chunk_overlap_tokens` of `400`.
additionalProperties: false
properties:
type:
type: string
description: Always `auto`.
enum:
- auto
x-stainless-const: true
required:
- type
- type: object
title: Static Chunking Strategy
additionalProperties: false
properties:
type:
type: string
description: Always `static`.
enum:
- static
x-stainless-const: true
static:
type: object
additionalProperties: false
properties:
max_chunk_size_tokens:
type: integer
minimum: 100
maximum: 4096
description: The maximum number of tokens in each chunk. The default value is
`800`. The minimum value is `100` and the
maximum value is `4096`.
chunk_overlap_tokens:
type: integer
description: >
The number of tokens that overlap between
chunks. The default value is `400`.
Note that the overlap must not exceed half
of `max_chunk_size_tokens`.
required:
- max_chunk_size_tokens
- chunk_overlap_tokens
required:
- type
- static
metadata:
$ref: "#/components/schemas/Metadata"
oneOf:
- required:
- vector_store_ids
- required:
- vector_stores
nullable: true
metadata:
$ref: "#/components/schemas/Metadata"
temperature:
description: >
What sampling temperature to use, between 0 and 2. Higher values
like 0.8 will make the output more random, while lower values like
0.2 will make it more focused and deterministic.
type: number
minimum: 0
maximum: 2
default: 1
example: 1
nullable: true
top_p:
type: number
minimum: 0
maximum: 1
default: 1
example: 1
nullable: true
description: >
An alternative to sampling with temperature, called nucleus
sampling, where the model considers the results of the tokens with
top_p probability mass. So 0.1 means only the tokens comprising the
top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
response_format:
$ref: "#/components/schemas/AssistantsApiResponseFormatOption"
nullable: true
required:
- model
CreateChatCompletionRequest:
allOf:
- $ref: "#/components/schemas/CreateModelResponseProperties"
- type: object
properties:
messages:
              description: >
                A list of messages comprising the conversation so far.
                Depending on the [model](/docs/models) you use, different
                message types (modalities) are supported, like
                [text](/docs/guides/text-generation),
                [images](/docs/guides/vision), and [audio](/docs/guides/audio).
type: array
minItems: 1
items:
$ref: "#/components/schemas/ChatCompletionRequestMessage"
model:
              description: >
                Model ID used to generate the response, like `gpt-4o` or
                `o3`. OpenAI offers a wide range of models with different
                capabilities, performance characteristics, and price points.
                Refer to the [model guide](/docs/models) to browse and compare
                available models.
$ref: "#/components/schemas/ModelIdsShared"
modalities:
$ref: "#/components/schemas/ResponseModalities"
reasoning_effort:
$ref: "#/components/schemas/ReasoningEffort"
max_completion_tokens:
description: >
An upper bound for the number of tokens that can be generated
for a completion, including visible output tokens and [reasoning
tokens](/docs/guides/reasoning).
type: integer
nullable: true
frequency_penalty:
type: number
default: 0
minimum: -2
maximum: 2
nullable: true
              description: >
                Number between -2.0 and 2.0. Positive values penalize new
                tokens based on their existing frequency in the text so far,
                decreasing the model's likelihood to repeat the same line
                verbatim.
presence_penalty:
type: number
default: 0
minimum: -2
maximum: 2
nullable: true
              description: >
                Number between -2.0 and 2.0. Positive values penalize new
                tokens based on whether they appear in the text so far,
                increasing the model's likelihood to talk about new topics.
web_search_options:
type: object
title: Web search
description: >
This tool searches the web for relevant results to use in a
response.
Learn more about the [web search
tool](/docs/guides/tools-web-search?api-mode=chat).
properties:
user_location:
type: object
nullable: true
required:
- type
- approximate
description: |
Approximate location parameters for the search.
properties:
type:
type: string
description: >
The type of location approximation. Always `approximate`.
enum:
- approximate
x-stainless-const: true
approximate:
$ref: "#/components/schemas/WebSearchLocation"
search_context_size:
$ref: "#/components/schemas/WebSearchContextSize"
top_logprobs:
              description: >
                An integer between 0 and 20 specifying the number of most
                likely tokens to return at each token position, each with an
                associated log probability. `logprobs` must be set to `true`
                if this parameter is used.
type: integer
minimum: 0
maximum: 20
nullable: true
response_format:
              description: >
                An object specifying the format that the model must output.
                Setting to `{ "type": "json_schema", "json_schema": {...} }`
                enables Structured Outputs which ensures the model will match
                your supplied JSON schema. Learn more in the [Structured
                Outputs guide](/docs/guides/structured-outputs).
                Setting to `{ "type": "json_object" }` enables the older JSON
                mode, which ensures the message the model generates is valid
                JSON. Using `json_schema` is preferred for models that
                support it.
oneOf:
- $ref: "#/components/schemas/ResponseFormatText"
- $ref: "#/components/schemas/ResponseFormatJsonSchema"
- $ref: "#/components/schemas/ResponseFormatJsonObject"
audio:
type: object
nullable: true
description: >
Parameters for audio output. Required when audio output is
requested with
`modalities: ["audio"]`. [Learn more](/docs/guides/audio).
required:
- voice
- format
properties:
voice:
$ref: "#/components/schemas/VoiceIdsShared"
description: >
The voice the model uses to respond. Supported voices are
`alloy`, `ash`, `ballad`, `coral`, `echo`, `fable`, `nova`,
`onyx`, `sage`, and `shimmer`.
format:
type: string
enum:
- wav
- aac
- mp3
- flac
- opus
- pcm16
                  description: >
                    Specifies the output audio format. Must be one of `wav`,
                    `aac`, `mp3`, `flac`, `opus`, or `pcm16`.
store:
type: boolean
default: false
nullable: true
description: >
Whether or not to store the output of this chat completion
request for
use in our [model distillation](/docs/guides/distillation) or
[evals](/docs/guides/evals) products.
stream:
description: |
If set to true, the model response data will be streamed to the client
as it is generated using [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format).
            See the [Streaming section below](/docs/api-reference/chat/streaming)
            for more information, along with the [streaming responses](/docs/guides/streaming-responses)
            guide for details on how to handle the streaming events.
type: boolean
nullable: true
default: false
stop:
$ref: "#/components/schemas/StopConfiguration"
logit_bias:
type: object
x-oaiTypeLabel: map
default: null
nullable: true
additionalProperties:
type: integer
description: >
Modify the likelihood of specified tokens appearing in the
completion.
Accepts a JSON object that maps tokens (specified by their token
ID in the
tokenizer) to an associated bias value from -100 to 100.
Mathematically,
the bias is added to the logits generated by the model prior to
sampling.
The exact effect will vary per model, but values between -1 and
1 should
decrease or increase likelihood of selection; values like -100
or 100
should result in a ban or exclusive selection of the relevant
token.
logprobs:
description: >
Whether to return log probabilities of the output tokens or not.
If true,
returns the log probabilities of each output token returned in
the
`content` of `message`.
type: boolean
default: false
nullable: true
max_tokens:
description: >
The maximum number of [tokens](/tokenizer) that can be generated
in the
chat completion. This value can be used to control
[costs](https://openai.com/api/pricing/) for text generated via
API.
This value is now deprecated in favor of
`max_completion_tokens`, and is
not compatible with [o-series models](/docs/guides/reasoning).
type: integer
nullable: true
deprecated: true
n:
type: integer
minimum: 1
maximum: 128
default: 1
example: 1
nullable: true
description: How many chat completion choices to generate for each input
message. Note that you will be charged based on the number of
generated tokens across all of the choices. Keep `n` as `1` to
minimize costs.
prediction:
nullable: true
description: >
Configuration for a [Predicted
Output](/docs/guides/predicted-outputs),
which can greatly improve response times when large parts of the
model
response are known ahead of time. This is most common when you
are
regenerating a file with only minor changes to most of the
content.
oneOf:
- $ref: "#/components/schemas/PredictionContent"
seed:
          type: integer
          format: int64
          minimum: -9223372036854775808
          maximum: 9223372036854775807
          nullable: true
description: >
This feature is in Beta.
If specified, our system will make a best effort to sample
deterministically, such that repeated requests with the same
`seed` and parameters should return the same result.
Determinism is not guaranteed, and you should refer to the
`system_fingerprint` response parameter to monitor changes in
the backend.
x-oaiMeta:
beta: true
stream_options:
$ref: "#/components/schemas/ChatCompletionStreamOptions"
tools:
type: array
description: >
A list of tools the model may call. Currently, only functions
are supported as a tool. Use this to provide a list of functions
the model may generate JSON inputs for. A max of 128 functions
are supported.
items:
$ref: "#/components/schemas/ChatCompletionTool"
tool_choice:
$ref: "#/components/schemas/ChatCompletionToolChoiceOption"
parallel_tool_calls:
$ref: "#/components/schemas/ParallelToolCalls"
function_call:
deprecated: true
description: >
Deprecated in favor of `tool_choice`.
Controls which (if any) function is called by the model.
`none` means the model will not call a function and instead
generates a
message.
`auto` means the model can pick between generating a message or
calling a
function.
Specifying a particular function via `{"name": "my_function"}`
forces the
model to call that function.
`none` is the default when no functions are present. `auto` is
the default
if functions are present.
oneOf:
- type: string
description: >
`none` means the model will not call a function and instead
generates a message. `auto` means the model can pick between
generating a message or calling a function.
enum:
- none
- auto
- $ref: "#/components/schemas/ChatCompletionFunctionCallOption"
functions:
deprecated: true
description: |
Deprecated in favor of `tools`.
A list of functions the model may generate JSON inputs for.
type: array
minItems: 1
maxItems: 128
items:
$ref: "#/components/schemas/ChatCompletionFunctions"
required:
- model
- messages
CreateChatCompletionResponse:
type: object
description: Represents a chat completion response returned by model, based on
the provided input.
properties:
id:
type: string
description: A unique identifier for the chat completion.
choices:
type: array
description: A list of chat completion choices. Can be more than one if `n` is
greater than 1.
items:
type: object
required:
- finish_reason
- index
- message
- logprobs
properties:
finish_reason:
type: string
description: >
The reason the model stopped generating tokens. This will be
`stop` if the model hit a natural stop point or a provided
stop sequence,
`length` if the maximum number of tokens specified in the
request was reached,
`content_filter` if content was omitted due to a flag from our
content filters,
`tool_calls` if the model called a tool, or `function_call`
(deprecated) if the model called a function.
enum:
- stop
- length
- tool_calls
- content_filter
- function_call
index:
type: integer
description: The index of the choice in the list of choices.
message:
$ref: "#/components/schemas/ChatCompletionResponseMessage"
logprobs:
description: Log probability information for the choice.
type: object
nullable: true
properties:
content:
description: A list of message content tokens with log probability information.
type: array
items:
$ref: "#/components/schemas/ChatCompletionTokenLogprob"
nullable: true
refusal:
description: A list of message refusal tokens with log probability information.
type: array
items:
$ref: "#/components/schemas/ChatCompletionTokenLogprob"
nullable: true
required:
- content
- refusal
created:
type: integer
description: The Unix timestamp (in seconds) of when the chat completion was
created.
model:
type: string
description: The model used for the chat completion.
service_tier:
$ref: "#/components/schemas/ServiceTier"
system_fingerprint:
type: string
description: >
This fingerprint represents the backend configuration that the model
runs with.
Can be used in conjunction with the `seed` request parameter to
understand when backend changes have been made that might impact
determinism.
object:
type: string
description: The object type, which is always `chat.completion`.
enum:
- chat.completion
x-stainless-const: true
usage:
$ref: "#/components/schemas/CompletionUsage"
required:
- choices
- created
- id
- model
- object
x-oaiMeta:
name: The chat completion object
group: chat
example: >
{
"id": "chatcmpl-B9MHDbslfkBeAs8l4bebGdFOJ6PeG",
"object": "chat.completion",
"created": 1741570283,
"model": "gpt-4o-2024-08-06",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "The image shows a wooden boardwalk path running through a lush green field or meadow. The sky is bright blue with some scattered clouds, giving the scene a serene and peaceful atmosphere. Trees and shrubs are visible in the background.",
"refusal": null,
"annotations": []
},
"logprobs": null,
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 1117,
"completion_tokens": 46,
"total_tokens": 1163,
"prompt_tokens_details": {
"cached_tokens": 0,
"audio_tokens": 0
},
"completion_tokens_details": {
"reasoning_tokens": 0,
"audio_tokens": 0,
"accepted_prediction_tokens": 0,
"rejected_prediction_tokens": 0
}
},
"service_tier": "default",
"system_fingerprint": "fp_fc9f1d7035"
}
CreateChatCompletionStreamResponse:
type: object
description: |
Represents a streamed chunk of a chat completion response returned
by the model, based on the provided input.
[Learn more](/docs/guides/streaming-responses).
properties:
id:
type: string
description: A unique identifier for the chat completion. Each chunk has the
same ID.
choices:
type: array
          description: >
            A list of chat completion choices. Can contain more than one
            element if `n` is greater than 1. Can also be empty for the
            last chunk if you set `stream_options: {"include_usage": true}`.
items:
type: object
required:
- delta
- finish_reason
- index
properties:
delta:
$ref: "#/components/schemas/ChatCompletionStreamResponseDelta"
logprobs:
description: Log probability information for the choice.
type: object
nullable: true
properties:
content:
description: A list of message content tokens with log probability information.
type: array
items:
$ref: "#/components/schemas/ChatCompletionTokenLogprob"
nullable: true
refusal:
description: A list of message refusal tokens with log probability information.
type: array
items:
$ref: "#/components/schemas/ChatCompletionTokenLogprob"
nullable: true
required:
- content
- refusal
finish_reason:
type: string
description: >
The reason the model stopped generating tokens. This will be
`stop` if the model hit a natural stop point or a provided
stop sequence,
`length` if the maximum number of tokens specified in the
request was reached,
`content_filter` if content was omitted due to a flag from our
content filters,
`tool_calls` if the model called a tool, or `function_call`
(deprecated) if the model called a function.
enum:
- stop
- length
- tool_calls
- content_filter
- function_call
nullable: true
index:
type: integer
description: The index of the choice in the list of choices.
created:
type: integer
description: The Unix timestamp (in seconds) of when the chat completion was
created. Each chunk has the same timestamp.
model:
type: string
          description: The model used to generate the completion.
service_tier:
$ref: "#/components/schemas/ServiceTier"
system_fingerprint:
type: string
description: >
This fingerprint represents the backend configuration that the model
runs with.
Can be used in conjunction with the `seed` request parameter to
understand when backend changes have been made that might impact
determinism.
object:
type: string
description: The object type, which is always `chat.completion.chunk`.
enum:
- chat.completion.chunk
x-stainless-const: true
usage:
$ref: "#/components/schemas/CompletionUsage"
nullable: true
description: >
An optional field that will only be present when you set
`stream_options: {"include_usage": true}` in your request. When
present, it
contains a null value **except for the last chunk** which contains
the
token usage statistics for the entire request.
**NOTE:** If the stream is interrupted or cancelled, you may not
receive the final usage chunk which contains the total token usage
for
the request.
required:
- choices
- created
- id
- model
- object
x-oaiMeta:
name: The chat completion chunk object
group: chat
example: |
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-4o-mini", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"role":"assistant","content":""},"logprobs":null,"finish_reason":null}]}
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-4o-mini", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{"content":"Hello"},"logprobs":null,"finish_reason":null}]}
....
{"id":"chatcmpl-123","object":"chat.completion.chunk","created":1694268190,"model":"gpt-4o-mini", "system_fingerprint": "fp_44709d6fcb", "choices":[{"index":0,"delta":{},"logprobs":null,"finish_reason":"stop"}]}
CreateCompletionRequest:
type: object
properties:
model:
description: >
ID of the model to use. You can use the [List
models](/docs/api-reference/models/list) API to see all of your
available models, or see our [Model overview](/docs/models) for
descriptions of them.
anyOf:
- type: string
- type: string
enum:
- gpt-3.5-turbo-instruct
- davinci-002
- babbage-002
x-oaiTypeLabel: string
prompt:
description: >
The prompt(s) to generate completions for, encoded as a string,
array of strings, array of tokens, or array of token arrays.
            Note that <|endoftext|> is the document separator that the model
sees during training, so if a prompt is not specified the model will
generate as if from the beginning of a new document.
          default: null
nullable: true
oneOf:
- type: string
default: ""
example: This is a test.
- type: array
items:
type: string
default: ""
example: This is a test.
- type: array
minItems: 1
items:
type: integer
example: "[1212, 318, 257, 1332, 13]"
- type: array
minItems: 1
items:
type: array
minItems: 1
items:
type: integer
example: "[[1212, 318, 257, 1332, 13]]"
best_of:
type: integer
default: 1
minimum: 0
maximum: 20
nullable: true
description: >
Generates `best_of` completions server-side and returns the "best"
(the one with the highest log probability per token). Results cannot
be streamed.
When used with `n`, `best_of` controls the number of candidate
completions and `n` specifies how many to return – `best_of` must be
greater than `n`.
**Note:** Because this parameter generates many completions, it can
quickly consume your token quota. Use carefully and ensure that you
have reasonable settings for `max_tokens` and `stop`.
echo:
type: boolean
default: false
nullable: true
description: |
Echo back the prompt in addition to the completion
frequency_penalty:
type: number
default: 0
minimum: -2
maximum: 2
nullable: true
description: >
Number between -2.0 and 2.0. Positive values penalize new tokens
based on their existing frequency in the text so far, decreasing the
model's likelihood to repeat the same line verbatim.
[See more information about frequency and presence
penalties.](/docs/guides/text-generation)
logit_bias:
type: object
x-oaiTypeLabel: map
default: null
nullable: true
additionalProperties:
type: integer
description: >
Modify the likelihood of specified tokens appearing in the
completion.
Accepts a JSON object that maps tokens (specified by their token ID
in the GPT tokenizer) to an associated bias value from -100 to 100.
You can use this [tokenizer tool](/tokenizer?view=bpe) to convert
text to token IDs. Mathematically, the bias is added to the logits
generated by the model prior to sampling. The exact effect will vary
per model, but values between -1 and 1 should decrease or increase
likelihood of selection; values like -100 or 100 should result in a
ban or exclusive selection of the relevant token.
            As an example, you can pass `{"50256": -100}` to prevent the
            <|endoftext|> token from being generated.
logprobs:
type: integer
minimum: 0
maximum: 5
default: null
nullable: true
description: >
            Include the log probabilities on the `logprobs` most likely output
            tokens, as well as the chosen tokens. For example, if `logprobs` is 5,
the API will return a list of the 5 most likely tokens. The API will
always return the `logprob` of the sampled token, so there may be up
to `logprobs+1` elements in the response.
The maximum value for `logprobs` is 5.
max_tokens:
type: integer
minimum: 0
default: 16
example: 16
nullable: true
description: |
The maximum number of [tokens](/tokenizer) that can be generated in the completion.
The token count of your prompt plus `max_tokens` cannot exceed the model's context length. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens.
n:
type: integer
minimum: 1
maximum: 128
default: 1
example: 1
nullable: true
description: >
How many completions to generate for each prompt.
**Note:** Because this parameter generates many completions, it can
quickly consume your token quota. Use carefully and ensure that you
have reasonable settings for `max_tokens` and `stop`.
presence_penalty:
type: number
default: 0
minimum: -2
maximum: 2
nullable: true
description: >
Number between -2.0 and 2.0. Positive values penalize new tokens
based on whether they appear in the text so far, increasing the
model's likelihood to talk about new topics.
[See more information about frequency and presence
penalties.](/docs/guides/text-generation)
seed:
type: integer
format: int64
nullable: true
description: >
If specified, our system will make a best effort to sample
deterministically, such that repeated requests with the same `seed`
and parameters should return the same result.
Determinism is not guaranteed, and you should refer to the
`system_fingerprint` response parameter to monitor changes in the
backend.
stop:
$ref: "#/components/schemas/StopConfiguration"
stream:
description: |
Whether to stream back partial progress. If set, tokens will be sent as data-only [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format) as they become available, with the stream terminated by a `data: [DONE]` message. [Example Python code](https://cookbook.openai.com/examples/how_to_stream_completions).
type: boolean
nullable: true
default: false
stream_options:
$ref: "#/components/schemas/ChatCompletionStreamOptions"
suffix:
description: |
The suffix that comes after a completion of inserted text.
This parameter is only supported for `gpt-3.5-turbo-instruct`.
default: null
nullable: true
type: string
example: test.
temperature:
type: number
minimum: 0
maximum: 2
default: 1
example: 1
nullable: true
description: >
What sampling temperature to use, between 0 and 2. Higher values
like 0.8 will make the output more random, while lower values like
0.2 will make it more focused and deterministic.
We generally recommend altering this or `top_p` but not both.
top_p:
type: number
minimum: 0
maximum: 1
default: 1
example: 1
nullable: true
description: >
An alternative to sampling with temperature, called nucleus
sampling, where the model considers the results of the tokens with
top_p probability mass. So 0.1 means only the tokens comprising the
top 10% probability mass are considered.
We generally recommend altering this or `temperature` but not both.
user:
type: string
example: user-1234
description: >
A unique identifier representing your end-user, which can help
OpenAI to monitor and detect abuse. [Learn
more](/docs/guides/safety-best-practices#end-user-ids).
required:
- model
- prompt
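    # Illustrative only (not part of the upstream spec): assuming the schema
    # above, a minimal CreateCompletionRequest body might look like
    #   {"model": "gpt-3.5-turbo-instruct", "prompt": "Say this is a test", "max_tokens": 16}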
CreateCompletionResponse:
type: object
description: >
Represents a completion response from the API. Note: both the streamed
and non-streamed response objects share the same shape (unlike the chat
endpoint).
properties:
id:
type: string
description: A unique identifier for the completion.
choices:
type: array
description: The list of completion choices the model generated for the input
prompt.
items:
type: object
required:
- finish_reason
- index
- logprobs
- text
properties:
finish_reason:
type: string
description: >
The reason the model stopped generating tokens. This will be
`stop` if the model hit a natural stop point or a provided
stop sequence,
`length` if the maximum number of tokens specified in the
request was reached,
or `content_filter` if content was omitted due to a flag from
our content filters.
enum:
- stop
- length
- content_filter
index:
type: integer
logprobs:
type: object
nullable: true
properties:
text_offset:
type: array
items:
type: integer
token_logprobs:
type: array
items:
type: number
tokens:
type: array
items:
type: string
top_logprobs:
type: array
items:
type: object
additionalProperties:
type: number
text:
type: string
created:
type: integer
description: The Unix timestamp (in seconds) of when the completion was created.
model:
type: string
description: The model used for completion.
system_fingerprint:
type: string
description: >
This fingerprint represents the backend configuration that the model
runs with.
Can be used in conjunction with the `seed` request parameter to
understand when backend changes have been made that might impact
determinism.
object:
type: string
          description: The object type, which is always `text_completion`.
enum:
- text_completion
x-stainless-const: true
usage:
$ref: "#/components/schemas/CompletionUsage"
required:
- id
- object
- created
- model
- choices
x-oaiMeta:
name: The completion object
legacy: true
example: |
{
"id": "cmpl-uqkvlQyYK7bGYrRHQ0eXlWi7",
"object": "text_completion",
"created": 1589478378,
"model": "gpt-4-turbo",
"choices": [
{
"text": "\n\nThis is indeed a test",
"index": 0,
"logprobs": null,
"finish_reason": "length"
}
],
"usage": {
"prompt_tokens": 5,
"completion_tokens": 7,
"total_tokens": 12
}
}
CreateEmbeddingRequest:
type: object
additionalProperties: false
properties:
input:
description: |
Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. The input must not exceed the max input tokens for the model (8192 tokens for `text-embedding-ada-002`), cannot be an empty string, and any array must be 2048 dimensions or less. [Example Python code](https://cookbook.openai.com/examples/how_to_count_tokens_with_tiktoken) for counting tokens. Some models may also impose a limit on total number of tokens summed across inputs.
example: The quick brown fox jumped over the lazy dog
oneOf:
- type: string
title: string
description: The string that will be turned into an embedding.
default: ""
example: This is a test.
- type: array
title: array
description: The array of strings that will be turned into an embedding.
minItems: 1
maxItems: 2048
items:
type: string
default: ""
example: "['This is a test.']"
- type: array
title: array
description: The array of integers that will be turned into an embedding.
minItems: 1
maxItems: 2048
items:
type: integer
example: "[1212, 318, 257, 1332, 13]"
- type: array
title: array
description: The array of arrays containing integers that will be turned into an
embedding.
minItems: 1
maxItems: 2048
items:
type: array
minItems: 1
items:
type: integer
example: "[[1212, 318, 257, 1332, 13]]"
model:
description: >
ID of the model to use. You can use the [List
models](/docs/api-reference/models/list) API to see all of your
available models, or see our [Model overview](/docs/models) for
descriptions of them.
example: text-embedding-3-small
anyOf:
- type: string
- type: string
enum:
- text-embedding-ada-002
- text-embedding-3-small
- text-embedding-3-large
x-oaiTypeLabel: string
encoding_format:
description: The format to return the embeddings in. Can be either `float` or
[`base64`](https://pypi.org/project/pybase64/).
example: float
default: float
type: string
enum:
- float
- base64
dimensions:
description: >
The number of dimensions the resulting output embeddings should
have. Only supported in `text-embedding-3` and later models.
type: integer
minimum: 1
user:
type: string
example: user-1234
description: >
A unique identifier representing your end-user, which can help
OpenAI to monitor and detect abuse. [Learn
more](/docs/guides/safety-best-practices#end-user-ids).
required:
- model
- input
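    # Illustrative only (not part of the upstream spec): assuming the schema
    # above, a minimal CreateEmbeddingRequest body might look like
    #   {"model": "text-embedding-3-small", "input": "The quick brown fox"}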
CreateEmbeddingResponse:
type: object
properties:
data:
type: array
description: The list of embeddings generated by the model.
items:
$ref: "#/components/schemas/Embedding"
model:
type: string
description: The name of the model used to generate the embedding.
object:
type: string
description: The object type, which is always "list".
enum:
- list
x-stainless-const: true
usage:
type: object
description: The usage information for the request.
properties:
prompt_tokens:
type: integer
description: The number of tokens used by the prompt.
total_tokens:
type: integer
description: The total number of tokens used by the request.
required:
- prompt_tokens
- total_tokens
required:
- object
- model
- data
- usage
CreateEvalCompletionsRunDataSource:
type: object
title: CompletionsRunDataSource
description: >
A CompletionsRunDataSource object describing a model sampling
configuration.
properties:
type:
type: string
enum:
- completions
default: completions
description: The type of run data source. Always `completions`.
input_messages:
oneOf:
- type: object
title: TemplateInputMessages
properties:
type:
type: string
enum:
- template
description: The type of input messages. Always `template`.
template:
type: array
description: A list of chat messages forming the prompt or context. May include
variable references to the "item" namespace, ie
{{item.name}}.
items:
oneOf:
- $ref: "#/components/schemas/EasyInputMessage"
- $ref: "#/components/schemas/EvalItem"
required:
- type
- template
- type: object
title: ItemReferenceInputMessages
properties:
type:
type: string
enum:
- item_reference
description: The type of input messages. Always `item_reference`.
item_reference:
type: string
                  description: A reference to a variable in the "item" namespace, e.g. "item.name".
required:
- type
- item_reference
sampling_params:
type: object
properties:
temperature:
type: number
description: A higher temperature increases randomness in the outputs.
default: 1
max_completion_tokens:
type: integer
description: The maximum number of tokens in the generated output.
top_p:
type: number
description: An alternative to temperature for nucleus sampling; 1.0 includes
all tokens.
default: 1
seed:
type: integer
              description: A seed value to initialize the randomness during sampling.
default: 42
model:
type: string
description: The name of the model to use for generating completions (e.g.
"o3-mini").
source:
oneOf:
- $ref: "#/components/schemas/EvalJsonlFileContentSource"
- $ref: "#/components/schemas/EvalJsonlFileIdSource"
- $ref: "#/components/schemas/EvalStoredCompletionsSource"
required:
- type
- source
x-oaiMeta:
name: The completions data source object used to configure an individual run
group: eval runs
example: |
{
"name": "gpt-4o-mini-2024-07-18",
"data_source": {
"type": "completions",
"input_messages": {
"type": "item_reference",
"item_reference": "item.input"
},
"model": "gpt-4o-mini-2024-07-18",
"source": {
"type": "stored_completions",
"model": "gpt-4o-mini-2024-07-18"
}
}
}
CreateEvalCustomDataSourceConfig:
type: object
title: CustomDataSourceConfig
description: >
A CustomDataSourceConfig object that defines the schema for the data
source used for the evaluation runs.
This schema is used to define the shape of the data that will be:
- Used to define your testing criteria and
- What data is required when creating a run
properties:
type:
type: string
enum:
- custom
default: custom
description: The type of data source. Always `custom`.
x-stainless-const: true
item_schema:
type: object
          description: The JSON schema for each row in the data source.
additionalProperties: true
example: |
{
"type": "object",
"properties": {
"name": {"type": "string"},
"age": {"type": "integer"}
},
"required": ["name", "age"]
}
include_sample_schema:
type: boolean
default: false
          description: Whether the eval should expect you to populate the sample
            namespace (i.e., by generating responses from your data source).
required:
- item_schema
- type
x-oaiMeta:
name: The eval file data source config object
group: evals
example: |
{
"type": "custom",
"item_schema": {
"type": "object",
"properties": {
"name": {"type": "string"},
"age": {"type": "integer"}
},
"required": ["name", "age"]
},
"include_sample_schema": true
}
CreateEvalItem:
title: CreateEvalItem
description: A chat message that makes up the prompt or context. May include
variable references to the "item" namespace, ie {{item.name}}.
type: object
oneOf:
- type: object
title: SimpleInputMessage
properties:
role:
type: string
description: The role of the message (e.g. "system", "assistant", "user").
content:
type: string
description: The content of the message.
required:
- role
- content
- $ref: "#/components/schemas/EvalItem"
x-oaiMeta:
name: The chat message object used to configure an individual run
CreateEvalJsonlRunDataSource:
type: object
title: JsonlRunDataSource
      description: >
        A JsonlRunDataSource object that specifies a JSONL file that
        matches the eval.
properties:
type:
type: string
enum:
- jsonl
default: jsonl
description: The type of data source. Always `jsonl`.
x-stainless-const: true
source:
oneOf:
- $ref: "#/components/schemas/EvalJsonlFileContentSource"
- $ref: "#/components/schemas/EvalJsonlFileIdSource"
required:
- type
- source
x-oaiMeta:
name: The file data source object for the eval run configuration
group: evals
example: |
{
"type": "jsonl",
"source": {
"type": "file_id",
"id": "file-9GYS6xbkWgWhmE7VoLUWFg"
}
}
CreateEvalLabelModelGrader:
type: object
title: LabelModelGrader
description: >
A LabelModelGrader object which uses a model to assign labels to each
item
in the evaluation.
properties:
type:
description: The object type, which is always `label_model`.
type: string
enum:
- label_model
x-stainless-const: true
name:
type: string
description: The name of the grader.
model:
type: string
description: The model to use for the evaluation. Must support structured outputs.
input:
type: array
description: A list of chat messages forming the prompt or context. May include
variable references to the "item" namespace, ie {{item.name}}.
items:
$ref: "#/components/schemas/CreateEvalItem"
labels:
type: array
items:
type: string
description: The labels to classify to each item in the evaluation.
passing_labels:
type: array
items:
type: string
description: The labels that indicate a passing result. Must be a subset of
labels.
required:
- type
- model
- input
- passing_labels
- labels
- name
x-oaiMeta:
name: The eval label model grader object
group: evals
example: >
{
"type": "label_model",
"model": "gpt-4o-2024-08-06",
"input": [
{
"role": "system",
"content": "Classify the sentiment of the following statement as one of 'positive', 'neutral', or 'negative'"
},
{
"role": "user",
"content": "Statement: {{item.response}}"
}
],
"passing_labels": ["positive"],
"labels": ["positive", "neutral", "negative"],
"name": "Sentiment label grader"
}
CreateEvalLogsDataSourceConfig:
type: object
title: LogsDataSourceConfig
description: >
A data source config which specifies the metadata property of your
stored completions query.
This is usually metadata like `usecase=chatbot` or `prompt-version=v2`,
etc.
properties:
type:
type: string
enum:
- logs
default: logs
description: The type of data source. Always `logs`.
x-stainless-const: true
metadata:
type: object
description: Metadata filters for the logs data source.
additionalProperties: true
example: |
{
"use_case": "customer_support_agent"
}
required:
- type
x-oaiMeta:
name: The logs data source object for evals
group: evals
example: |
{
"type": "logs",
"metadata": {
"use_case": "customer_support_agent"
}
}
CreateEvalRequest:
type: object
title: CreateEvalRequest
properties:
name:
type: string
description: The name of the evaluation.
metadata:
$ref: "#/components/schemas/Metadata"
data_source_config:
type: object
description: The configuration for the data source used for the evaluation runs.
oneOf:
- $ref: "#/components/schemas/CreateEvalCustomDataSourceConfig"
- $ref: "#/components/schemas/CreateEvalLogsDataSourceConfig"
testing_criteria:
type: array
description: A list of graders for all eval runs in this group.
items:
oneOf:
- $ref: "#/components/schemas/CreateEvalLabelModelGrader"
- $ref: "#/components/schemas/EvalStringCheckGrader"
- $ref: "#/components/schemas/EvalTextSimilarityGrader"
- $ref: "#/components/schemas/EvalPythonGrader"
- $ref: "#/components/schemas/EvalScoreModelGrader"
required:
- data_source_config
- testing_criteria
CreateEvalResponsesRunDataSource:
type: object
title: ResponsesRunDataSource
description: >
A ResponsesRunDataSource object describing a model sampling
configuration.
properties:
type:
type: string
enum:
- completions
default: completions
description: The type of run data source. Always `completions`.
input_messages:
oneOf:
- type: object
properties:
type:
type: string
enum:
- template
description: The type of input messages. Always `template`.
template:
type: array
description: A list of chat messages forming the prompt or context. May include
variable references to the "item" namespace, ie
{{item.name}}.
items:
oneOf:
- type: object
title: ChatMessage
properties:
role:
type: string
description: The role of the message (e.g. "system", "assistant", "user").
content:
type: string
description: The content of the message.
required:
- role
- content
- $ref: "#/components/schemas/EvalItem"
required:
- type
- template
- type: object
properties:
type:
type: string
enum:
- item_reference
description: The type of input messages. Always `item_reference`.
item_reference:
type: string
                  description: A reference to a variable in the "item" namespace, e.g. "item.name".
required:
- type
- item_reference
sampling_params:
type: object
properties:
temperature:
type: number
description: A higher temperature increases randomness in the outputs.
default: 1
max_completion_tokens:
type: integer
description: The maximum number of tokens in the generated output.
top_p:
type: number
description: An alternative to temperature for nucleus sampling; 1.0 includes
all tokens.
default: 1
seed:
type: integer
              description: A seed value to initialize the randomness during sampling.
default: 42
model:
type: string
description: The name of the model to use for generating completions (e.g.
"o3-mini").
source:
oneOf:
- $ref: "#/components/schemas/EvalJsonlFileContentSource"
- $ref: "#/components/schemas/EvalJsonlFileIdSource"
- $ref: "#/components/schemas/EvalResponsesSource"
required:
- type
- source
x-oaiMeta:
name: The completions data source object used to configure an individual run
group: eval runs
example: |
{
"name": "gpt-4o-mini-2024-07-18",
"data_source": {
"type": "completions",
"input_messages": {
"type": "item_reference",
"item_reference": "item.input"
},
"model": "gpt-4o-mini-2024-07-18",
"source": {
"type": "stored_completions",
"model": "gpt-4o-mini-2024-07-18"
}
}
}
CreateEvalRunRequest:
type: object
title: CreateEvalRunRequest
properties:
name:
type: string
description: The name of the run.
metadata:
$ref: "#/components/schemas/Metadata"
data_source:
type: object
description: Details about the run's data source.
oneOf:
- $ref: "#/components/schemas/CreateEvalJsonlRunDataSource"
- $ref: "#/components/schemas/CreateEvalCompletionsRunDataSource"
- $ref: "#/components/schemas/CreateEvalResponsesRunDataSource"
required:
- data_source
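    # Illustrative request body for CreateEvalRunRequest, using the
    # completions-style run data source defined above (model names and the
    # stored-completions source are example values):
    #
    # ```json
    # {
    #   "name": "gpt-4o-mini run",
    #   "data_source": {
    #     "type": "completions",
    #     "input_messages": {
    #       "type": "item_reference",
    #       "item_reference": "item.input"
    #     },
    #     "model": "gpt-4o-mini-2024-07-18",
    #     "source": {
    #       "type": "stored_completions",
    #       "model": "gpt-4o-mini-2024-07-18"
    #     }
    #   }
    # }
    # ```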
CreateFileRequest:
type: object
additionalProperties: false
properties:
file:
description: |
The File object (not file name) to be uploaded.
type: string
format: binary
purpose:
description: >
          The intended purpose of the uploaded file. One of:

          - `assistants`: Used in the Assistants API

          - `batch`: Used in the Batch API

          - `fine-tune`: Used for fine-tuning

          - `vision`: Images used for vision fine-tuning

          - `user_data`: Flexible file type for any purpose

          - `evals`: Used for eval data sets
type: string
enum:
- assistants
- batch
- fine-tune
- vision
- user_data
- evals
required:
- file
- purpose
CreateFineTuningCheckpointPermissionRequest:
type: object
additionalProperties: false
properties:
project_ids:
type: array
description: The project identifiers to grant access to.
items:
type: string
required:
- project_ids
CreateFineTuningJobRequest:
type: object
properties:
model:
description: >
The name of the model to fine-tune. You can select one of the
[supported
models](/docs/guides/fine-tuning#which-models-can-be-fine-tuned).
example: gpt-4o-mini
anyOf:
- type: string
- type: string
enum:
- babbage-002
- davinci-002
- gpt-3.5-turbo
- gpt-4o-mini
x-oaiTypeLabel: string
training_file:
description: >
The ID of an uploaded file that contains training data.
See [upload file](/docs/api-reference/files/create) for how to
upload a file.
Your dataset must be formatted as a JSONL file. Additionally, you
must upload your file with the purpose `fine-tune`.
The contents of the file should differ depending on if the model
uses the [chat](/docs/api-reference/fine-tuning/chat-input),
[completions](/docs/api-reference/fine-tuning/completions-input)
format, or if the fine-tuning method uses the
[preference](/docs/api-reference/fine-tuning/preference-input)
format.
See the [fine-tuning guide](/docs/guides/fine-tuning) for more
details.
type: string
example: file-abc123
hyperparameters:
type: object
description: >
The hyperparameters used for the fine-tuning job.
This value is now deprecated in favor of `method`, and should be
passed in under the `method` parameter.
properties:
batch_size:
description: >
Number of examples in each batch. A larger batch size means that
model parameters
are updated less frequently, but with lower variance.
oneOf:
- type: string
enum:
- auto
x-stainless-const: true
- type: integer
minimum: 1
maximum: 256
learning_rate_multiplier:
description: >
Scaling factor for the learning rate. A smaller learning rate
may be useful to avoid
overfitting.
oneOf:
- type: string
enum:
- auto
x-stainless-const: true
- type: number
minimum: 0
exclusiveMinimum: true
n_epochs:
description: >
The number of epochs to train the model for. An epoch refers to
one full cycle
through the training dataset.
oneOf:
- type: string
enum:
- auto
x-stainless-const: true
- type: integer
minimum: 1
maximum: 50
deprecated: true
suffix:
description: >
A string of up to 64 characters that will be added to your
fine-tuned model name.
For example, a `suffix` of "custom-model-name" would produce a model
name like `ft:gpt-4o-mini:openai:custom-model-name:7p4lURel`.
type: string
minLength: 1
maxLength: 64
default: null
nullable: true
validation_file:
description: >
The ID of an uploaded file that contains validation data.
If you provide this file, the data is used to generate validation
metrics periodically during fine-tuning. These metrics can be viewed
in
the fine-tuning results file.
The same data should not be present in both train and validation
files.
Your dataset must be formatted as a JSONL file. You must upload your
file with the purpose `fine-tune`.
See the [fine-tuning guide](/docs/guides/fine-tuning) for more
details.
type: string
nullable: true
example: file-abc123
integrations:
type: array
description: A list of integrations to enable for your fine-tuning job.
nullable: true
items:
type: object
required:
- type
- wandb
properties:
type:
description: >
The type of integration to enable. Currently, only "wandb"
(Weights and Biases) is supported.
oneOf:
- type: string
enum:
- wandb
x-stainless-const: true
wandb:
type: object
description: >
The settings for your integration with Weights and Biases.
This payload specifies the project that
metrics will be sent to. Optionally, you can set an explicit
display name for your run, add tags
to your run, and set a default entity (team, username, etc) to
be associated with your run.
required:
- project
properties:
project:
description: >
The name of the project that the new run will be created
under.
type: string
example: my-wandb-project
name:
description: >
A display name to set for the run. If not set, we will use
the Job ID as the name.
nullable: true
type: string
entity:
description: >
The entity to use for the run. This allows you to set the
team or username of the WandB user that you would
like associated with the run. If not set, the default
entity for the registered WandB API key is used.
nullable: true
type: string
tags:
description: >
A list of tags to be attached to the newly created run.
These tags are passed through directly to WandB. Some
default tags are generated by OpenAI: "openai/finetune",
"openai/{base-model}", "openai/{ftjob-abcdef}".
type: array
items:
type: string
example: custom-tag
seed:
description: >
The seed controls the reproducibility of the job. Passing in the
same seed and job parameters should produce the same results, but
may differ in rare cases.
If a seed is not specified, one will be generated for you.
type: integer
nullable: true
minimum: 0
maximum: 2147483647
example: 42
method:
$ref: "#/components/schemas/FineTuneMethod"
metadata:
$ref: "#/components/schemas/Metadata"
required:
- model
- training_file
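    # Minimal illustrative request body for CreateFineTuningJobRequest (the
    # file ID and model name are example values; `method` and `integrations`
    # are omitted for brevity):
    #
    # ```json
    # {
    #   "model": "gpt-4o-mini",
    #   "training_file": "file-abc123",
    #   "suffix": "custom-model-name",
    #   "seed": 42
    # }
    # ```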
CreateImageEditRequest:
type: object
properties:
image:
type: array
items:
type: string
format: binary
maxItems: 16
minItems: 1
description: >
The image(s) to edit. Must be a supported image file or an array of
images.
For `gpt-image-1`, each image should be a `png`, `webp`, or `jpg`
file less
than 25MB. You can provide up to 16 images.
For `dall-e-2`, you can only provide one image, and it should be a
square
`png` file less than 4MB.
prompt:
description: A text description of the desired image(s). The maximum length is
1000 characters for `dall-e-2`, and 32000 characters for
`gpt-image-1`.
type: string
example: A cute baby sea otter wearing a beret
mask:
description: An additional image whose fully transparent areas (e.g. where alpha
is zero) indicate where `image` should be edited. If there are
multiple images provided, the mask will be applied on the first
image. Must be a valid PNG file, less than 4MB, and have the same
dimensions as `image`.
type: string
format: binary
model:
anyOf:
- type: string
- type: string
enum:
- dall-e-2
- gpt-image-1
x-stainless-const: true
x-oaiTypeLabel: string
example: gpt-image-1
nullable: true
description: The model to use for image generation. Only `dall-e-2` and
`gpt-image-1` are supported. Defaults to `dall-e-2` unless a
parameter specific to `gpt-image-1` is used.
n:
type: integer
minimum: 1
maximum: 10
default: 1
example: 1
nullable: true
description: The number of images to generate. Must be between 1 and 10.
size:
type: string
enum:
- 256x256
- 512x512
- 1024x1024
- 1536x1024
- 1024x1536
- auto
default: 1024x1024
example: 1024x1024
nullable: true
description: The size of the generated images. Must be one of `1024x1024`,
`1536x1024` (landscape), `1024x1536` (portrait), or `auto` (default
value) for `gpt-image-1`, and one of `256x256`, `512x512`, or
`1024x1024` for `dall-e-2`.
response_format:
type: string
enum:
- url
- b64_json
default: url
example: url
nullable: true
description: The format in which the generated images are returned. Must be one
of `url` or `b64_json`. URLs are only valid for 60 minutes after the
image has been generated. This parameter is only supported for
`dall-e-2`, as `gpt-image-1` will always return base64-encoded
images.
user:
type: string
example: user-1234
description: >
A unique identifier representing your end-user, which can help
OpenAI to monitor and detect abuse. [Learn
more](/docs/guides/safety-best-practices#end-user-ids).
quality:
type: string
enum:
- standard
- low
- medium
- high
- auto
example: high
nullable: true
description: >
The quality of the image that will be generated. `high`, `medium`
and `low` are only supported for `gpt-image-1`. `dall-e-2` only
supports `standard` quality. Defaults to `auto`.
required:
- prompt
- image
CreateImageRequest:
type: object
properties:
prompt:
description: A text description of the desired image(s). The maximum length is
32000 characters for `gpt-image-1`, 1000 characters for `dall-e-2`
and 4000 characters for `dall-e-3`.
type: string
example: A cute baby sea otter
model:
anyOf:
- type: string
- type: string
enum:
- dall-e-2
- dall-e-3
- gpt-image-1
x-oaiTypeLabel: string
example: gpt-image-1
nullable: true
description: The model to use for image generation. One of `dall-e-2`,
`dall-e-3`, or `gpt-image-1`. Defaults to `dall-e-2` unless a
parameter specific to `gpt-image-1` is used.
n:
type: integer
minimum: 1
maximum: 10
default: 1
example: 1
nullable: true
description: The number of images to generate. Must be between 1 and 10. For
`dall-e-3`, only `n=1` is supported.
quality:
type: string
enum:
- standard
- hd
- low
- medium
- high
- auto
example: medium
nullable: true
description: >
          The quality of the image that will be generated.

          - `auto` (default value) will automatically select the best quality
          for the given model.

          - `high`, `medium` and `low` are supported for `gpt-image-1`.

          - `hd` and `standard` are supported for `dall-e-3`.

          - `standard` is the only option for `dall-e-2`.
response_format:
type: string
enum:
- url
- b64_json
default: url
example: url
nullable: true
description: The format in which generated images with `dall-e-2` and `dall-e-3`
are returned. Must be one of `url` or `b64_json`. URLs are only
valid for 60 minutes after the image has been generated. This
parameter isn't supported for `gpt-image-1` which will always return
base64-encoded images.
output_format:
type: string
enum:
- png
- jpeg
- webp
default: png
example: png
nullable: true
description: The format in which the generated images are returned. This
parameter is only supported for `gpt-image-1`. Must be one of `png`,
`jpeg`, or `webp`.
output_compression:
type: integer
default: 100
example: 100
nullable: true
description: The compression level (0-100%) for the generated images. This
parameter is only supported for `gpt-image-1` with the `webp` or
`jpeg` output formats, and defaults to 100.
size:
type: string
enum:
- auto
- 1024x1024
- 1536x1024
- 1024x1536
- 256x256
- 512x512
- 1792x1024
- 1024x1792
example: 1024x1024
nullable: true
description: The size of the generated images. Must be one of `1024x1024`,
`1536x1024` (landscape), `1024x1536` (portrait), or `auto` (default
value) for `gpt-image-1`, one of `256x256`, `512x512`, or
`1024x1024` for `dall-e-2`, and one of `1024x1024`, `1792x1024`, or
`1024x1792` for `dall-e-3`.
moderation:
type: string
enum:
- low
- auto
example: low
nullable: true
description: Control the content-moderation level for images generated by
`gpt-image-1`. Must be either `low` for less restrictive filtering
or `auto` (default value).
background:
type: string
enum:
- transparent
- opaque
- auto
example: transparent
nullable: true
description: >
            Allows you to set transparency for the background of the generated
            image(s).
This parameter is only supported for `gpt-image-1`. Must be one of
`transparent`, `opaque` or `auto` (default value). When `auto` is
used, the
model will automatically determine the best background for the
image.
If `transparent`, the output format needs to support transparency,
so it
should be set to either `png` (default value) or `webp`.
style:
type: string
enum:
- vivid
- natural
default: vivid
example: vivid
nullable: true
description: The style of the generated images. This parameter is only supported
for `dall-e-3`. Must be one of `vivid` or `natural`. Vivid causes
the model to lean towards generating hyper-real and dramatic images.
Natural causes the model to produce more natural, less hyper-real
looking images.
user:
type: string
example: user-1234
description: >
A unique identifier representing your end-user, which can help
OpenAI to monitor and detect abuse. [Learn
more](/docs/guides/safety-best-practices#end-user-ids).
required:
- prompt
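    # Illustrative request body for CreateImageRequest using `gpt-image-1`
    # (all values are examples drawn from the enums above):
    #
    # ```json
    # {
    #   "model": "gpt-image-1",
    #   "prompt": "A cute baby sea otter",
    #   "n": 1,
    #   "size": "1024x1024",
    #   "quality": "medium",
    #   "output_format": "png"
    # }
    # ```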
CreateImageVariationRequest:
type: object
properties:
image:
description: The image to use as the basis for the variation(s). Must be a valid
PNG file, less than 4MB, and square.
type: string
format: binary
model:
anyOf:
- type: string
- type: string
enum:
- dall-e-2
x-stainless-const: true
x-oaiTypeLabel: string
example: dall-e-2
nullable: true
description: The model to use for image generation. Only `dall-e-2` is supported
at this time.
n:
type: integer
minimum: 1
maximum: 10
default: 1
example: 1
nullable: true
description: The number of images to generate. Must be between 1 and 10.
response_format:
type: string
enum:
- url
- b64_json
default: url
example: url
nullable: true
description: The format in which the generated images are returned. Must be one
of `url` or `b64_json`. URLs are only valid for 60 minutes after the
image has been generated.
size:
type: string
enum:
- 256x256
- 512x512
- 1024x1024
default: 1024x1024
example: 1024x1024
nullable: true
description: The size of the generated images. Must be one of `256x256`,
`512x512`, or `1024x1024`.
user:
type: string
example: user-1234
description: >
A unique identifier representing your end-user, which can help
OpenAI to monitor and detect abuse. [Learn
more](/docs/guides/safety-best-practices#end-user-ids).
required:
- image
CreateMessageRequest:
type: object
additionalProperties: false
required:
- role
- content
properties:
role:
type: string
enum:
- user
- assistant
description: >
          The role of the entity that is creating the message. Allowed values
          include:

          - `user`: Indicates the message is sent by an actual user and should
          be used in most cases to represent user-generated messages.

          - `assistant`: Indicates the message is generated by the assistant.
          Use this value to insert messages from the assistant into the
          conversation.
content:
oneOf:
- type: string
description: The text contents of the message.
title: Text content
- type: array
              description: An array of content parts with a defined type. Each part can be of
                type `text`, or images can be passed with `image_url` or
                `image_file`. Image types are only supported on
                [Vision-compatible models](/docs/models).
title: Array of content parts
items:
oneOf:
- $ref: "#/components/schemas/MessageContentImageFileObject"
- $ref: "#/components/schemas/MessageContentImageUrlObject"
- $ref: "#/components/schemas/MessageRequestContentTextObject"
minItems: 1
attachments:
type: array
items:
type: object
properties:
file_id:
type: string
description: The ID of the file to attach to the message.
tools:
description: The tools to add this file to.
type: array
items:
oneOf:
- $ref: "#/components/schemas/AssistantToolsCode"
- $ref: "#/components/schemas/AssistantToolsFileSearchTypeOnly"
description: A list of files attached to the message, and the tools they should
be added to.
required:
- file_id
- tools
nullable: true
metadata:
$ref: "#/components/schemas/Metadata"
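    # Illustrative request body for CreateMessageRequest with a file attached
    # for the file search tool (the file ID is an example value):
    #
    # ```json
    # {
    #   "role": "user",
    #   "content": "Summarize the attached document in three bullet points.",
    #   "attachments": [
    #     {
    #       "file_id": "file-abc123",
    #       "tools": [{ "type": "file_search" }]
    #     }
    #   ]
    # }
    # ```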
CreateModelResponseProperties:
allOf:
- $ref: "#/components/schemas/ModelResponseProperties"
CreateModerationRequest:
type: object
properties:
input:
description: >
Input (or inputs) to classify. Can be a single string, an array of
strings, or
an array of multi-modal input objects similar to other models.
oneOf:
- type: string
description: A string of text to classify for moderation.
default: ""
example: I want to kill them.
- type: array
description: An array of strings to classify for moderation.
items:
type: string
default: ""
example: I want to kill them.
- type: array
description: An array of multi-modal inputs to the moderation model.
items:
oneOf:
- type: object
description: An object describing an image to classify.
properties:
type:
description: Always `image_url`.
type: string
enum:
- image_url
x-stainless-const: true
image_url:
type: object
description: Contains either an image URL or a data URL for a base64 encoded
image.
properties:
url:
type: string
description: Either a URL of the image or the base64 encoded image data.
format: uri
example: https://example.com/image.jpg
required:
- url
required:
- type
- image_url
- type: object
description: An object describing text to classify.
properties:
type:
description: Always `text`.
type: string
enum:
- text
x-stainless-const: true
text:
description: A string of text to classify.
type: string
example: I want to kill them
required:
- type
- text
model:
description: |
The content moderation model you would like to use. Learn more in
[the moderation guide](/docs/guides/moderation), and learn about
available models [here](/docs/models#moderation).
nullable: false
example: omni-moderation-2024-09-26
anyOf:
- type: string
- type: string
enum:
- omni-moderation-latest
- omni-moderation-2024-09-26
- text-moderation-latest
- text-moderation-stable
x-oaiTypeLabel: string
required:
- input
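    # Illustrative request body for CreateModerationRequest with multi-modal
    # input (the image URL is an example value):
    #
    # ```json
    # {
    #   "model": "omni-moderation-latest",
    #   "input": [
    #     { "type": "text", "text": "I want to kill them" },
    #     {
    #       "type": "image_url",
    #       "image_url": { "url": "https://example.com/image.jpg" }
    #     }
    #   ]
    # }
    # ```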
CreateModerationResponse:
type: object
      description: Represents whether a given text input is potentially harmful.
properties:
id:
type: string
description: The unique identifier for the moderation request.
model:
type: string
description: The model used to generate the moderation results.
results:
type: array
description: A list of moderation objects.
items:
type: object
properties:
flagged:
type: boolean
                description: Whether any of the categories below are flagged.
categories:
type: object
description: A list of the categories, and whether they are flagged or not.
properties:
hate:
type: boolean
description: Content that expresses, incites, or promotes hate based on race,
gender, ethnicity, religion, nationality, sexual
orientation, disability status, or caste. Hateful content
aimed at non-protected groups (e.g., chess players) is
harassment.
hate/threatening:
type: boolean
description: Hateful content that also includes violence or serious harm towards
the targeted group based on race, gender, ethnicity,
religion, nationality, sexual orientation, disability
status, or caste.
harassment:
type: boolean
description: Content that expresses, incites, or promotes harassing language
towards any target.
harassment/threatening:
type: boolean
description: Harassment content that also includes violence or serious harm
towards any target.
illicit:
type: boolean
nullable: true
description: Content that includes instructions or advice that facilitate the
planning or execution of wrongdoing, or that gives advice
or instruction on how to commit illicit acts. For example,
"how to shoplift" would fit this category.
illicit/violent:
type: boolean
nullable: true
description: Content that includes instructions or advice that facilitate the
planning or execution of wrongdoing that also includes
violence, or that gives advice or instruction on the
procurement of any weapon.
self-harm:
type: boolean
description: Content that promotes, encourages, or depicts acts of self-harm,
such as suicide, cutting, and eating disorders.
self-harm/intent:
type: boolean
description: Content where the speaker expresses that they are engaging or
intend to engage in acts of self-harm, such as suicide,
cutting, and eating disorders.
self-harm/instructions:
type: boolean
description: Content that encourages performing acts of self-harm, such as
suicide, cutting, and eating disorders, or that gives
instructions or advice on how to commit such acts.
sexual:
type: boolean
description: Content meant to arouse sexual excitement, such as the description
of sexual activity, or that promotes sexual services
(excluding sex education and wellness).
sexual/minors:
type: boolean
description: Sexual content that includes an individual who is under 18 years
old.
violence:
type: boolean
description: Content that depicts death, violence, or physical injury.
violence/graphic:
type: boolean
description: Content that depicts death, violence, or physical injury in graphic
detail.
required:
- hate
- hate/threatening
- harassment
- harassment/threatening
- illicit
- illicit/violent
- self-harm
- self-harm/intent
- self-harm/instructions
- sexual
- sexual/minors
- violence
- violence/graphic
category_scores:
type: object
                description: A list of the categories along with their scores as predicted by
                  the model.
properties:
hate:
type: number
description: The score for the category 'hate'.
hate/threatening:
type: number
description: The score for the category 'hate/threatening'.
harassment:
type: number
description: The score for the category 'harassment'.
harassment/threatening:
type: number
description: The score for the category 'harassment/threatening'.
illicit:
type: number
description: The score for the category 'illicit'.
illicit/violent:
type: number
description: The score for the category 'illicit/violent'.
self-harm:
type: number
description: The score for the category 'self-harm'.
self-harm/intent:
type: number
description: The score for the category 'self-harm/intent'.
self-harm/instructions:
type: number
description: The score for the category 'self-harm/instructions'.
sexual:
type: number
description: The score for the category 'sexual'.
sexual/minors:
type: number
description: The score for the category 'sexual/minors'.
violence:
type: number
description: The score for the category 'violence'.
violence/graphic:
type: number
description: The score for the category 'violence/graphic'.
required:
- hate
- hate/threatening
- harassment
- harassment/threatening
- illicit
- illicit/violent
- self-harm
- self-harm/intent
- self-harm/instructions
- sexual
- sexual/minors
- violence
- violence/graphic
category_applied_input_types:
type: object
description: A list of the categories along with the input type(s) that the
score applies to.
properties:
hate:
type: array
description: The applied input type(s) for the category 'hate'.
items:
type: string
enum:
- text
x-stainless-const: true
hate/threatening:
type: array
description: The applied input type(s) for the category 'hate/threatening'.
items:
type: string
enum:
- text
x-stainless-const: true
harassment:
type: array
description: The applied input type(s) for the category 'harassment'.
items:
type: string
enum:
- text
x-stainless-const: true
harassment/threatening:
type: array
description: The applied input type(s) for the category
'harassment/threatening'.
items:
type: string
enum:
- text
x-stainless-const: true
illicit:
type: array
description: The applied input type(s) for the category 'illicit'.
items:
type: string
enum:
- text
x-stainless-const: true
illicit/violent:
type: array
description: The applied input type(s) for the category 'illicit/violent'.
items:
type: string
enum:
- text
x-stainless-const: true
self-harm:
type: array
description: The applied input type(s) for the category 'self-harm'.
items:
type: string
enum:
- text
- image
self-harm/intent:
type: array
description: The applied input type(s) for the category 'self-harm/intent'.
items:
type: string
enum:
- text
- image
self-harm/instructions:
type: array
description: The applied input type(s) for the category
'self-harm/instructions'.
items:
type: string
enum:
- text
- image
sexual:
type: array
description: The applied input type(s) for the category 'sexual'.
items:
type: string
enum:
- text
- image
sexual/minors:
type: array
description: The applied input type(s) for the category 'sexual/minors'.
items:
type: string
enum:
- text
x-stainless-const: true
violence:
type: array
description: The applied input type(s) for the category 'violence'.
items:
type: string
enum:
- text
- image
violence/graphic:
type: array
description: The applied input type(s) for the category 'violence/graphic'.
items:
type: string
enum:
- text
- image
required:
- hate
- hate/threatening
- harassment
- harassment/threatening
- illicit
- illicit/violent
- self-harm
- self-harm/intent
- self-harm/instructions
- sexual
- sexual/minors
- violence
- violence/graphic
required:
- flagged
- categories
- category_scores
- category_applied_input_types
required:
- id
- model
- results
x-oaiMeta:
name: The moderation object
example: |
{
"id": "modr-0d9740456c391e43c445bf0f010940c7",
"model": "omni-moderation-latest",
"results": [
{
"flagged": true,
"categories": {
"harassment": true,
"harassment/threatening": true,
"sexual": false,
"hate": false,
"hate/threatening": false,
"illicit": false,
"illicit/violent": false,
"self-harm/intent": false,
"self-harm/instructions": false,
"self-harm": false,
"sexual/minors": false,
"violence": true,
"violence/graphic": true
},
"category_scores": {
"harassment": 0.8189693396524255,
"harassment/threatening": 0.804985420696006,
"sexual": 1.573112165348997e-6,
"hate": 0.007562942636942845,
"hate/threatening": 0.004208854591835476,
"illicit": 0.030535955153511665,
"illicit/violent": 0.008925306722380033,
"self-harm/intent": 0.00023023930975076432,
"self-harm/instructions": 0.0002293869201073356,
"self-harm": 0.012598046106750154,
"sexual/minors": 2.212566909570261e-8,
"violence": 0.9999992735124786,
"violence/graphic": 0.843064871157054
},
"category_applied_input_types": {
"harassment": [
"text"
],
"harassment/threatening": [
"text"
],
"sexual": [
"text",
"image"
],
"hate": [
"text"
],
"hate/threatening": [
"text"
],
"illicit": [
"text"
],
"illicit/violent": [
"text"
],
"self-harm/intent": [
"text",
"image"
],
"self-harm/instructions": [
"text",
"image"
],
"self-harm": [
"text",
"image"
],
"sexual/minors": [
"text"
],
"violence": [
"text",
"image"
],
"violence/graphic": [
"text",
"image"
]
}
}
]
}
CreateResponse:
allOf:
- $ref: "#/components/schemas/CreateModelResponseProperties"
- $ref: "#/components/schemas/ResponseProperties"
- type: object
properties:
input:
description: >
                Text, image, or file inputs to the model, used to generate a
                response.

                Learn more:

                - [Text inputs and outputs](/docs/guides/text)

                - [Image inputs](/docs/guides/images)

                - [File inputs](/docs/guides/pdf-files)

                - [Conversation state](/docs/guides/conversation-state)

                - [Function calling](/docs/guides/function-calling)
oneOf:
- type: string
title: Text input
description: >
A text input to the model, equivalent to a text input with
the
`user` role.
- type: array
title: Input item list
description: |
A list of one or many input items to the model, containing
different content types.
items:
$ref: "#/components/schemas/InputItem"
include:
type: array
description: >
                Specify additional output data to include in the model
                response. Currently supported values are:

                - `file_search_call.results`: Include the search results of
                the file search tool call.

                - `message.input_image.image_url`: Include image urls from the
                input message.

                - `computer_call_output.output.image_url`: Include image urls
                from the computer call output.
items:
$ref: "#/components/schemas/Includable"
nullable: true
parallel_tool_calls:
type: boolean
description: |
Whether to allow the model to run tool calls in parallel.
default: true
nullable: true
store:
type: boolean
description: >
Whether to store the generated model response for later
retrieval via
API.
default: true
nullable: true
stream:
description: |
If set to true, the model response data will be streamed to the client
as it is generated using [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format).
See the [Streaming section below](/docs/api-reference/responses-streaming)
for more information.
type: boolean
nullable: true
default: false
required:
- model
- input
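    # Illustrative request body for CreateResponse (the `model` field comes
    # from the ModelResponseProperties schema merged in via allOf, which is
    # defined elsewhere in this file; values are examples):
    #
    # ```json
    # {
    #   "model": "gpt-4o",
    #   "input": "Write a one-sentence bedtime story about a unicorn.",
    #   "store": true,
    #   "stream": false
    # }
    # ```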
CreateRunRequest:
type: object
additionalProperties: false
properties:
assistant_id:
description: The ID of the [assistant](/docs/api-reference/assistants) to use to
execute this run.
type: string
model:
description: The ID of the [Model](/docs/api-reference/models) to be used to
execute this run. If a value is provided here, it will override the
model associated with the assistant. If not, the model associated
with the assistant will be used.
example: gpt-4o
anyOf:
- type: string
- $ref: "#/components/schemas/AssistantSupportedModels"
x-oaiTypeLabel: string
nullable: true
reasoning_effort:
$ref: "#/components/schemas/ReasoningEffort"
instructions:
description: Overrides the
[instructions](/docs/api-reference/assistants/createAssistant) of
the assistant. This is useful for modifying the behavior on a
per-run basis.
type: string
nullable: true
additional_instructions:
description: Appends additional instructions at the end of the instructions for
the run. This is useful for modifying the behavior on a per-run
basis without overriding other instructions.
type: string
nullable: true
additional_messages:
description: Adds additional messages to the thread before creating the run.
type: array
items:
$ref: "#/components/schemas/CreateMessageRequest"
nullable: true
tools:
description: Override the tools the assistant can use for this run. This is
useful for modifying the behavior on a per-run basis.
nullable: true
type: array
maxItems: 20
items:
oneOf:
- $ref: "#/components/schemas/AssistantToolsCode"
- $ref: "#/components/schemas/AssistantToolsFileSearch"
- $ref: "#/components/schemas/AssistantToolsFunction"
metadata:
$ref: "#/components/schemas/Metadata"
temperature:
type: number
minimum: 0
maximum: 2
default: 1
example: 1
nullable: true
description: >
What sampling temperature to use, between 0 and 2. Higher values
like 0.8 will make the output more random, while lower values like
0.2 will make it more focused and deterministic.
top_p:
type: number
minimum: 0
maximum: 1
default: 1
example: 1
nullable: true
description: >
An alternative to sampling with temperature, called nucleus
sampling, where the model considers the results of the tokens with
top_p probability mass. So 0.1 means only the tokens comprising the
top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
stream:
type: boolean
nullable: true
description: >
If `true`, returns a stream of events that happen during the Run as
server-sent events, terminating when the Run enters a terminal state
with a `data: [DONE]` message.
max_prompt_tokens:
type: integer
nullable: true
description: >
The maximum number of prompt tokens that may be used over the course
of the run. The run will make a best effort to use only the number
of prompt tokens specified, across multiple turns of the run. If the
run exceeds the number of prompt tokens specified, the run will end
with status `incomplete`. See `incomplete_details` for more info.
minimum: 256
max_completion_tokens:
type: integer
nullable: true
description: >
The maximum number of completion tokens that may be used over the
course of the run. The run will make a best effort to use only the
number of completion tokens specified, across multiple turns of the
run. If the run exceeds the number of completion tokens specified,
the run will end with status `incomplete`. See `incomplete_details`
for more info.
minimum: 256
truncation_strategy:
allOf:
- $ref: "#/components/schemas/TruncationObject"
- nullable: true
tool_choice:
allOf:
- $ref: "#/components/schemas/AssistantsApiToolChoiceOption"
- nullable: true
parallel_tool_calls:
$ref: "#/components/schemas/ParallelToolCalls"
response_format:
$ref: "#/components/schemas/AssistantsApiResponseFormatOption"
nullable: true
required:
- assistant_id
CreateSpeechRequest:
type: object
additionalProperties: false
properties:
model:
description: >
One of the available [TTS models](/docs/models#tts): `tts-1`,
`tts-1-hd` or `gpt-4o-mini-tts`.
anyOf:
- type: string
- type: string
enum:
- tts-1
- tts-1-hd
- gpt-4o-mini-tts
x-oaiTypeLabel: string
input:
type: string
description: The text to generate audio for. The maximum length is 4096
characters.
maxLength: 4096
instructions:
type: string
description: Control the voice of your generated audio with additional
instructions. Does not work with `tts-1` or `tts-1-hd`.
maxLength: 4096
voice:
description: The voice to use when generating the audio. Supported voices are
`alloy`, `ash`, `ballad`, `coral`, `echo`, `fable`, `onyx`, `nova`,
`sage`, `shimmer`, and `verse`. Previews of the voices are available
in the [Text to speech
guide](/docs/guides/text-to-speech#voice-options).
$ref: "#/components/schemas/VoiceIdsShared"
response_format:
          description: The format in which to return the audio. Supported formats
            are `mp3`, `opus`, `aac`, `flac`, `wav`, and `pcm`.
default: mp3
type: string
enum:
- mp3
- opus
- aac
- flac
- wav
- pcm
speed:
description: The speed of the generated audio. Select a value from `0.25` to
`4.0`. `1.0` is the default.
type: number
default: 1
minimum: 0.25
maximum: 4
required:
- model
- input
- voice
CreateThreadAndRunRequest:
type: object
additionalProperties: false
properties:
assistant_id:
description: The ID of the [assistant](/docs/api-reference/assistants) to use to
execute this run.
type: string
thread:
$ref: "#/components/schemas/CreateThreadRequest"
model:
description: The ID of the [Model](/docs/api-reference/models) to be used to
execute this run. If a value is provided here, it will override the
model associated with the assistant. If not, the model associated
with the assistant will be used.
example: gpt-4o
anyOf:
- type: string
- type: string
enum:
- gpt-4.1
- gpt-4.1-mini
- gpt-4.1-nano
- gpt-4.1-2025-04-14
- gpt-4.1-mini-2025-04-14
- gpt-4.1-nano-2025-04-14
- gpt-4o
- gpt-4o-2024-11-20
- gpt-4o-2024-08-06
- gpt-4o-2024-05-13
- gpt-4o-mini
- gpt-4o-mini-2024-07-18
- gpt-4.5-preview
- gpt-4.5-preview-2025-02-27
- gpt-4-turbo
- gpt-4-turbo-2024-04-09
- gpt-4-0125-preview
- gpt-4-turbo-preview
- gpt-4-1106-preview
- gpt-4-vision-preview
- gpt-4
- gpt-4-0314
- gpt-4-0613
- gpt-4-32k
- gpt-4-32k-0314
- gpt-4-32k-0613
- gpt-3.5-turbo
- gpt-3.5-turbo-16k
- gpt-3.5-turbo-0613
- gpt-3.5-turbo-1106
- gpt-3.5-turbo-0125
- gpt-3.5-turbo-16k-0613
x-oaiTypeLabel: string
nullable: true
instructions:
description: Override the default system message of the assistant. This is
useful for modifying the behavior on a per-run basis.
type: string
nullable: true
tools:
description: Override the tools the assistant can use for this run. This is
useful for modifying the behavior on a per-run basis.
nullable: true
type: array
maxItems: 20
items:
oneOf:
- $ref: "#/components/schemas/AssistantToolsCode"
- $ref: "#/components/schemas/AssistantToolsFileSearch"
- $ref: "#/components/schemas/AssistantToolsFunction"
tool_resources:
type: object
description: >
A set of resources that are used by the assistant's tools. The
resources are specific to the type of tool. For example, the
`code_interpreter` tool requires a list of file IDs, while the
`file_search` tool requires a list of vector store IDs.
properties:
code_interpreter:
type: object
properties:
file_ids:
type: array
description: >
A list of [file](/docs/api-reference/files) IDs made
available to the `code_interpreter` tool. There can be a
maximum of 20 files associated with the tool.
default: []
maxItems: 20
items:
type: string
file_search:
type: object
properties:
vector_store_ids:
type: array
description: >
The ID of the [vector
store](/docs/api-reference/vector-stores/object) attached to
this assistant. There can be a maximum of 1 vector store
attached to the assistant.
maxItems: 1
items:
type: string
nullable: true
metadata:
$ref: "#/components/schemas/Metadata"
temperature:
type: number
minimum: 0
maximum: 2
default: 1
example: 1
nullable: true
description: >
What sampling temperature to use, between 0 and 2. Higher values
like 0.8 will make the output more random, while lower values like
0.2 will make it more focused and deterministic.
top_p:
type: number
minimum: 0
maximum: 1
default: 1
example: 1
nullable: true
description: >
An alternative to sampling with temperature, called nucleus
sampling, where the model considers the results of the tokens with
top_p probability mass. So 0.1 means only the tokens comprising the
top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
stream:
type: boolean
nullable: true
description: >
If `true`, returns a stream of events that happen during the Run as
server-sent events, terminating when the Run enters a terminal state
with a `data: [DONE]` message.
max_prompt_tokens:
type: integer
nullable: true
description: >
The maximum number of prompt tokens that may be used over the course
of the run. The run will make a best effort to use only the number
of prompt tokens specified, across multiple turns of the run. If the
run exceeds the number of prompt tokens specified, the run will end
with status `incomplete`. See `incomplete_details` for more info.
minimum: 256
max_completion_tokens:
type: integer
nullable: true
description: >
The maximum number of completion tokens that may be used over the
course of the run. The run will make a best effort to use only the
number of completion tokens specified, across multiple turns of the
run. If the run exceeds the number of completion tokens specified,
the run will end with status `incomplete`. See `incomplete_details`
for more info.
minimum: 256
truncation_strategy:
allOf:
- $ref: "#/components/schemas/TruncationObject"
- nullable: true
tool_choice:
allOf:
- $ref: "#/components/schemas/AssistantsApiToolChoiceOption"
- nullable: true
parallel_tool_calls:
$ref: "#/components/schemas/ParallelToolCalls"
response_format:
$ref: "#/components/schemas/AssistantsApiResponseFormatOption"
nullable: true
required:
- assistant_id
CreateThreadRequest:
type: object
description: |
Options to create a new thread. If no thread is provided when running a
request, an empty thread will be created.
additionalProperties: false
properties:
messages:
description: A list of [messages](/docs/api-reference/messages) to start the
thread with.
type: array
items:
$ref: "#/components/schemas/CreateMessageRequest"
tool_resources:
type: object
description: >
A set of resources that are made available to the assistant's tools
in this thread. The resources are specific to the type of tool. For
example, the `code_interpreter` tool requires a list of file IDs,
while the `file_search` tool requires a list of vector store IDs.
properties:
code_interpreter:
type: object
properties:
file_ids:
type: array
description: >
A list of [file](/docs/api-reference/files) IDs made
available to the `code_interpreter` tool. There can be a
maximum of 20 files associated with the tool.
default: []
maxItems: 20
items:
type: string
file_search:
type: object
properties:
vector_store_ids:
type: array
description: >
The [vector store](/docs/api-reference/vector-stores/object)
attached to this thread. There can be a maximum of 1 vector
store attached to the thread.
maxItems: 1
items:
type: string
vector_stores:
type: array
description: >
A helper to create a [vector
store](/docs/api-reference/vector-stores/object) with
file_ids and attach it to this thread. There can be a
maximum of 1 vector store attached to the thread.
maxItems: 1
items:
type: object
properties:
file_ids:
type: array
description: >
A list of [file](/docs/api-reference/files) IDs to add
to the vector store. There can be a maximum of 10000
files in a vector store.
maxItems: 10000
items:
type: string
chunking_strategy:
type: object
description: The chunking strategy used to chunk the file(s). If not set, will
use the `auto` strategy.
oneOf:
- type: object
title: Auto Chunking Strategy
description: The default strategy. This strategy currently uses a
`max_chunk_size_tokens` of `800` and
`chunk_overlap_tokens` of `400`.
additionalProperties: false
properties:
type:
type: string
description: Always `auto`.
enum:
- auto
x-stainless-const: true
required:
- type
- type: object
title: Static Chunking Strategy
additionalProperties: false
properties:
type:
type: string
description: Always `static`.
enum:
- static
x-stainless-const: true
static:
type: object
additionalProperties: false
properties:
max_chunk_size_tokens:
type: integer
minimum: 100
maximum: 4096
description: The maximum number of tokens in each chunk. The default value is
`800`. The minimum value is `100` and the
maximum value is `4096`.
chunk_overlap_tokens:
type: integer
description: >
The number of tokens that overlap between
chunks. The default value is `400`.
Note that the overlap must not exceed half
of `max_chunk_size_tokens`.
required:
- max_chunk_size_tokens
- chunk_overlap_tokens
required:
- type
- static
metadata:
$ref: "#/components/schemas/Metadata"
oneOf:
- required:
- vector_store_ids
- required:
- vector_stores
nullable: true
metadata:
$ref: "#/components/schemas/Metadata"
CreateTranscriptionRequest:
type: object
additionalProperties: false
properties:
file:
description: >
The audio file object (not file name) to transcribe, in one of these
formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
type: string
x-oaiTypeLabel: file
format: binary
model:
description: >
ID of the model to use. The options are `gpt-4o-transcribe`,
`gpt-4o-mini-transcribe`, and `whisper-1` (which is powered by our
open source Whisper V2 model).
example: gpt-4o-transcribe
anyOf:
- type: string
- type: string
enum:
- whisper-1
- gpt-4o-transcribe
- gpt-4o-mini-transcribe
x-oaiTypeLabel: string
language:
description: >
The language of the input audio. Supplying the input language in
[ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639-1_codes)
(e.g. `en`) format will improve accuracy and latency.
type: string
prompt:
description: >
An optional text to guide the model's style or continue a previous
audio segment. The [prompt](/docs/guides/speech-to-text#prompting)
should match the audio language.
type: string
response_format:
$ref: "#/components/schemas/AudioResponseFormat"
temperature:
description: >
The sampling temperature, between 0 and 1. Higher values like 0.8
will make the output more random, while lower values like 0.2 will
make it more focused and deterministic. If set to 0, the model will
use [log probability](https://en.wikipedia.org/wiki/Log_probability)
to automatically increase the temperature until certain thresholds
are hit.
type: number
default: 0
include[]:
description: >
Additional information to include in the transcription response.
`logprobs` will return the log probabilities of the tokens in the
response to understand the model's confidence in the transcription.
`logprobs` only works with response_format set to `json` and only
with
the models `gpt-4o-transcribe` and `gpt-4o-mini-transcribe`.
type: array
items:
$ref: "#/components/schemas/TranscriptionInclude"
timestamp_granularities[]:
description: >
The timestamp granularities to populate for this transcription.
        `response_format` must be set to `verbose_json` to use timestamp
granularities. Either or both of these options are supported:
`word`, or `segment`. Note: There is no additional latency for
segment timestamps, but generating word timestamps incurs additional
latency.
type: array
items:
type: string
enum:
- word
- segment
default:
- segment
stream:
description: |
If set to true, the model response data will be streamed to the client
as it is generated using [server-sent events](https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events#Event_stream_format).
See the [Streaming section of the Speech-to-Text guide](/docs/guides/speech-to-text?lang=curl#streaming-transcriptions)
for more information.
Note: Streaming is not supported for the `whisper-1` model and will be ignored.
type: boolean
nullable: true
default: false
required:
- file
- model
CreateTranscriptionResponseJson:
type: object
      description: Represents a transcription response returned by the model,
        based on the provided input.
properties:
text:
type: string
description: The transcribed text.
logprobs:
type: array
description: >
The log probabilities of the tokens in the transcription. Only
returned with the models `gpt-4o-transcribe` and
`gpt-4o-mini-transcribe` if `logprobs` is added to the `include`
array.
items:
type: object
properties:
token:
type: string
description: The token in the transcription.
logprob:
type: number
description: The log probability of the token.
bytes:
type: array
items:
type: number
description: The bytes of the token.
required:
- text
x-oaiMeta:
name: The transcription object (JSON)
group: audio
example: >
{
"text": "Imagine the wildest idea that you've ever had, and you're curious about how it might scale to something that's a 100, a 1,000 times bigger. This is a place where you can get to do that."
}
CreateTranscriptionResponseStreamEvent:
anyOf:
- $ref: "#/components/schemas/TranscriptTextDeltaEvent"
- $ref: "#/components/schemas/TranscriptTextDoneEvent"
discriminator:
propertyName: type
CreateTranscriptionResponseVerboseJson:
type: object
      description: Represents a verbose JSON transcription response returned by
        the model, based on the provided input.
properties:
language:
type: string
description: The language of the input audio.
duration:
type: number
description: The duration of the input audio.
text:
type: string
description: The transcribed text.
words:
type: array
description: Extracted words and their corresponding timestamps.
items:
$ref: "#/components/schemas/TranscriptionWord"
segments:
type: array
description: Segments of the transcribed text and their corresponding details.
items:
$ref: "#/components/schemas/TranscriptionSegment"
required:
- language
- duration
- text
x-oaiMeta:
name: The transcription object (Verbose JSON)
group: audio
example: >
{
"task": "transcribe",
"language": "english",
"duration": 8.470000267028809,
"text": "The beach was a popular spot on a hot summer day. People were swimming in the ocean, building sandcastles, and playing beach volleyball.",
"segments": [
{
"id": 0,
"seek": 0,
"start": 0.0,
"end": 3.319999933242798,
"text": " The beach was a popular spot on a hot summer day.",
"tokens": [
50364, 440, 7534, 390, 257, 3743, 4008, 322, 257, 2368, 4266, 786, 13, 50530
],
"temperature": 0.0,
"avg_logprob": -0.2860786020755768,
"compression_ratio": 1.2363636493682861,
"no_speech_prob": 0.00985979475080967
},
...
]
}
CreateTranslationRequest:
type: object
additionalProperties: false
properties:
file:
description: >
          The audio file object (not file name) to translate, in one of these
formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, or webm.
type: string
x-oaiTypeLabel: file
format: binary
model:
description: >
ID of the model to use. Only `whisper-1` (which is powered by our
open source Whisper V2 model) is currently available.
example: whisper-1
anyOf:
- type: string
- type: string
enum:
- whisper-1
x-stainless-const: true
x-oaiTypeLabel: string
prompt:
description: >
An optional text to guide the model's style or continue a previous
audio segment. The [prompt](/docs/guides/speech-to-text#prompting)
should be in English.
type: string
response_format:
description: >
The format of the output, in one of these options: `json`, `text`,
`srt`, `verbose_json`, or `vtt`.
type: string
enum:
- json
- text
- srt
- verbose_json
- vtt
default: json
temperature:
description: >
The sampling temperature, between 0 and 1. Higher values like 0.8
will make the output more random, while lower values like 0.2 will
make it more focused and deterministic. If set to 0, the model will
use [log probability](https://en.wikipedia.org/wiki/Log_probability)
to automatically increase the temperature until certain thresholds
are hit.
type: number
default: 0
required:
- file
- model
CreateTranslationResponseJson:
type: object
properties:
      text:
        type: string
        description: The translated text.
required:
- text
CreateTranslationResponseVerboseJson:
type: object
properties:
language:
type: string
description: The language of the output translation (always `english`).
duration:
type: number
description: The duration of the input audio.
text:
type: string
description: The translated text.
segments:
type: array
description: Segments of the translated text and their corresponding details.
items:
$ref: "#/components/schemas/TranscriptionSegment"
required:
- language
- duration
- text
CreateUploadRequest:
type: object
additionalProperties: false
properties:
filename:
description: |
The name of the file to upload.
type: string
purpose:
description: >
The intended purpose of the uploaded file.
See the [documentation on File
purposes](/docs/api-reference/files/create#files-create-purpose).
type: string
enum:
- assistants
- batch
- fine-tune
- vision
bytes:
description: |
The number of bytes in the file you are uploading.
type: integer
mime_type:
description: >
The MIME type of the file.
This must fall within the supported MIME types for your file
purpose. See the supported MIME types for assistants and vision.
type: string
required:
- filename
- purpose
- bytes
- mime_type
CreateVectorStoreFileBatchRequest:
type: object
additionalProperties: false
properties:
file_ids:
description: A list of [File](/docs/api-reference/files) IDs that the vector
store should use. Useful for tools like `file_search` that can
access files.
type: array
minItems: 1
maxItems: 500
items:
type: string
chunking_strategy:
$ref: "#/components/schemas/ChunkingStrategyRequestParam"
attributes:
$ref: "#/components/schemas/VectorStoreFileAttributes"
required:
- file_ids
CreateVectorStoreFileRequest:
type: object
additionalProperties: false
properties:
file_id:
description: A [File](/docs/api-reference/files) ID that the vector store should
use. Useful for tools like `file_search` that can access files.
type: string
chunking_strategy:
$ref: "#/components/schemas/ChunkingStrategyRequestParam"
attributes:
$ref: "#/components/schemas/VectorStoreFileAttributes"
required:
- file_id
CreateVectorStoreRequest:
type: object
additionalProperties: false
properties:
file_ids:
description: A list of [File](/docs/api-reference/files) IDs that the vector
store should use. Useful for tools like `file_search` that can
access files.
type: array
maxItems: 500
items:
type: string
name:
description: The name of the vector store.
type: string
expires_after:
$ref: "#/components/schemas/VectorStoreExpirationAfter"
chunking_strategy:
type: object
description: The chunking strategy used to chunk the file(s). If not set, will
use the `auto` strategy. Only applicable if `file_ids` is non-empty.
oneOf:
- $ref: "#/components/schemas/AutoChunkingStrategyRequestParam"
- $ref: "#/components/schemas/StaticChunkingStrategyRequestParam"
metadata:
$ref: "#/components/schemas/Metadata"
DeleteAssistantResponse:
type: object
properties:
id:
type: string
deleted:
type: boolean
object:
type: string
enum:
- assistant.deleted
x-stainless-const: true
required:
- id
- object
- deleted
DeleteCertificateResponse:
type: object
properties:
object:
type: string
description: The object type, must be `certificate.deleted`.
enum:
- certificate.deleted
x-stainless-const: true
id:
type: string
description: The ID of the certificate that was deleted.
required:
- object
- id
DeleteFileResponse:
type: object
properties:
id:
type: string
object:
type: string
enum:
- file
x-stainless-const: true
deleted:
type: boolean
required:
- id
- object
- deleted
DeleteFineTuningCheckpointPermissionResponse:
type: object
properties:
id:
type: string
description: The ID of the fine-tuned model checkpoint permission that was
deleted.
object:
type: string
description: The object type, which is always "checkpoint.permission".
enum:
- checkpoint.permission
x-stainless-const: true
deleted:
type: boolean
description: Whether the fine-tuned model checkpoint permission was successfully
deleted.
required:
- id
- object
- deleted
DeleteMessageResponse:
type: object
properties:
id:
type: string
deleted:
type: boolean
object:
type: string
enum:
- thread.message.deleted
x-stainless-const: true
required:
- id
- object
- deleted
DeleteModelResponse:
type: object
properties:
id:
type: string
deleted:
type: boolean
object:
type: string
required:
- id
- object
- deleted
DeleteThreadResponse:
type: object
properties:
id:
type: string
deleted:
type: boolean
object:
type: string
enum:
- thread.deleted
x-stainless-const: true
required:
- id
- object
- deleted
DeleteVectorStoreFileResponse:
type: object
properties:
id:
type: string
deleted:
type: boolean
object:
type: string
enum:
- vector_store.file.deleted
x-stainless-const: true
required:
- id
- object
- deleted
DeleteVectorStoreResponse:
type: object
properties:
id:
type: string
deleted:
type: boolean
object:
type: string
enum:
- vector_store.deleted
x-stainless-const: true
required:
- id
- object
- deleted
DoneEvent:
type: object
properties:
event:
type: string
enum:
- done
x-stainless-const: true
data:
type: string
enum:
- "[DONE]"
x-stainless-const: true
required:
- event
- data
description: Occurs when a stream ends.
x-oaiMeta:
dataDescription: "`data` is `[DONE]`"
DoubleClick:
type: object
title: DoubleClick
description: |
A double click action.
properties:
type:
type: string
enum:
- double_click
default: double_click
description: >
Specifies the event type. For a double click action, this property
is
always set to `double_click`.
x-stainless-const: true
x:
type: integer
description: |
The x-coordinate where the double click occurred.
y:
type: integer
description: |
The y-coordinate where the double click occurred.
required:
- type
- x
- y
Drag:
type: object
title: Drag
description: |
A drag action.
properties:
type:
type: string
enum:
- drag
default: drag
description: |
Specifies the event type. For a drag action, this property is
always set to `drag`.
x-stainless-const: true
path:
type: array
description: >
An array of coordinates representing the path of the drag action.
Coordinates will appear as an array
            of objects, e.g.
```
[
{ x: 100, y: 200 },
{ x: 200, y: 300 }
]
```
items:
title: Drag path coordinates
description: |
A series of x/y coordinate pairs in the drag path.
$ref: "#/components/schemas/Coordinate"
required:
- type
- path
EasyInputMessage:
type: object
title: Input message
description: >
A message input to the model with a role indicating instruction
following
hierarchy. Instructions given with the `developer` or `system` role take
precedence over instructions given with the `user` role. Messages with
the
`assistant` role are presumed to have been generated by the model in
previous
interactions.
properties:
role:
type: string
description: >
The role of the message input. One of `user`, `assistant`, `system`,
or
`developer`.
enum:
- user
- assistant
- system
- developer
content:
description: >
Text, image, or audio input to the model, used to generate a
response.
Can also contain previous assistant responses.
oneOf:
- type: string
title: Text input
description: |
A text input to the model.
- $ref: "#/components/schemas/InputMessageContentList"
type:
type: string
description: |
The type of the message input. Always `message`.
enum:
- message
x-stainless-const: true
required:
- role
- content
Embedding:
type: object
description: |
Represents an embedding vector returned by embedding endpoint.
properties:
index:
type: integer
description: The index of the embedding in the list of embeddings.
embedding:
type: array
description: >
The embedding vector, which is a list of floats. The length of
vector depends on the model as listed in the [embedding
guide](/docs/guides/embeddings).
items:
type: number
object:
type: string
description: The object type, which is always "embedding".
enum:
- embedding
x-stainless-const: true
required:
- index
- object
- embedding
x-oaiMeta:
name: The embedding object
example: |
{
"object": "embedding",
"embedding": [
0.0023064255,
-0.009327292,
.... (1536 floats total for ada-002)
-0.0028842222,
],
"index": 0
}
Error:
type: object
properties:
code:
type: string
nullable: true
message:
type: string
nullable: false
param:
type: string
nullable: true
type:
type: string
nullable: false
required:
- type
- message
- param
- code
ErrorEvent:
type: object
properties:
event:
type: string
enum:
- error
x-stainless-const: true
data:
$ref: "#/components/schemas/Error"
required:
- event
- data
description: Occurs when an [error](/docs/guides/error-codes#api-errors) occurs.
This can happen due to an internal server error or a timeout.
x-oaiMeta:
dataDescription: "`data` is an [error](/docs/guides/error-codes#api-errors)"
ErrorResponse:
type: object
properties:
error:
$ref: "#/components/schemas/Error"
required:
- error
Eval:
type: object
title: Eval
description: |
An Eval object with a data source config and testing criteria.
An Eval represents a task to be done for your LLM integration.
Like:
- Improve the quality of my chatbot
- See how well my chatbot handles customer support
        - Check if o3-mini is better at my use case than gpt-4o
properties:
object:
type: string
enum:
- eval
default: eval
description: The object type.
x-stainless-const: true
id:
type: string
description: Unique identifier for the evaluation.
name:
type: string
description: The name of the evaluation.
example: Chatbot effectiveness Evaluation
data_source_config:
type: object
description: Configuration of data sources used in runs of the evaluation.
oneOf:
- $ref: "#/components/schemas/EvalCustomDataSourceConfig"
- $ref: "#/components/schemas/EvalStoredCompletionsDataSourceConfig"
testing_criteria:
description: A list of testing criteria.
type: array
items:
oneOf:
- $ref: "#/components/schemas/EvalLabelModelGrader"
- $ref: "#/components/schemas/EvalStringCheckGrader"
- $ref: "#/components/schemas/EvalTextSimilarityGrader"
- $ref: "#/components/schemas/EvalPythonGrader"
- $ref: "#/components/schemas/EvalScoreModelGrader"
created_at:
type: integer
description: The Unix timestamp (in seconds) for when the eval was created.
metadata:
$ref: "#/components/schemas/Metadata"
required:
- id
- data_source_config
- object
- testing_criteria
- name
- created_at
- metadata
x-oaiMeta:
name: The eval object
group: evals
example: |
{
"object": "eval",
"id": "eval_67abd54d9b0081909a86353f6fb9317a",
"data_source_config": {
"type": "custom",
"item_schema": {
"type": "object",
"properties": {
                  "label": {"type": "string"}
                },
"required": ["label"]
},
"include_sample_schema": true
},
"testing_criteria": [
{
"name": "My string check grader",
"type": "string_check",
"input": "{{sample.output_text}}",
"reference": "{{item.label}}",
                "operation": "eq"
              }
],
"name": "External Data Eval",
"created_at": 1739314509,
"metadata": {
              "test": "synthetics"
            }
}
EvalApiError:
type: object
title: EvalApiError
description: |
An object representing an error response from the Eval API.
properties:
code:
type: string
description: The error code.
message:
type: string
description: The error message.
required:
- code
- message
x-oaiMeta:
name: The API error object
group: evals
example: |
{
"code": "internal_error",
"message": "The eval run failed due to an internal error."
}
EvalCustomDataSourceConfig:
type: object
title: CustomDataSourceConfig
description: >
A CustomDataSourceConfig which specifies the schema of your `item` and
optionally `sample` namespaces.
        The response schema defines the shape of the data that will be:
        - used to define your testing criteria, and
        - required when creating a run.
properties:
type:
type: string
enum:
- custom
default: custom
description: The type of data source. Always `custom`.
x-stainless-const: true
schema:
type: object
description: |
The json schema for the run data source items.
Learn how to build JSON schemas [here](https://json-schema.org/).
additionalProperties: true
example: |
{
"type": "object",
"properties": {
"item": {
"type": "object",
"properties": {
                    "label": {"type": "string"}
                  },
"required": ["label"]
}
},
"required": ["item"]
}
required:
- type
- schema
x-oaiMeta:
name: The eval custom data source config object
group: evals
example: |
{
"type": "custom",
"schema": {
"type": "object",
"properties": {
"item": {
"type": "object",
"properties": {
                    "label": {"type": "string"}
                  },
"required": ["label"]
}
},
"required": ["item"]
}
}
EvalItem:
type: object
title: Eval message object
description: >
A message input to the model with a role indicating instruction
following
hierarchy. Instructions given with the `developer` or `system` role take
precedence over instructions given with the `user` role. Messages with
the
`assistant` role are presumed to have been generated by the model in
previous
interactions.
properties:
role:
type: string
description: >
The role of the message input. One of `user`, `assistant`, `system`,
or
`developer`.
enum:
- user
- assistant
- system
- developer
content:
description: |
Text inputs to the model - can contain template strings.
oneOf:
- type: string
title: Text input
description: |
A text input to the model.
- $ref: "#/components/schemas/InputTextContent"
- type: object
title: Output text
description: |
A text output from the model.
properties:
type:
type: string
description: |
The type of the output text. Always `output_text`.
enum:
- output_text
x-stainless-const: true
text:
type: string
description: |
The text output from the model.
required:
- type
- text
type:
type: string
description: |
The type of the message input. Always `message`.
enum:
- message
x-stainless-const: true
required:
- role
- content
EvalJsonlFileContentSource:
type: object
title: EvalJsonlFileContentSource
properties:
type:
type: string
enum:
- file_content
default: file_content
description: The type of jsonl source. Always `file_content`.
x-stainless-const: true
content:
type: array
items:
type: object
properties:
item:
type: object
additionalProperties: true
sample:
type: object
additionalProperties: true
required:
- item
description: The content of the jsonl file.
required:
- type
- content
EvalJsonlFileIdSource:
type: object
title: EvalJsonlFileIdSource
properties:
type:
type: string
enum:
- file_id
default: file_id
description: The type of jsonl source. Always `file_id`.
x-stainless-const: true
id:
type: string
description: The identifier of the file.
required:
- type
- id
EvalLabelModelGrader:
type: object
title: LabelModelGrader
description: >
A LabelModelGrader object which uses a model to assign labels to each
item
in the evaluation.
properties:
type:
description: The object type, which is always `label_model`.
type: string
enum:
- label_model
x-stainless-const: true
name:
type: string
description: The name of the grader.
model:
type: string
description: The model to use for the evaluation. Must support structured outputs.
input:
type: array
items:
$ref: "#/components/schemas/EvalItem"
labels:
type: array
items:
type: string
description: The labels to assign to each item in the evaluation.
passing_labels:
type: array
items:
type: string
description: The labels that indicate a passing result. Must be a subset of
labels.
required:
- type
- model
- input
- passing_labels
- labels
- name
x-oaiMeta:
name: The eval label model grader object
group: evals
example: >
{
"name": "First label grader",
"type": "label_model",
"model": "gpt-4o-2024-08-06",
"input": [
{
"type": "message",
"role": "system",
"content": {
"type": "input_text",
"text": "Classify the sentiment of the following statement as one of positive, neutral, or negative"
}
},
{
"type": "message",
"role": "user",
"content": {
"type": "input_text",
"text": "Statement: {{item.response}}"
}
}
],
"passing_labels": [
"positive"
],
"labels": [
"positive",
"neutral",
"negative"
]
}
EvalList:
type: object
title: EvalList
description: |
An object representing a list of evals.
properties:
object:
type: string
enum:
- list
default: list
description: |
The type of this object. It is always set to "list".
x-stainless-const: true
data:
type: array
description: |
An array of eval objects.
items:
$ref: "#/components/schemas/Eval"
first_id:
type: string
description: The identifier of the first eval in the data array.
last_id:
type: string
description: The identifier of the last eval in the data array.
has_more:
type: boolean
description: Indicates whether there are more evals available.
required:
- object
- data
- first_id
- last_id
- has_more
x-oaiMeta:
name: The eval list object
group: evals
example: |
{
"object": "list",
"data": [
{
"object": "eval",
"id": "eval_67abd54d9b0081909a86353f6fb9317a",
"data_source_config": {
"type": "custom",
"schema": {
"type": "object",
"properties": {
"item": {
"type": "object",
"properties": {
"input": {
"type": "string"
},
"ground_truth": {
"type": "string"
}
},
"required": [
"input",
"ground_truth"
]
}
},
"required": [
"item"
]
}
},
"testing_criteria": [
{
"name": "String check",
"id": "String check-2eaf2d8d-d649-4335-8148-9535a7ca73c2",
"type": "string_check",
"input": "{{item.input}}",
"reference": "{{item.ground_truth}}",
"operation": "eq"
}
],
"name": "External Data Eval",
"created_at": 1739314509,
"metadata": {},
}
],
"first_id": "eval_67abd54d9b0081909a86353f6fb9317a",
"last_id": "eval_67abd54d9b0081909a86353f6fb9317a",
"has_more": true
}
EvalPythonGrader:
type: object
title: PythonGrader
description: |
A PythonGrader object that runs a python script on the input.
properties:
type:
type: string
enum:
- python
description: The object type, which is always `python`.
x-stainless-const: true
name:
type: string
description: The name of the grader.
source:
type: string
description: The source code of the python script.
pass_threshold:
type: number
description: The threshold for the score.
image_tag:
type: string
description: The image tag to use for the python script.
required:
- type
- name
- source
x-oaiMeta:
name: The eval python grader object
group: evals
example: |
          {
            "type": "python",
            "name": "Example python grader",
            "source": "def grade(sample, item):\n    return 1.0 if sample[\"output_text\"] == item[\"label\"] else 0.0",
            "pass_threshold": 1.0
          }
EvalResponsesSource:
type: object
title: EvalResponsesSource
description: |
        An EvalResponsesSource object describing a run data source configuration.
properties:
type:
type: string
enum:
- responses
description: The type of run data source. Always `responses`.
metadata:
type: object
nullable: true
description: Metadata filter for the responses. This is a query parameter used
to select responses.
model:
type: string
nullable: true
description: The name of the model to find responses for. This is a query
parameter used to select responses.
instructions_search:
type: string
nullable: true
description: Optional search string for instructions. This is a query parameter
used to select responses.
created_after:
type: integer
minimum: 0
nullable: true
description: Only include items created after this timestamp (inclusive). This
is a query parameter used to select responses.
created_before:
type: integer
minimum: 0
nullable: true
description: Only include items created before this timestamp (inclusive). This
is a query parameter used to select responses.
has_tool_calls:
type: boolean
nullable: true
description: Whether the response has tool calls. This is a query parameter used
to select responses.
reasoning_effort:
$ref: "#/components/schemas/ReasoningEffort"
nullable: true
description: Optional reasoning effort parameter. This is a query parameter used
to select responses.
temperature:
type: number
nullable: true
description: Sampling temperature. This is a query parameter used to select
responses.
top_p:
type: number
nullable: true
description: Nucleus sampling parameter. This is a query parameter used to
select responses.
users:
type: array
items:
type: string
nullable: true
description: List of user identifiers. This is a query parameter used to select
responses.
allow_parallel_tool_calls:
type: boolean
nullable: true
description: Whether to allow parallel tool calls. This is a query parameter
used to select responses.
required:
- type
x-oaiMeta:
name: The run data source object used to configure an individual run
group: eval runs
example: |
{
"type": "responses",
"model": "gpt-4o-mini-2024-07-18",
"temperature": 0.7,
"top_p": 1.0,
"users": ["user1", "user2"],
"allow_parallel_tool_calls": true
}
EvalRun:
type: object
title: EvalRun
description: |
A schema representing an evaluation run.
properties:
object:
type: string
enum:
- eval.run
default: eval.run
description: The type of the object. Always "eval.run".
x-stainless-const: true
id:
type: string
description: Unique identifier for the evaluation run.
eval_id:
type: string
description: The identifier of the associated evaluation.
status:
type: string
description: The status of the evaluation run.
model:
type: string
description: The model that is evaluated, if applicable.
name:
type: string
description: The name of the evaluation run.
created_at:
type: integer
description: Unix timestamp (in seconds) when the evaluation run was created.
report_url:
type: string
description: The URL to the rendered evaluation run report on the UI dashboard.
result_counts:
type: object
description: Counters summarizing the outcomes of the evaluation run.
properties:
total:
type: integer
description: Total number of executed output items.
errored:
type: integer
description: Number of output items that resulted in an error.
failed:
type: integer
description: Number of output items that failed to pass the evaluation.
passed:
type: integer
description: Number of output items that passed the evaluation.
required:
- total
- errored
- failed
- passed
per_model_usage:
type: array
description: Usage statistics for each model during the evaluation run.
items:
type: object
properties:
model_name:
type: string
description: The name of the model.
invocation_count:
type: integer
description: The number of invocations.
prompt_tokens:
type: integer
description: The number of prompt tokens used.
completion_tokens:
type: integer
description: The number of completion tokens generated.
total_tokens:
type: integer
description: The total number of tokens used.
cached_tokens:
type: integer
description: The number of tokens retrieved from cache.
required:
- model_name
- invocation_count
- prompt_tokens
- completion_tokens
- total_tokens
- cached_tokens
per_testing_criteria_results:
type: array
description: Results per testing criteria applied during the evaluation run.
items:
type: object
properties:
testing_criteria:
type: string
description: A description of the testing criteria.
              passed:
                type: integer
                description: Number of tests passed for this criterion.
              failed:
                type: integer
                description: Number of tests failed for this criterion.
required:
- testing_criteria
- passed
- failed
data_source:
type: object
description: Information about the run's data source.
oneOf:
- $ref: "#/components/schemas/CreateEvalJsonlRunDataSource"
- $ref: "#/components/schemas/CreateEvalCompletionsRunDataSource"
- $ref: "#/components/schemas/CreateEvalResponsesRunDataSource"
metadata:
$ref: "#/components/schemas/Metadata"
error:
$ref: "#/components/schemas/EvalApiError"
required:
- object
- id
- eval_id
- status
- model
- name
- created_at
- report_url
- result_counts
- per_model_usage
- per_testing_criteria_results
- data_source
- metadata
- error
x-oaiMeta:
name: The eval run object
group: evals
example: >
{
"object": "eval.run",
"id": "evalrun_67e57965b480819094274e3a32235e4c",
"eval_id": "eval_67e579652b548190aaa83ada4b125f47",
"report_url": "https://platform.openai.com/evaluations/eval_67e579652b548190aaa83ada4b125f47?run_id=evalrun_67e57965b480819094274e3a32235e4c",
"status": "queued",
"model": "gpt-4o-mini",
"name": "gpt-4o-mini",
"created_at": 1743092069,
"result_counts": {
"total": 0,
"errored": 0,
"failed": 0,
"passed": 0
},
"per_model_usage": null,
"per_testing_criteria_results": null,
"data_source": {
"type": "completions",
"source": {
"type": "file_content",
"content": [
{
"item": {
"input": "Tech Company Launches Advanced Artificial Intelligence Platform",
"ground_truth": "Technology"
}
},
{
"item": {
"input": "Central Bank Increases Interest Rates Amid Inflation Concerns",
"ground_truth": "Markets"
}
},
{
"item": {
"input": "International Summit Addresses Climate Change Strategies",
"ground_truth": "World"
}
},
{
"item": {
"input": "Major Retailer Reports Record-Breaking Holiday Sales",
"ground_truth": "Business"
}
},
{
"item": {
"input": "National Team Qualifies for World Championship Finals",
"ground_truth": "Sports"
}
},
{
"item": {
"input": "Stock Markets Rally After Positive Economic Data Released",
"ground_truth": "Markets"
}
},
{
"item": {
"input": "Global Manufacturer Announces Merger with Competitor",
"ground_truth": "Business"
}
},
{
"item": {
"input": "Breakthrough in Renewable Energy Technology Unveiled",
"ground_truth": "Technology"
}
},
{
"item": {
"input": "World Leaders Sign Historic Climate Agreement",
"ground_truth": "World"
}
},
{
"item": {
"input": "Professional Athlete Sets New Record in Championship Event",
"ground_truth": "Sports"
}
},
{
"item": {
"input": "Financial Institutions Adapt to New Regulatory Requirements",
"ground_truth": "Business"
}
},
{
"item": {
"input": "Tech Conference Showcases Advances in Artificial Intelligence",
"ground_truth": "Technology"
}
},
{
"item": {
"input": "Global Markets Respond to Oil Price Fluctuations",
"ground_truth": "Markets"
}
},
{
"item": {
"input": "International Cooperation Strengthened Through New Treaty",
"ground_truth": "World"
}
},
{
"item": {
"input": "Sports League Announces Revised Schedule for Upcoming Season",
"ground_truth": "Sports"
}
}
]
},
"input_messages": {
"type": "template",
"template": [
{
"type": "message",
"role": "developer",
"content": {
"type": "input_text",
"text": "Categorize a given news headline into one of the following topics: Technology, Markets, World, Business, or Sports.\n\n# Steps\n\n1. Analyze the content of the news headline to understand its primary focus.\n2. Extract the subject matter, identifying any key indicators or keywords.\n3. Use the identified indicators to determine the most suitable category out of the five options: Technology, Markets, World, Business, or Sports.\n4. Ensure only one category is selected per headline.\n\n# Output Format\n\nRespond with the chosen category as a single word. For instance: \"Technology\", \"Markets\", \"World\", \"Business\", or \"Sports\".\n\n# Examples\n\n**Input**: \"Apple Unveils New iPhone Model, Featuring Advanced AI Features\" \n**Output**: \"Technology\"\n\n**Input**: \"Global Stocks Mixed as Investors Await Central Bank Decisions\" \n**Output**: \"Markets\"\n\n**Input**: \"War in Ukraine: Latest Updates on Negotiation Status\" \n**Output**: \"World\"\n\n**Input**: \"Microsoft in Talks to Acquire Gaming Company for $2 Billion\" \n**Output**: \"Business\"\n\n**Input**: \"Manchester United Secures Win in Premier League Football Match\" \n**Output**: \"Sports\" \n\n# Notes\n\n- If the headline appears to fit into more than one category, choose the most dominant theme.\n- Keywords or phrases such as \"stocks\", \"company acquisition\", \"match\", or technological brands can be good indicators for classification.\n"
}
},
{
"type": "message",
"role": "user",
"content": {
"type": "input_text",
"text": "{{item.input}}"
}
}
]
},
"model": "gpt-4o-mini",
"sampling_params": {
"seed": 42,
"temperature": 1.0,
"top_p": 1.0,
"max_completions_tokens": 2048
}
},
"error": null,
"metadata": {}
}
EvalRunList:
type: object
title: EvalRunList
description: |
An object representing a list of runs for an evaluation.
properties:
object:
type: string
enum:
- list
default: list
description: |
The type of this object. It is always set to "list".
x-stainless-const: true
data:
type: array
description: |
An array of eval run objects.
items:
$ref: "#/components/schemas/EvalRun"
first_id:
type: string
description: The identifier of the first eval run in the data array.
last_id:
type: string
description: The identifier of the last eval run in the data array.
has_more:
type: boolean
        description: Indicates whether there are more eval runs available.
required:
- object
- data
- first_id
- last_id
- has_more
x-oaiMeta:
name: The eval run list object
group: evals
example: >
{
"object": "list",
"data": [
{
"object": "eval.run",
"id": "evalrun_67b7fbdad46c819092f6fe7a14189620",
"eval_id": "eval_67b7fa9a81a88190ab4aa417e397ea21",
"report_url": "https://platform.openai.com/evaluations/eval_67b7fa9a81a88190ab4aa417e397ea21?run_id=evalrun_67b7fbdad46c819092f6fe7a14189620",
"status": "completed",
"model": "o3-mini",
"name": "Academic Assistant",
"created_at": 1740110812,
"result_counts": {
"total": 171,
"errored": 0,
"failed": 80,
"passed": 91
},
"per_model_usage": null,
"per_testing_criteria_results": [
{
"testing_criteria": "String check grader",
"passed": 91,
"failed": 80
}
],
"run_data_source": {
"type": "completions",
"template_messages": [
{
"type": "message",
"role": "system",
"content": {
"type": "input_text",
"text": "You are a helpful assistant."
}
},
{
"type": "message",
"role": "user",
"content": {
"type": "input_text",
"text": "Hello, can you help me with my homework?"
}
}
],
"datasource_reference": null,
"model": "o3-mini",
"max_completion_tokens": null,
"seed": null,
"temperature": null,
"top_p": null
},
"error": null,
"metadata": {"test": "synthetics"}
}
],
"first_id": "evalrun_67abd54d60ec8190832b46859da808f7",
"last_id": "evalrun_67abd54d60ec8190832b46859da808f7",
"has_more": false
}
EvalRunOutputItem:
type: object
title: EvalRunOutputItem
description: |
A schema representing an evaluation run output item.
properties:
object:
type: string
enum:
- eval.run.output_item
default: eval.run.output_item
description: The type of the object. Always "eval.run.output_item".
x-stainless-const: true
id:
type: string
description: Unique identifier for the evaluation run output item.
run_id:
type: string
description: The identifier of the evaluation run associated with this output
item.
eval_id:
type: string
description: The identifier of the evaluation group.
created_at:
type: integer
description: Unix timestamp (in seconds) when the evaluation run was created.
status:
type: string
description: The status of the evaluation run.
datasource_item_id:
type: integer
description: The identifier for the data source item.
datasource_item:
type: object
description: Details of the input data source item.
additionalProperties: true
results:
type: array
description: A list of results from the evaluation run.
items:
type: object
description: A result object.
additionalProperties: true
sample:
type: object
description: A sample containing the input and output of the evaluation run.
properties:
input:
type: array
description: An array of input messages.
items:
type: object
description: An input message.
properties:
role:
type: string
description: The role of the message sender (e.g., system, user, developer).
content:
type: string
description: The content of the message.
required:
- role
- content
output:
type: array
description: An array of output messages.
items:
type: object
properties:
role:
type: string
description: The role of the message (e.g. "system", "assistant", "user").
content:
type: string
description: The content of the message.
finish_reason:
type: string
description: The reason why the sample generation was finished.
model:
type: string
description: The model used for generating the sample.
usage:
type: object
description: Token usage details for the sample.
properties:
total_tokens:
type: integer
description: The total number of tokens used.
completion_tokens:
type: integer
description: The number of completion tokens generated.
prompt_tokens:
type: integer
description: The number of prompt tokens used.
cached_tokens:
type: integer
description: The number of tokens retrieved from cache.
required:
- total_tokens
- completion_tokens
- prompt_tokens
- cached_tokens
error:
$ref: "#/components/schemas/EvalApiError"
temperature:
type: number
description: The sampling temperature used.
max_completion_tokens:
type: integer
description: The maximum number of tokens allowed for completion.
top_p:
type: number
description: The top_p value used for sampling.
seed:
type: integer
description: The seed used for generating the sample.
required:
- input
- output
- finish_reason
- model
- usage
- error
- temperature
- max_completion_tokens
- top_p
- seed
required:
- object
- id
- run_id
- eval_id
- created_at
- status
- datasource_item_id
- datasource_item
- results
- sample
x-oaiMeta:
name: The eval run output item object
group: evals
example: >
{
"object": "eval.run.output_item",
"id": "outputitem_67abd55eb6548190bb580745d5644a33",
"run_id": "evalrun_67abd54d60ec8190832b46859da808f7",
"eval_id": "eval_67abd54d9b0081909a86353f6fb9317a",
"created_at": 1739314509,
"status": "pass",
"datasource_item_id": 137,
"datasource_item": {
"teacher": "To grade essays, I only check for style, content, and grammar.",
"student": "I am a student who is trying to write the best essay."
},
"results": [
{
"name": "String Check Grader",
"type": "string-check-grader",
"score": 1.0,
"passed": true,
}
],
"sample": {
"input": [
{
"role": "system",
"content": "You are an evaluator bot..."
},
{
"role": "user",
"content": "You are assessing..."
}
],
"output": [
{
"role": "assistant",
"content": "The rubric is not clear nor concise."
}
],
"finish_reason": "stop",
"model": "gpt-4o-2024-08-06",
"usage": {
"total_tokens": 521,
"completion_tokens": 2,
"prompt_tokens": 519,
"cached_tokens": 0
},
"error": null,
"temperature": 1.0,
"max_completion_tokens": 2048,
"top_p": 1.0,
"seed": 42
}
}
EvalRunOutputItemList:
type: object
title: EvalRunOutputItemList
description: |
An object representing a list of output items for an evaluation run.
properties:
object:
type: string
enum:
- list
default: list
description: |
The type of this object. It is always set to "list".
x-stainless-const: true
data:
type: array
description: |
An array of eval run output item objects.
items:
$ref: "#/components/schemas/EvalRunOutputItem"
first_id:
type: string
description: The identifier of the first eval run output item in the data array.
last_id:
type: string
description: The identifier of the last eval run output item in the data array.
has_more:
type: boolean
description: Indicates whether there are more eval run output items available.
required:
- object
- data
- first_id
- last_id
- has_more
x-oaiMeta:
name: The eval run output item list object
group: evals
example: >
{
"object": "list",
"data": [
{
"object": "eval.run.output_item",
"id": "outputitem_67abd55eb6548190bb580745d5644a33",
"run_id": "evalrun_67abd54d60ec8190832b46859da808f7",
"eval_id": "eval_67abd54d9b0081909a86353f6fb9317a",
"created_at": 1739314509,
"status": "pass",
"datasource_item_id": 137,
"datasource_item": {
"teacher": "To grade essays, I only check for style, content, and grammar.",
"student": "I am a student who is trying to write the best essay."
},
"results": [
{
"name": "String Check Grader",
"type": "string-check-grader",
"score": 1.0,
"passed": true,
}
],
"sample": {
"input": [
{
"role": "system",
"content": "You are an evaluator bot..."
},
{
"role": "user",
"content": "You are assessing..."
}
],
"output": [
{
"role": "assistant",
"content": "The rubric is not clear nor concise."
}
],
"finish_reason": "stop",
"model": "gpt-4o-2024-08-06",
"usage": {
"total_tokens": 521,
"completion_tokens": 2,
"prompt_tokens": 519,
"cached_tokens": 0
},
"error": null,
"temperature": 1.0,
"max_completion_tokens": 2048,
"top_p": 1.0,
"seed": 42
}
            }
],
"first_id": "outputitem_67abd55eb6548190bb580745d5644a33",
"last_id": "outputitem_67abd55eb6548190bb580745d5644a33",
"has_more": false
}
EvalScoreModelGrader:
type: object
title: ScoreModelGrader
description: >
A ScoreModelGrader object that uses a model to assign a score to the
input.
properties:
type:
type: string
enum:
- score_model
description: The object type, which is always `score_model`.
x-stainless-const: true
name:
type: string
description: The name of the grader.
model:
type: string
description: The model to use for the evaluation.
sampling_params:
type: object
description: The sampling parameters for the model.
input:
type: array
items:
$ref: "#/components/schemas/EvalItem"
          description: The input messages to the grader. May include template strings.
pass_threshold:
type: number
description: The threshold for the score.
range:
type: array
items:
type: number
          minItems: 2
          maxItems: 2
description: The range of the score. Defaults to `[0, 1]`.
required:
- type
- name
- input
- model
x-oaiMeta:
name: The eval score model grader object
group: evals
example: |
          {
            "type": "score_model",
            "name": "Example score model grader",
            "model": "gpt-4o-2024-08-06",
            "input": [
              {
                "role": "user",
                "content": "Score from 0.0 to 1.0 how well the response \"{{sample.output_text}}\" matches the reference \"{{item.label}}\"."
              }
            ],
            "pass_threshold": 0.5,
            "range": [0, 1]
          }
EvalStoredCompletionsDataSourceConfig:
type: object
title: StoredCompletionsDataSourceConfig
description: >
A StoredCompletionsDataSourceConfig which specifies the metadata
property of your stored completions query.
This is usually metadata like `usecase=chatbot` or `prompt-version=v2`,
etc.
      The schema returned by this data source config is used to define what
variables are available in your evals.
`item` and `sample` are both defined when using this data source config.
properties:
type:
type: string
enum:
- stored_completions
default: stored_completions
description: The type of data source. Always `stored_completions`.
x-stainless-const: true
metadata:
$ref: "#/components/schemas/Metadata"
schema:
type: object
description: |
          The JSON schema for the run data source items.
Learn how to build JSON schemas [here](https://json-schema.org/).
additionalProperties: true
required:
- type
- schema
x-oaiMeta:
name: The stored completions data source object for evals
group: evals
example: |
{
"type": "stored_completions",
"metadata": {
"language": "english"
},
"schema": {
"type": "object",
"properties": {
"item": {
"type": "object"
},
"sample": {
"type": "object"
}
},
"required": [
"item",
"sample"
            ]
          }
        }
EvalStoredCompletionsSource:
type: object
title: StoredCompletionsRunDataSource
description: >
A StoredCompletionsRunDataSource configuration describing a set of
      filters.
properties:
type:
type: string
enum:
- stored_completions
default: stored_completions
description: The type of source. Always `stored_completions`.
x-stainless-const: true
metadata:
$ref: "#/components/schemas/Metadata"
model:
type: string
nullable: true
description: An optional model to filter by (e.g., 'gpt-4o').
created_after:
type: integer
nullable: true
description: An optional Unix timestamp to filter items created after this time.
created_before:
type: integer
nullable: true
description: An optional Unix timestamp to filter items created before this time.
limit:
type: integer
nullable: true
description: An optional maximum number of items to return.
required:
- type
x-oaiMeta:
name: The stored completions data source object used to configure an individual
run
group: eval runs
example: |
{
"type": "stored_completions",
"model": "gpt-4o",
"created_after": 1668124800,
"created_before": 1668124900,
"limit": 100,
"metadata": {}
}
EvalStringCheckGrader:
type: object
title: StringCheckGrader
description: >
A StringCheckGrader object that performs a string comparison between
input and reference using a specified operation.
properties:
type:
type: string
enum:
- string_check
description: The object type, which is always `string_check`.
x-stainless-const: true
name:
type: string
description: The name of the grader.
input:
type: string
description: The input text. This may include template strings.
reference:
type: string
description: The reference text. This may include template strings.
operation:
type: string
enum:
- eq
- ne
- like
- ilike
description: The string check operation to perform. One of `eq`, `ne`, `like`,
or `ilike`.
required:
- type
- name
- input
- reference
- operation
x-oaiMeta:
name: The eval string check grader object
group: evals
example: |
{
"type": "string_check",
"name": "Example string check grader",
"input": "{{sample.output_text}}",
"reference": "{{item.label}}",
"operation": "eq"
}
EvalTextSimilarityGrader:
type: object
title: TextSimilarityGrader
description: >
A TextSimilarityGrader object which grades text based on similarity
metrics.
properties:
type:
type: string
enum:
- text_similarity
default: text_similarity
description: The type of grader.
x-stainless-const: true
name:
type: string
description: The name of the grader.
input:
type: string
description: The text being graded.
reference:
type: string
description: The text being graded against.
pass_threshold:
type: number
        description: A float score threshold. A score greater than or equal to
          this value indicates a passing grade.
evaluation_metric:
type: string
enum:
- fuzzy_match
- bleu
- gleu
- meteor
- rouge_1
- rouge_2
- rouge_3
- rouge_4
- rouge_5
- rouge_l
description: The evaluation metric to use. One of `fuzzy_match`, `bleu`, `gleu`,
`meteor`, `rouge_1`, `rouge_2`, `rouge_3`, `rouge_4`, `rouge_5`, or
`rouge_l`.
required:
- type
- input
- reference
- pass_threshold
- evaluation_metric
x-oaiMeta:
name: The eval text similarity grader object
group: evals
example: |
{
"type": "text_similarity",
"name": "example text similarity grader",
"input": "The graded text",
"reference": "The reference text",
"pass_threshold": 0.8,
"evaluation_metric": "fuzzy_match"
}
FilePath:
type: object
title: File path
description: |
A path to a file.
properties:
type:
type: string
description: |
The type of the file path. Always `file_path`.
enum:
- file_path
x-stainless-const: true
file_id:
type: string
description: |
The ID of the file.
index:
type: integer
description: |
The index of the file in the list of files.
required:
- type
- file_id
- index
FileSearchRanker:
type: string
      description: The ranker to use for the file search. If not specified, the
        `auto` ranker is used.
enum:
- auto
- default_2024_08_21
FileSearchRankingOptions:
title: File search tool call ranking options
type: object
description: |
        The ranking options for the file search. If not specified, the file search tool will use the `auto` ranker and a `score_threshold` of 0.
See the [file search tool documentation](/docs/assistants/tools/file-search#customizing-file-search-settings) for more information.
properties:
ranker:
$ref: "#/components/schemas/FileSearchRanker"
score_threshold:
type: number
description: The score threshold for the file search. All values must be a
floating point number between 0 and 1.
minimum: 0
maximum: 1
required:
- score_threshold
FileSearchToolCall:
type: object
title: File search tool call
description: >
The results of a file search tool call. See the
[file search guide](/docs/guides/tools-file-search) for more
information.
properties:
id:
type: string
description: |
The unique ID of the file search tool call.
type:
type: string
enum:
- file_search_call
description: |
The type of the file search tool call. Always `file_search_call`.
x-stainless-const: true
status:
type: string
          description: |
            The status of the file search tool call. One of `in_progress`,
            `searching`, `completed`, `incomplete`, or `failed`.
enum:
- in_progress
- searching
- completed
- incomplete
- failed
queries:
type: array
items:
type: string
description: |
The queries used to search for files.
results:
type: array
description: |
The results of the file search tool call.
items:
type: object
properties:
file_id:
type: string
description: |
The unique ID of the file.
text:
type: string
description: |
The text that was retrieved from the file.
filename:
type: string
description: |
The name of the file.
attributes:
$ref: "#/components/schemas/VectorStoreFileAttributes"
score:
type: number
format: float
description: |
The relevance score of the file - a value between 0 and 1.
nullable: true
required:
- id
- type
- status
- queries
FineTuneChatCompletionRequestAssistantMessage:
allOf:
- type: object
title: Assistant message
deprecated: false
properties:
weight:
type: integer
enum:
- 0
- 1
description: Controls whether the assistant message is trained against (0 or 1)
- $ref: "#/components/schemas/ChatCompletionRequestAssistantMessage"
required:
- role
FineTuneChatRequestInput:
type: object
description: The per-line training example of a fine-tuning input file for chat
models using the supervised method.
properties:
messages:
type: array
minItems: 1
items:
oneOf:
- $ref: "#/components/schemas/ChatCompletionRequestSystemMessage"
- $ref: "#/components/schemas/ChatCompletionRequestUserMessage"
- $ref: "#/components/schemas/FineTuneChatCompletionRequestAssistantMessage"
- $ref: "#/components/schemas/ChatCompletionRequestToolMessage"
- $ref: "#/components/schemas/ChatCompletionRequestFunctionMessage"
tools:
type: array
description: A list of tools the model may generate JSON inputs for.
items:
$ref: "#/components/schemas/ChatCompletionTool"
parallel_tool_calls:
$ref: "#/components/schemas/ParallelToolCalls"
functions:
deprecated: true
description: A list of functions the model may generate JSON inputs for.
type: array
minItems: 1
maxItems: 128
items:
$ref: "#/components/schemas/ChatCompletionFunctions"
x-oaiMeta:
name: Training format for chat models using the supervised method
example: >
{
"messages": [
{ "role": "user", "content": "What is the weather in San Francisco?" },
{
"role": "assistant",
"tool_calls": [
{
"id": "call_id",
"type": "function",
"function": {
"name": "get_current_weather",
"arguments": "{\"location\": \"San Francisco, USA\", \"format\": \"celsius\"}"
}
}
]
}
],
"parallel_tool_calls": false,
"tools": [
{
"type": "function",
"function": {
"name": "get_current_weather",
"description": "Get the current weather",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and country, eg. San Francisco, USA"
},
"format": { "type": "string", "enum": ["celsius", "fahrenheit"] }
},
"required": ["location", "format"]
}
}
}
]
}
FineTuneCompletionRequestInput:
type: object
      description: The per-line training example of a fine-tuning input file for
        completions models.
properties:
prompt:
type: string
description: The input prompt for this training example.
completion:
type: string
description: The desired completion for this training example.
x-oaiMeta:
name: Training format for completions models
example: |
{
"prompt": "What is the answer to 2+2",
"completion": "4"
}
FineTuneDPOMethod:
type: object
description: Configuration for the DPO fine-tuning method.
properties:
hyperparameters:
type: object
description: The hyperparameters used for the fine-tuning job.
properties:
beta:
description: >
The beta value for the DPO method. A higher beta value will
increase the weight of the penalty between the policy and
reference model.
oneOf:
- type: string
enum:
- auto
x-stainless-const: true
- type: number
minimum: 0
maximum: 2
exclusiveMinimum: true
batch_size:
description: >
Number of examples in each batch. A larger batch size means that
model parameters are updated less frequently, but with lower
variance.
oneOf:
- type: string
enum:
- auto
x-stainless-const: true
- type: integer
minimum: 1
maximum: 256
learning_rate_multiplier:
description: >
Scaling factor for the learning rate. A smaller learning rate
may be useful to avoid overfitting.
oneOf:
- type: string
enum:
- auto
x-stainless-const: true
- type: number
minimum: 0
exclusiveMinimum: true
n_epochs:
description: >
The number of epochs to train the model for. An epoch refers to
one full cycle through the training dataset.
oneOf:
- type: string
enum:
- auto
x-stainless-const: true
- type: integer
minimum: 1
maximum: 50
FineTuneMethod:
type: object
description: The method used for fine-tuning.
properties:
type:
type: string
          description: The type of method. Either `supervised` or `dpo`.
enum:
- supervised
- dpo
supervised:
$ref: "#/components/schemas/FineTuneSupervisedMethod"
dpo:
$ref: "#/components/schemas/FineTuneDPOMethod"
FineTunePreferenceRequestInput:
type: object
description: The per-line training example of a fine-tuning input file for chat
models using the dpo method.
properties:
input:
type: object
properties:
messages:
type: array
minItems: 1
items:
oneOf:
- $ref: "#/components/schemas/ChatCompletionRequestSystemMessage"
- $ref: "#/components/schemas/ChatCompletionRequestUserMessage"
- $ref: "#/components/schemas/FineTuneChatCompletionRequestAssistantMessage"
- $ref: "#/components/schemas/ChatCompletionRequestToolMessage"
- $ref: "#/components/schemas/ChatCompletionRequestFunctionMessage"
tools:
type: array
description: A list of tools the model may generate JSON inputs for.
items:
$ref: "#/components/schemas/ChatCompletionTool"
parallel_tool_calls:
$ref: "#/components/schemas/ParallelToolCalls"
preferred_completion:
type: array
description: The preferred completion message for the output.
maxItems: 1
items:
oneOf:
- $ref: "#/components/schemas/ChatCompletionRequestAssistantMessage"
non_preferred_completion:
type: array
description: The non-preferred completion message for the output.
maxItems: 1
items:
oneOf:
- $ref: "#/components/schemas/ChatCompletionRequestAssistantMessage"
x-oaiMeta:
name: Training format for chat models using the preference method
example: >
{
"input": {
"messages": [
{ "role": "user", "content": "What is the weather in San Francisco?" }
]
},
"preferred_completion": [
{
"role": "assistant",
"content": "The weather in San Francisco is 70 degrees Fahrenheit."
}
],
"non_preferred_completion": [
{
"role": "assistant",
"content": "The weather in San Francisco is 21 degrees Celsius."
}
]
}
FineTuneSupervisedMethod:
type: object
description: Configuration for the supervised fine-tuning method.
properties:
hyperparameters:
type: object
description: The hyperparameters used for the fine-tuning job.
properties:
batch_size:
description: >
Number of examples in each batch. A larger batch size means that
model parameters are updated less frequently, but with lower
variance.
oneOf:
- type: string
enum:
- auto
x-stainless-const: true
- type: integer
minimum: 1
maximum: 256
learning_rate_multiplier:
description: >
Scaling factor for the learning rate. A smaller learning rate
may be useful to avoid overfitting.
oneOf:
- type: string
enum:
- auto
x-stainless-const: true
- type: number
minimum: 0
exclusiveMinimum: true
n_epochs:
description: >
The number of epochs to train the model for. An epoch refers to
one full cycle through the training dataset.
oneOf:
- type: string
enum:
- auto
x-stainless-const: true
- type: integer
minimum: 1
maximum: 50
FineTuningCheckpointPermission:
type: object
title: FineTuningCheckpointPermission
description: >
The `checkpoint.permission` object represents a permission for a
fine-tuned model checkpoint.
properties:
id:
type: string
description: The permission identifier, which can be referenced in the API
endpoints.
created_at:
type: integer
description: The Unix timestamp (in seconds) for when the permission was created.
project_id:
type: string
description: The project identifier that the permission is for.
object:
type: string
description: The object type, which is always "checkpoint.permission".
enum:
- checkpoint.permission
x-stainless-const: true
required:
- created_at
- id
- object
- project_id
x-oaiMeta:
name: The fine-tuned model checkpoint permission object
example: |
{
"object": "checkpoint.permission",
"id": "cp_zc4Q7MP6XxulcVzj4MZdwsAB",
"created_at": 1712211699,
"project_id": "proj_abGMw1llN8IrBb6SvvY5A1iH"
}
FineTuningIntegration:
type: object
title: Fine-Tuning Job Integration
required:
- type
- wandb
properties:
type:
type: string
description: The type of the integration being enabled for the fine-tuning job
enum:
- wandb
x-stainless-const: true
wandb:
type: object
description: >
The settings for your integration with Weights and Biases. This
payload specifies the project that
metrics will be sent to. Optionally, you can set an explicit display
name for your run, add tags
to your run, and set a default entity (team, username, etc) to be
associated with your run.
required:
- project
properties:
project:
description: |
The name of the project that the new run will be created under.
type: string
example: my-wandb-project
name:
description: >
A display name to set for the run. If not set, we will use the
Job ID as the name.
nullable: true
type: string
entity:
description: >
The entity to use for the run. This allows you to set the team
or username of the WandB user that you would
like associated with the run. If not set, the default entity for
the registered WandB API key is used.
nullable: true
type: string
tags:
description: >
A list of tags to be attached to the newly created run. These
tags are passed through directly to WandB. Some
default tags are generated by OpenAI: "openai/finetune",
"openai/{base-model}", "openai/{ftjob-abcdef}".
type: array
items:
type: string
example: custom-tag
FineTuningJob:
type: object
title: FineTuningJob
description: >
The `fine_tuning.job` object represents a fine-tuning job that has been
created through the API.
properties:
id:
type: string
description: The object identifier, which can be referenced in the API endpoints.
created_at:
type: integer
description: The Unix timestamp (in seconds) for when the fine-tuning job was
created.
error:
type: object
nullable: true
description: For fine-tuning jobs that have `failed`, this will contain more
information on the cause of the failure.
properties:
code:
type: string
description: A machine-readable error code.
message:
type: string
description: A human-readable error message.
param:
type: string
description: The parameter that was invalid, usually `training_file` or
`validation_file`. This field will be null if the failure was
not parameter-specific.
nullable: true
required:
- code
- message
- param
fine_tuned_model:
type: string
nullable: true
description: The name of the fine-tuned model that is being created. The value
will be null if the fine-tuning job is still running.
finished_at:
type: integer
nullable: true
description: The Unix timestamp (in seconds) for when the fine-tuning job was
finished. The value will be null if the fine-tuning job is still
running.
hyperparameters:
type: object
description: The hyperparameters used for the fine-tuning job. This value will
only be returned when running `supervised` jobs.
properties:
batch_size:
description: >
Number of examples in each batch. A larger batch size means that
model parameters
are updated less frequently, but with lower variance.
oneOf:
- type: string
enum:
- auto
x-stainless-const: true
- type: integer
minimum: 1
maximum: 256
learning_rate_multiplier:
description: >
Scaling factor for the learning rate. A smaller learning rate
may be useful to avoid
overfitting.
oneOf:
- type: string
enum:
- auto
x-stainless-const: true
- type: number
minimum: 0
exclusiveMinimum: true
n_epochs:
description: >
The number of epochs to train the model for. An epoch refers to
one full cycle
through the training dataset.
oneOf:
- type: string
enum:
- auto
x-stainless-const: true
- type: integer
minimum: 1
maximum: 50
model:
type: string
description: The base model that is being fine-tuned.
object:
type: string
description: The object type, which is always "fine_tuning.job".
enum:
- fine_tuning.job
x-stainless-const: true
organization_id:
type: string
description: The organization that owns the fine-tuning job.
result_files:
type: array
description: The compiled results file ID(s) for the fine-tuning job. You can
retrieve the results with the [Files
API](/docs/api-reference/files/retrieve-contents).
items:
type: string
example: file-abc123
status:
type: string
description: The current status of the fine-tuning job, which can be either
`validating_files`, `queued`, `running`, `succeeded`, `failed`, or
`cancelled`.
enum:
- validating_files
- queued
- running
- succeeded
- failed
- cancelled
trained_tokens:
type: integer
nullable: true
description: The total number of billable tokens processed by this fine-tuning
job. The value will be null if the fine-tuning job is still running.
training_file:
type: string
description: The file ID used for training. You can retrieve the training data
with the [Files API](/docs/api-reference/files/retrieve-contents).
validation_file:
type: string
nullable: true
description: The file ID used for validation. You can retrieve the validation
results with the [Files
API](/docs/api-reference/files/retrieve-contents).
integrations:
type: array
nullable: true
description: A list of integrations to enable for this fine-tuning job.
maxItems: 5
items:
oneOf:
- $ref: "#/components/schemas/FineTuningIntegration"
seed:
type: integer
description: The seed used for the fine-tuning job.
estimated_finish:
type: integer
nullable: true
description: The Unix timestamp (in seconds) for when the fine-tuning job is
estimated to finish. The value will be null if the fine-tuning job
is not running.
method:
$ref: "#/components/schemas/FineTuneMethod"
metadata:
$ref: "#/components/schemas/Metadata"
required:
- created_at
- error
- finished_at
- fine_tuned_model
- hyperparameters
- id
- model
- object
- organization_id
- result_files
- status
- trained_tokens
- training_file
- validation_file
- seed
x-oaiMeta:
name: The fine-tuning job object
example: |
{
"object": "fine_tuning.job",
"id": "ftjob-abc123",
"model": "davinci-002",
"created_at": 1692661014,
"finished_at": 1692661190,
"fine_tuned_model": "ft:davinci-002:my-org:custom_suffix:7q8mpxmy",
"organization_id": "org-123",
"result_files": [
"file-abc123"
],
"status": "succeeded",
"validation_file": null,
"training_file": "file-abc123",
"hyperparameters": {
"n_epochs": 4,
"batch_size": 1,
"learning_rate_multiplier": 1.0
},
"trained_tokens": 5768,
"integrations": [],
"seed": 0,
"estimated_finish": 0,
"method": {
"type": "supervised",
"supervised": {
"hyperparameters": {
"n_epochs": 4,
"batch_size": 1,
"learning_rate_multiplier": 1.0
}
}
},
"metadata": {
"key": "value"
}
}
FineTuningJobCheckpoint:
type: object
title: FineTuningJobCheckpoint
description: >
The `fine_tuning.job.checkpoint` object represents a model checkpoint
for a fine-tuning job that is ready to use.
properties:
id:
type: string
description: The checkpoint identifier, which can be referenced in the API
endpoints.
created_at:
type: integer
description: The Unix timestamp (in seconds) for when the checkpoint was created.
fine_tuned_model_checkpoint:
type: string
description: The name of the fine-tuned checkpoint model that is created.
step_number:
type: integer
description: The step number that the checkpoint was created at.
metrics:
type: object
description: Metrics at the step number during the fine-tuning job.
properties:
step:
type: number
train_loss:
type: number
train_mean_token_accuracy:
type: number
valid_loss:
type: number
valid_mean_token_accuracy:
type: number
full_valid_loss:
type: number
full_valid_mean_token_accuracy:
type: number
fine_tuning_job_id:
type: string
description: The name of the fine-tuning job that this checkpoint was created
from.
object:
type: string
description: The object type, which is always "fine_tuning.job.checkpoint".
enum:
- fine_tuning.job.checkpoint
x-stainless-const: true
required:
- created_at
- fine_tuning_job_id
- fine_tuned_model_checkpoint
- id
- metrics
- object
- step_number
x-oaiMeta:
name: The fine-tuning job checkpoint object
example: >
{
"object": "fine_tuning.job.checkpoint",
"id": "ftckpt_qtZ5Gyk4BLq1SfLFWp3RtO3P",
"created_at": 1712211699,
"fine_tuned_model_checkpoint": "ft:gpt-4o-mini-2024-07-18:my-org:custom_suffix:9ABel2dg:ckpt-step-88",
"fine_tuning_job_id": "ftjob-fpbNQ3H1GrMehXRf8cO97xTN",
"metrics": {
"step": 88,
"train_loss": 0.478,
"train_mean_token_accuracy": 0.924,
"valid_loss": 10.112,
"valid_mean_token_accuracy": 0.145,
"full_valid_loss": 0.567,
"full_valid_mean_token_accuracy": 0.944
},
"step_number": 88
}
FineTuningJobEvent:
type: object
description: Fine-tuning job event object
properties:
object:
type: string
description: The object type, which is always "fine_tuning.job.event".
enum:
- fine_tuning.job.event
x-stainless-const: true
id:
type: string
description: The object identifier.
created_at:
type: integer
description: The Unix timestamp (in seconds) for when the fine-tuning job was
created.
level:
type: string
description: The log level of the event.
enum:
- info
- warn
- error
message:
type: string
description: The message of the event.
type:
type: string
description: The type of event.
enum:
- message
- metrics
data:
type: object
description: The data associated with the event.
required:
- id
- object
- created_at
- level
- message
x-oaiMeta:
name: The fine-tuning job event object
example: |
{
"object": "fine_tuning.job.event",
            "id": "ftevent-abc123",
"created_at": 1677610602,
"level": "info",
"message": "Created fine-tuning job",
"data": {},
"type": "message"
}
FunctionObject:
type: object
properties:
description:
type: string
description: A description of what the function does, used by the model to
choose when and how to call the function.
name:
type: string
description: The name of the function to be called. Must be a-z, A-Z, 0-9, or
contain underscores and dashes, with a maximum length of 64.
parameters:
$ref: "#/components/schemas/FunctionParameters"
strict:
type: boolean
nullable: true
default: false
description: Whether to enable strict schema adherence when generating the
function call. If set to true, the model will follow the exact
schema defined in the `parameters` field. Only a subset of JSON
Schema is supported when `strict` is `true`. Learn more about
Structured Outputs in the [function calling
            guide](/docs/guides/function-calling).
required:
- name
FunctionParameters:
type: object
description: >-
        The parameters the function accepts, described as a JSON Schema object.
        See the [guide](/docs/guides/function-calling) for examples, and the
        [JSON Schema
        reference](https://json-schema.org/understanding-json-schema/) for
        documentation about the format.

        Omitting `parameters` defines a function with an empty parameter list.
additionalProperties: true
FunctionToolCall:
type: object
title: Function tool call
description: >
A tool call to run a function. See the
[function calling guide](/docs/guides/function-calling) for more
information.
properties:
id:
type: string
description: |
The unique ID of the function tool call.
type:
type: string
enum:
- function_call
description: |
The type of the function tool call. Always `function_call`.
x-stainless-const: true
call_id:
type: string
description: |
The unique ID of the function tool call generated by the model.
name:
type: string
description: |
The name of the function to run.
arguments:
type: string
description: |
A JSON string of the arguments to pass to the function.
status:
type: string
description: |
The status of the item. One of `in_progress`, `completed`, or
`incomplete`. Populated when items are returned via API.
enum:
- in_progress
- completed
- incomplete
required:
- type
- call_id
- name
- arguments
FunctionToolCallOutput:
type: object
title: Function tool call output
description: |
The output of a function tool call.
properties:
id:
type: string
description: >
The unique ID of the function tool call output. Populated when this
item
is returned via API.
type:
type: string
enum:
- function_call_output
description: >
The type of the function tool call output. Always
`function_call_output`.
x-stainless-const: true
call_id:
type: string
description: |
The unique ID of the function tool call generated by the model.
output:
type: string
description: |
A JSON string of the output of the function tool call.
status:
type: string
description: |
The status of the item. One of `in_progress`, `completed`, or
`incomplete`. Populated when items are returned via API.
enum:
- in_progress
- completed
- incomplete
required:
- type
- call_id
- output
FunctionToolCallOutputResource:
allOf:
- $ref: "#/components/schemas/FunctionToolCallOutput"
- type: object
properties:
id:
type: string
description: |
The unique ID of the function call tool output.
required:
- id
FunctionToolCallResource:
allOf:
- $ref: "#/components/schemas/FunctionToolCall"
- type: object
properties:
id:
type: string
description: |
The unique ID of the function tool call.
required:
- id
Image:
type: object
description: Represents the content or the URL of an image generated by the
OpenAI API.
properties:
b64_json:
type: string
description: The base64-encoded JSON of the generated image. Default value for
`gpt-image-1`, and only present if `response_format` is set to
`b64_json` for `dall-e-2` and `dall-e-3`.
url:
type: string
description: When using `dall-e-2` or `dall-e-3`, the URL of the generated image
if `response_format` is set to `url` (default value). Unsupported
for `gpt-image-1`.
revised_prompt:
type: string
description: For `dall-e-3` only, the revised prompt that was used to generate
the image.
ImagesResponse:
type: object
title: Image generation response
description: The response from the image generation endpoint.
properties:
created:
type: integer
description: The Unix timestamp (in seconds) of when the image was created.
data:
type: array
description: The list of generated images.
items:
$ref: "#/components/schemas/Image"
usage:
type: object
description: >
For `gpt-image-1` only, the token usage information for the image
generation.
required:
- total_tokens
- input_tokens
- output_tokens
- input_tokens_details
properties:
total_tokens:
type: integer
description: The total number of tokens (images and text) used for the image
generation.
input_tokens:
type: integer
description: The number of tokens (images and text) in the input prompt.
output_tokens:
type: integer
description: The number of image tokens in the output image.
input_tokens_details: