This document serves as a comprehensive development guide for implementing configurable settings in the opencode-openai-codex-auth plugin. It outlines the configuration settings that will be supported for all GPT-5 family models when accessed through ChatGPT OAuth (subscription-based) authentication.
This plugin provides access to the following models via ChatGPT Plus/Pro subscription:
- `gpt-5-codex` - Optimized for coding tasks with built-in code-aware reasoning
- `gpt-5` - General-purpose reasoning model for complex tasks
The plugin may also handle variant models like gpt-5-nano or gpt-5-mini if they become available, though these are not currently in official Codex CLI presets.
This plugin follows the Codex CLI pattern for configuration settings because:
- We're hitting the same backend API used by ChatGPT (not OpenAI Platform API)
- The Codex CLI has been thoroughly tested against this backend
- opencode explicitly excludes `gpt-5-codex` from many automatic configurations
- Settings must work consistently across both `gpt-5` and `gpt-5-codex` models
Important: While opencode has its own conventions for OpenAI models accessed via API key, we intentionally follow Codex CLI's approach for OAuth-based access. This is documented in code comments and user-facing documentation.
Goal: Replace hard-coded values with a flexible configuration system that respects user preferences while maintaining sensible defaults from Codex CLI.
Controls how much computational effort the model dedicates to reasoning. Applies to all GPT-5 family models (gpt-5, gpt-5-codex, and variants).
Codex CLI Support:
- Available values: `minimal`, `low`, `medium` (default), `high`
- Source: `/codex-rs/protocol/src/config_types.rs:13-19`
- Model presets: `/codex-rs/common/src/model_presets.rs:20-68`
  - `gpt-5`: Supports minimal/low/medium/high
  - `gpt-5-codex`: Supports low/medium/high (no minimal preset)
opencode Behavior:
- Sets `"medium"` for regular `gpt-5` models
- Explicitly excludes `gpt-5-codex` from automatic reasoning effort configuration
- Source: `/packages/opencode/src/provider/transform.ts:95`
Plugin Implementation:
- Current: Hardcoded to `"high"` for non-lightweight models, `"minimal"` for nano/mini variants
- Current location: `lib/request-transformer.mjs:31`
- Proposed: Default to `"medium"` for all models, allow per-model user configuration
- Supported values: `minimal`, `low`, `medium`, `high`
Configuration Examples:
Per-model configuration:

```json
{
  "provider": {
    "openai": {
      "models": {
        "gpt-5-codex": {
          "options": {
            "reasoningEffort": "medium"
          }
        },
        "gpt-5": {
          "options": {
            "reasoningEffort": "high"
          }
        }
      }
    }
  }
}
```

Global configuration (applies to all models):
```json
{
  "provider": {
    "openai": {
      "options": {
        "reasoningEffort": "medium"
      }
    }
  }
}
```

Controls the verbosity of reasoning summaries returned by the model. Applies to all GPT-5 family models.
Codex CLI Support:
- Available values: `auto` (default), `concise`, `detailed`, `none`
- Source: `/codex-rs/protocol/src/config_types.rs:27-34`
- Applies to all GPT-5 models when reasoning is enabled
opencode Behavior:
- Sets `"detailed"` only for the `"opencode"` provider (opencode zen hosted service)
- No automatic configuration for OpenAI API key access
- Source: `/packages/opencode/src/provider/transform.ts:102`
Plugin Implementation:
- Current: Hardcoded to `"detailed"` for all models
- Current location: `lib/request-transformer.mjs:32`
- Proposed: Default to `"auto"`, allow per-model user configuration
- Supported values: `auto`, `concise`, `detailed`, `none`
Configuration Examples:
Per-model configuration:
```json
{
  "provider": {
    "openai": {
      "models": {
        "gpt-5-codex": {
          "options": {
            "reasoningSummary": "auto"
          }
        },
        "gpt-5": {
          "options": {
            "reasoningSummary": "concise"
          }
        }
      }
    }
  }
}
```

Global configuration:
```json
{
  "provider": {
    "openai": {
      "options": {
        "reasoningSummary": "auto"
      }
    }
  }
}
```

Controls the output length and detail level for GPT-5 models via the Responses API. Applies to all GPT-5 family models.
Codex CLI Support:
- Available values: `low`, `medium` (default), `high`
- Source: `/codex-rs/protocol/src/config_types.rs:41-46`
- Applies to all GPT-5 family models
opencode Behavior:
- Sets `"medium"` for `gpt-5-codex`
- Sets `"low"` for regular `gpt-5` (non-codex)
- Skipped for the Azure provider
- Source: `/packages/opencode/src/provider/transform.ts:97`
Plugin Implementation:
- Current: Hardcoded to `"medium"` for all models (matches the Codex CLI default)
- Current location: `lib/request-transformer.mjs:109`
- Proposed: Keep `"medium"` as the default, but support all Codex CLI options with per-model configuration
- Supported values: `low`, `medium`, `high`
Configuration Examples:
Per-model configuration:
```json
{
  "provider": {
    "openai": {
      "models": {
        "gpt-5-codex": {
          "options": {
            "textVerbosity": "medium"
          }
        },
        "gpt-5": {
          "options": {
            "textVerbosity": "low"
          }
        }
      }
    }
  }
}
```

Global configuration:
```json
{
  "provider": {
    "openai": {
      "options": {
        "textVerbosity": "medium"
      }
    }
  }
}
```

Includes encrypted reasoning content in responses to maintain reasoning context across conversation turns. Applies to all GPT-5 family models when reasoning is enabled.
What is Encrypted Reasoning Content?
When using the Responses API with `store: false` (stateless operation), reasoning items cannot be persisted server-side. To maintain reasoning context across multi-turn conversations:
- The API returns an encrypted version of reasoning tokens via `reasoning.encrypted_content`
- These encrypted tokens are passed back in subsequent requests
- They're decrypted in-memory (never written to disk), used for the next response, then discarded
- New reasoning tokens are encrypted and returned, ensuring no state is ever persisted
This is particularly important for:
- Zero Data Retention (ZDR) organizations
- Stateless operation (`store: false`)
- Privacy-conscious deployments
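The round-trip described above can be sketched as follows. This is an illustrative sketch of the stateless flow, not code from the plugin; the item shapes follow the general Responses API pattern and the ciphertext value is a placeholder.

```javascript
// Turn 1 response (store: false) returns a reasoning item that carries
// only the opaque encrypted payload -- no plaintext reasoning persists.
const turn1Output = [
  { type: "reasoning", encrypted_content: "gAAAAB-opaque-ciphertext" },
  { type: "message", role: "assistant", content: "Here is the fix..." },
];

// To keep reasoning context on turn 2, the client echoes the reasoning
// item back verbatim alongside the new user message.
function buildNextTurnInput(previousOutput, userMessage) {
  return [
    ...previousOutput, // includes the encrypted reasoning item
    { type: "message", role: "user", content: userMessage },
  ];
}

const turn2Input = buildNextTurnInput(turn1Output, "Now add tests.");
console.log(turn2Input.length); // 3: reasoning, assistant, user
```

The server decrypts the echoed item in-memory, uses it for the next response, and returns a fresh encrypted item, so no state ever lives outside the request/response cycle.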
Codex CLI Support:
- Automatically sets `["reasoning.encrypted_content"]` when reasoning is enabled
- Applies to both `gpt-5` and `gpt-5-codex` models
- Source: `/codex-rs/core/src/client.rs:189-193`
opencode Behavior:
- Sets `["reasoning.encrypted_content"]` only for the `"opencode"` provider (opencode zen)
- Not set for OpenAI API key access (uses `store: true` by default)
- Reason: opencode zen uses `store: false` (stateless), same as the ChatGPT backend
- Source: `/packages/opencode/src/provider/transform.ts:101`
Plugin Implementation:
- Current: NOT implemented (missing feature)
- Current location: Should be added to the `lib/request-transformer.mjs` transformation
- Proposed: Default to `["reasoning.encrypted_content"]` for all models
- Why we need this: This plugin sets `body.store = false` (line 89 of `lib/request-transformer.mjs`), making it stateless just like opencode zen. Encrypted reasoning content is necessary for maintaining reasoning context across turns.
Configuration Examples:
Per-model configuration (typically not needed - use default):
```json
{
  "provider": {
    "openai": {
      "models": {
        "gpt-5-codex": {
          "options": {
            "include": ["reasoning.encrypted_content"]
          }
        },
        "gpt-5": {
          "options": {
            "include": ["reasoning.encrypted_content"]
          }
        }
      }
    }
  }
}
```

Global configuration:
```json
{
  "provider": {
    "openai": {
      "options": {
        "include": ["reasoning.encrypted_content"]
      }
    }
  }
}
```

Note: In most cases, the default value should be sufficient. This option is provided for advanced use cases where encrypted reasoning might need to be disabled.
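If disabling is ever required, a global override with an empty array should work under the proposed merge semantics (this assumes the transformer sends a present-but-empty `include` as-is rather than falling back to the default; confirm during implementation):

```json
{
  "provider": {
    "openai": {
      "options": {
        "include": []
      }
    }
  }
}
```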
This section provides a comprehensive development plan for implementing configurable settings across all GPT-5 family models.
These defaults apply to all GPT-5 family models unless overridden by user configuration:
```javascript
{
  reasoningEffort: "medium",  // Changed from "high" (Codex CLI default)
  reasoningSummary: "auto",   // Changed from "detailed" (Codex CLI default)
  textVerbosity: "medium",    // Unchanged (already matches Codex CLI default)
  include: ["reasoning.encrypted_content"]  // NEW - required for stateless reasoning
}
```

Model-Specific Considerations:
- `gpt-5`: Supports all reasoning efforts (minimal/low/medium/high)
- `gpt-5-codex`: Supports low/medium/high reasoning efforts (no minimal preset in Codex CLI)
- `gpt-5-nano`, `gpt-5-mini`: If they exist, should default to `reasoningEffort: "minimal"`
- All verbosity and summary settings apply uniformly across models
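Resolving these defaults per model could look like the following sketch. `defaultOptionsFor` is a hypothetical helper; the nano/mini substring check mirrors the plugin's current lightweight-model handling.

```javascript
// Hypothetical resolver for the proposed defaults table above.
function defaultOptionsFor(modelName) {
  // Lightweight variants (nano/mini) default to minimal reasoning effort.
  const isLightweight =
    modelName.includes("nano") || modelName.includes("mini");
  return {
    reasoningEffort: isLightweight ? "minimal" : "medium",
    reasoningSummary: "auto",
    textVerbosity: "medium",
    include: ["reasoning.encrypted_content"],
  };
}

console.log(defaultOptionsFor("gpt-5-codex").reasoningEffort); // "medium"
console.log(defaultOptionsFor("gpt-5-nano").reasoningEffort);  // "minimal"
```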
Current state: The loader receives a `_provider` parameter but ignores it (the underscore prefix marks it as unused).
Required changes:
```javascript
auth: {
  provider: "openai", // Use same provider as API key auth (Anthropic pattern)
  async loader(getAuth, provider) { // Remove underscore - we need this!
    const auth = await getAuth();

    // Only handle OAuth - API key uses default opencode behavior
    if (auth.type !== "oauth") {
      return {};
    }

    // Extract user config from provider structure
    // Supports both global options and per-model options
    const globalOptions = provider?.options || {};

    // Build config object that can be passed to transformer
    const userConfig = {
      global: globalOptions,
      models: provider?.models || {}
    };

    // Pass userConfig through to fetch() wrapper
    // ... rest of OAuth logic
  }
}
```

File: `lib/request-transformer.mjs`
Required changes:
a. Update the `getReasoningConfig()` signature:
```javascript
/**
 * Configure reasoning parameters based on model variant and user config
 * @param {string} originalModel - Original model name before normalization
 * @param {object} userConfig - User configuration object
 * @returns {object} Reasoning configuration
 */
export function getReasoningConfig(originalModel, userConfig = {}) {
  const isLightweight = originalModel?.includes("nano") || originalModel?.includes("mini");

  // Default based on model type
  const defaultEffort = isLightweight ? "minimal" : "medium";

  return {
    effort: userConfig.reasoningEffort || defaultEffort,
    summary: userConfig.reasoningSummary || "auto", // Changed from "detailed"
  };
}
```

b. Add a helper to extract model-specific config:
```javascript
/**
 * Extract configuration for a specific model
 * Merges global options with model-specific options (model-specific takes precedence)
 * @param {string} modelName - Model name (e.g., "gpt-5-codex")
 * @param {object} userConfig - Full user configuration object
 * @returns {object} Merged configuration for this model
 */
export function getModelConfig(modelName, userConfig = {}) {
  const globalOptions = userConfig.global || {};
  const modelOptions = userConfig.models?.[modelName]?.options || {};

  // Model-specific options override global options
  return { ...globalOptions, ...modelOptions };
}
```

c. Update the `transformRequestBody()` signature and implementation:
```javascript
/**
 * Transform request body for Codex API
 * @param {object} body - Original request body
 * @param {string} codexInstructions - Codex system instructions
 * @param {object} userConfig - User configuration from loader
 * @returns {object} Transformed request body
 */
export function transformRequestBody(body, codexInstructions, userConfig = {}) {
  const originalModel = body.model;
  const normalizedModel = normalizeModel(body.model);

  // Get model-specific configuration
  const modelConfig = getModelConfig(normalizedModel, userConfig);

  // Normalize model name
  body.model = normalizedModel;

  // Codex required fields
  body.store = false;
  body.stream = true;
  body.instructions = codexInstructions;

  // Filter and transform input
  if (body.input && Array.isArray(body.input)) {
    body.input = filterInput(body.input);
    body.input = addToolRemapMessage(body.input, !!body.tools);
  }

  // Configure reasoning (use model-specific config)
  const reasoningConfig = getReasoningConfig(originalModel, modelConfig);
  body.reasoning = {
    ...body.reasoning,
    ...reasoningConfig,
  };

  // Configure text verbosity (support user config)
  body.text = {
    ...body.text,
    verbosity: modelConfig.textVerbosity || "medium",
  };

  // Add include for encrypted reasoning content
  body.include = modelConfig.include || ["reasoning.encrypted_content"];

  // Remove unsupported parameters
  body.max_output_tokens = undefined;
  body.max_completion_tokens = undefined;

  return body;
}
```

Required changes: Pass `userConfig` through to `transformRequestBody()`:
```javascript
// Inside custom fetch() implementation
const transformedBody = transformRequestBody(
  parsedBody,
  codexInstructions,
  userConfig // Pass through from loader
);
```

Files to update:
- `README.md`: Add configuration examples for all models
- `CLAUDE.md`: Document the new configuration system architecture
- `SETTINGS.md`: This file serves as the implementation guide
Apply same settings to all GPT-5 models:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["opencode-openai-codex-auth"],
  "model": "openai/gpt-5-codex",
  "provider": {
    "openai": {
      "options": {
        "reasoningEffort": "medium",
        "reasoningSummary": "auto",
        "textVerbosity": "medium",
        "include": ["reasoning.encrypted_content"]
      }
    }
  }
}
```

Different settings for different models:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["opencode-openai-codex-auth"],
  "model": "openai/gpt-5-codex",
  "provider": {
    "openai": {
      "models": {
        "gpt-5-codex": {
          "options": {
            "reasoningEffort": "medium",
            "reasoningSummary": "concise",
            "textVerbosity": "medium"
          }
        },
        "gpt-5": {
          "options": {
            "reasoningEffort": "high",
            "reasoningSummary": "detailed",
            "textVerbosity": "low"
          }
        }
      }
    }
  }
}
```

Global defaults with model-specific overrides:
```json
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["opencode-openai-codex-auth"],
  "model": "openai/gpt-5-codex",
  "provider": {
    "openai": {
      "options": {
        "reasoningEffort": "medium",
        "reasoningSummary": "auto",
        "textVerbosity": "medium"
      },
      "models": {
        "gpt-5-codex": {
          "options": {
            "reasoningSummary": "concise"
          }
        }
      }
    }
  }
}
```

In this example:
- `gpt-5-codex` uses: `reasoningEffort: "medium"`, `reasoningSummary: "concise"` (override), `textVerbosity: "medium"`
- `gpt-5` uses: `reasoningEffort: "medium"`, `reasoningSummary: "auto"`, `textVerbosity: "medium"`
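This precedence can be replayed with a merge helper equivalent to the proposed `getModelConfig()`, reproduced here as a standalone sketch against that example config:

```javascript
// Model-specific options override global options (same merge as the
// getModelConfig() helper proposed in the implementation plan).
function getModelConfig(modelName, userConfig = {}) {
  const globalOptions = userConfig.global || {};
  const modelOptions = userConfig.models?.[modelName]?.options || {};
  return { ...globalOptions, ...modelOptions };
}

// The "global defaults with model-specific overrides" example, as the
// loader would hand it to the transformer.
const userConfig = {
  global: {
    reasoningEffort: "medium",
    reasoningSummary: "auto",
    textVerbosity: "medium",
  },
  models: {
    "gpt-5-codex": { options: { reasoningSummary: "concise" } },
  },
};

console.log(getModelConfig("gpt-5-codex", userConfig).reasoningSummary); // "concise"
console.log(getModelConfig("gpt-5", userConfig).reasoningSummary);       // "auto"
```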
After implementation, verify:
- Global options apply to all models when no model-specific config exists
- Model-specific options override global options
- Default values match Codex CLI defaults when no config provided
- `include: ["reasoning.encrypted_content"]` is sent for all models by default
- Configuration works for both `gpt-5` and `gpt-5-codex` models
- Invalid configuration values are handled gracefully
- Configuration is properly documented in README.md
- CLAUDE.md reflects new architecture
- Configuration types: `/codex-rs/protocol/src/config_types.rs`
- Model presets: `/codex-rs/common/src/model_presets.rs`
- Client implementation: `/codex-rs/core/src/client.rs`
- Verbosity handling: `/codex-rs/core/src/client_common.rs`
- Provider transforms: `/packages/opencode/src/provider/transform.ts`
  - Shows opencode's approach for different providers and models