@markuskreitzer
Created July 16, 2025 18:20
How to get OpenAI Codex working with Azure

Configuring OpenAI Codex CLI with Azure OpenAI: A Working Solution

If you've been trying to get OpenAI's Codex CLI working with Azure OpenAI Service, you're not alone in facing configuration headaches. The ongoing transition at Azure, combined with inconsistent documentation and API version differences, can make this setup feel like navigating a maze blindfolded.

After countless hours of debugging 404 errors, stream failures, and authentication issues, here's a configuration that actually works with Azure AI Foundry and GPT-4o.

The Challenge

The Codex CLI documentation provides a basic Azure configuration example, but the reality is more complex:

  • Azure AI Services and Azure AI Foundry have different API endpoints and versions
  • Deployment names vs. model names cause confusion
  • Stream handling differs between Azure and OpenAI
  • API versions in different Azure portals don't always match

The Working Solution

Here's the configuration that works with Azure AI Foundry and GPT-4o using Codex CLI version 0.7.0:

model_provider = "azure"
provider = "azure"
model = "<deployment name>"  # Your deployment name, NOT the AI model name

[model_providers.azure]
name         = "Azure OpenAI"
base_url     = "https://<resource>.openai.azure.us/openai/deployments/<deployment name>"
env_key      = "AZURE_OPENAI_API_KEY"
query_params = { api-version = "2025-01-01-preview" }

Key Configuration Insights

1. Use Your Deployment Name, Not Model Name

The model field should be your deployment name (e.g., "my-gpt4o-deployment"), not the underlying model name (e.g., "gpt-4o"). This is a common source of confusion.
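For example, with a hypothetical GPT-4o deployment named "my-gpt4o-deployment", the line should read:

```toml
model = "my-gpt4o-deployment"   # the deployment name from Azure AI Foundry
# model = "gpt-4o"              # wrong: the underlying model name will 404
```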

2. Include Deployment Name in Base URL

Contrary to what the standard documentation suggests, include the full deployment path in your base_url:

https://<resource>.openai.azure.us/openai/deployments/<deployment name>

3. Get API Version from AI Foundry

The API version should come from Azure AI Foundry, not Azure AI Services. In AI Foundry, look for the endpoint URL format:

https://<resource>.openai.azure.us/openai/deployments/<deployment>/chat/completions?api-version=2025-01-01-preview

Use that exact API version in your configuration.
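As a sketch (with made-up resource and deployment names), the version can be peeled off the Foundry endpoint URL with plain shell parameter expansion:

```shell
# Endpoint string as copied from Azure AI Foundry (names are placeholders)
endpoint="https://myresource.openai.azure.us/openai/deployments/my-gpt4o-deployment/chat/completions?api-version=2025-01-01-preview"

# Strip everything up to and including "api-version=" to isolate the version
api_version="${endpoint##*api-version=}"
echo "$api_version"   # 2025-01-01-preview
```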

4. Skip wire_api for GPT-4o

The wire_api = "responses" parameter that works with other models doesn't seem to work reliably with GPT-4o deployments. Leave it out.

Environment Setup

Don't forget to set your API key:

export AZURE_OPENAI_API_KEY="your-api-key-here"
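A quick guard (sketch, with a placeholder key) confirms the variable is actually visible to the shell that will launch codex:

```shell
# Placeholder value; substitute your real Azure OpenAI key
export AZURE_OPENAI_API_KEY="your-api-key-here"

# ${VAR:?message} aborts with the message if VAR is unset or empty
: "${AZURE_OPENAI_API_KEY:?AZURE_OPENAI_API_KEY is not set}"
echo "AZURE_OPENAI_API_KEY is set"
```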

Testing Your Configuration

Once configured, test with a simple command:

codex "explain what this directory contains"
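If that fails, it helps to write out the exact URL the pieces of the config should combine into and compare it against what AI Foundry shows. A sketch with hypothetical names:

```shell
resource="myresource"              # your Azure resource name (placeholder)
deployment="my-gpt4o-deployment"   # your deployment name (placeholder)
api_version="2025-01-01-preview"   # from AI Foundry

# The chat-completions URL implied by base_url + query_params in the config
url="https://${resource}.openai.azure.us/openai/deployments/${deployment}/chat/completions?api-version=${api_version}"
echo "$url"
```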

Troubleshooting Tips

Still getting 404s?

  • Double-check your deployment name in Azure AI Foundry
  • Verify your resource name in the URL
  • Ensure the API version matches what's shown in AI Foundry

Stream errors?

  • Remove any wire_api configurations
  • Try the latest API version from AI Foundry

Authentication failures?

  • Verify your API key has the correct permissions
  • Check that you're using the right environment variable name
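To separate an auth problem from a Codex problem, hit the deployment directly with curl (placeholders as in the config above; note that Azure OpenAI expects an api-key request header rather than OpenAI's Authorization: Bearer):

```shell
curl -sS "https://<resource>.openai.azure.us/openai/deployments/<deployment>/chat/completions?api-version=2025-01-01-preview" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"messages":[{"role":"user","content":"Say hello"}]}'
```

A 200 with a completion means the key and endpoint are fine and the issue is in the Codex config; a 401 points at the key itself.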

Why This Configuration Works

This setup works because:

  1. Correct URL structure: Includes the full deployment path that Azure expects
  2. Proper API version: Uses the current version from AI Foundry (not the outdated one from AI Services)
  3. Standard streaming: Avoids response API complications with GPT-4o
  4. Deployment-focused: Uses deployment names as Azure requires

The Bottom Line

The Azure OpenAI + Codex CLI integration isn't as straightforward as it could be, largely due to the ongoing Azure platform transitions. The key is using Azure AI Foundry (not Azure AI Services) as your source of truth for endpoints and API versions.

With this configuration, you can finally use Codex CLI's powerful features—interactive coding, file manipulation, and automated development workflows—backed by your Azure OpenAI deployments.


Have you found other configuration quirks or solutions? The Codex CLI is still in active development, and community contributions help everyone navigate these integration challenges more smoothly.

@chenxizhang

Thanks for sharing, but it still doesn't work for me...

@markuskreitzer (Author)

Bummer. It's been a royal pain.

@chenxizhang

I found the problem and submitted a PR here openai/codex#1693 (comment)
