If you've been trying to get OpenAI's Codex CLI working with Azure OpenAI Service, you're not alone in facing configuration headaches. Azure's ongoing platform transition, combined with inconsistent documentation and mismatched API versions, can make this setup feel like navigating a maze blindfolded.
After countless hours of debugging 404 errors, stream failures, and authentication issues, here's a configuration that actually works with Azure AI Foundry and GPT-4o.
The Codex CLI documentation provides a basic Azure configuration example, but the reality is more complex:
- Azure AI Services and Azure AI Foundry have different API endpoints and versions
- Deployment names vs. model names cause confusion
- Stream handling differs between Azure and OpenAI
- API versions in different Azure portals don't always match
Here's the configuration that works with Azure AI Foundry and GPT-4o using Codex CLI version 0.7.0 (this goes in the CLI's config file, typically ~/.codex/config.toml):
```toml
model_provider = "azure"
provider = "azure"
model = "<deployment name>"  # Your deployment name, NOT the AI model name

[model_providers.azure]
name = "Azure OpenAI"
base_url = "https://<resource>.openai.azure.us/openai/deployments/<deployment name>"
env_key = "AZURE_OPENAI_API_KEY"
query_params = { api-version = "2025-01-01-preview" }
```
The `model` field should be your deployment name (e.g., "my-gpt4o-deployment"), not the underlying model name (e.g., "gpt-4o"). This is a common source of confusion.
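If you're not sure what your deployment is actually called, the Azure CLI can list the names for you. A minimal sketch, assuming the Azure CLI is installed and logged in, and that `<resource>` and `<resource-group>` are your actual resource and resource group names:

```bash
# List the deployment names on your Azure OpenAI resource;
# whatever appears here is what goes in the `model` field.
az cognitiveservices account deployment list \
  --name <resource> \
  --resource-group <resource-group> \
  --query "[].name" -o tsv
```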
Contrary to what the standard documentation suggests, include the full deployment path in your `base_url`:

```
https://<resource>.openai.azure.us/openai/deployments/<deployment name>
```
The API version should come from Azure AI Foundry, not Azure AI Services. In AI Foundry, look for the endpoint URL format:

```
https://<resource>.openai.azure.us/openai/deployments/<deployment>/chat/completions?api-version=2025-01-01-preview
```

Use that exact API version in your configuration.
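You can sanity-check the endpoint and API version before involving Codex at all. Here's a sketch using curl against that same URL (fill in your own resource and deployment; the `api-key` header is how Azure OpenAI authenticates key-based requests):

```bash
# A 200 with a completion means the URL and API version are correct;
# a 404 here means the problem is the endpoint, not Codex.
curl -sS "https://<resource>.openai.azure.us/openai/deployments/<deployment>/chat/completions?api-version=2025-01-01-preview" \
  -H "Content-Type: application/json" \
  -H "api-key: $AZURE_OPENAI_API_KEY" \
  -d '{"messages": [{"role": "user", "content": "ping"}]}'
```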
The `wire_api = "responses"` parameter that works with other models doesn't seem to work reliably with GPT-4o deployments. Leave it out.
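If you copied a config for another model that sets it, delete or comment out the line:

```toml
[model_providers.azure]
# wire_api = "responses"  # works with other models, unreliable with GPT-4o; leave it out
```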
Don't forget to set your API key:

```bash
export AZURE_OPENAI_API_KEY="your-api-key-here"
```
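To keep the key available in new shell sessions, you can append the export to your shell profile (bash shown here; zsh users would use ~/.zshrc instead):

```bash
# Persist the key across sessions
echo 'export AZURE_OPENAI_API_KEY="your-api-key-here"' >> ~/.bashrc
source ~/.bashrc
```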
Once configured, test with a simple command:

```bash
codex "explain what this directory contains"
```
Still getting 404s?
- Double-check your deployment name in Azure AI Foundry
- Verify your resource name in the URL
- Ensure the API version matches what's shown in AI Foundry
Stream errors?
- Remove any `wire_api` configuration
- Try the latest API version from AI Foundry
Authentication failures?
- Verify your API key has the correct permissions (the key-retrieval sketch after this list can help)
- Check that you're using the right environment variable name
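If you suspect the key itself is stale, the Azure CLI can re-fetch the current keys; again a sketch, assuming `<resource>` and `<resource-group>` are your own names:

```bash
# Print key1 and key2 for the resource; compare against AZURE_OPENAI_API_KEY
az cognitiveservices account keys list \
  --name <resource> \
  --resource-group <resource-group>
```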
This setup works because:
- Correct URL structure: Includes the full deployment path that Azure expects
- Proper API version: Uses the current version from AI Foundry (not the outdated one from AI Services)
- Standard streaming: Avoids response API complications with GPT-4o
- Deployment-focused: Uses deployment names as Azure requires
The Azure OpenAI + Codex CLI integration isn't as straightforward as it could be, largely due to the ongoing Azure platform transitions. The key is using Azure AI Foundry (not Azure AI Services) as your source of truth for endpoints and API versions.
With this configuration, you can finally use Codex CLI's powerful features—interactive coding, file manipulation, and automated development workflows—backed by your Azure OpenAI deployments.
Have you found other configuration quirks or solutions? The Codex CLI is still in active development, and community contributions help everyone navigate these integration challenges more smoothly.