It took me a while to get this working: I needed to pass `"api-version": "2025-01-01-preview"`, and then I kept hitting max-token-length errors when using a smaller model.
- Configure the provider in your `opencode.json` file (I did this in my WSL instance):

  ```shell
  mkdir -p ~/.config/opencode/
  touch ~/.config/opencode/opencode.json
  code ~/.config/opencode/opencode.json
  ```

  ```json
  {
    "$schema": "https://opencode.ai/config.json",
    "provider": {
      "azure-foundry": {
        "npm": "@ai-sdk/openai-compatible",
        "name": "Azure Foundry",
        "options": {
          "baseURL": "https://<MY_AI_FOUNDRY_INSTANCE>.cognitiveservices.azure.com/openai/deployments/<MY_DEPLOYMENT_NAME>/",
          "queryParams": {
            "api-version": "2025-01-01-preview"
          }
        },
        "models": {
          "MY_MODEL": {
            "name": "<CUSTOM_MODEL_DISPLAY_NAME_IN_OPENCODE>"
          }
        }
      }
    }
  }
  ```
- To use this configuration in my `devcontainer.json`, I add a mount to map this file:

  ```json
  {
    ...,
    "mounts": [
      "source=${localEnv:HOME}/.config/opencode,target=/home/node/.config/opencode,type=bind",
      ...
    ],
    ...
  }
  ```
- Assuming you have opencode installed (`npm install -g opencode-ai`), run `opencode auth login` in your devcontainer
- Arrow-key up to `other`
- Enter `azure-foundry`
- Enter `YOUR_SECURE_API_TOKEN`
- Run `/models` in opencode
- Arrow-key up to select your Azure AI Foundry model
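If the provider errors out, it can help to sanity-check the endpoint outside opencode first. This sketch builds the URL that the `baseURL` + `queryParams` combination above should produce (assuming the standard Azure OpenAI chat-completions path); the instance and deployment names are placeholders, so substitute your own:

```shell
# Build the chat-completions URL from the same pieces used in opencode.json.
# INSTANCE and DEPLOYMENT are placeholders - use your real values.
INSTANCE="MY_AI_FOUNDRY_INSTANCE"
DEPLOYMENT="MY_DEPLOYMENT_NAME"
API_VERSION="2025-01-01-preview"

URL="https://${INSTANCE}.cognitiveservices.azure.com/openai/deployments/${DEPLOYMENT}/chat/completions?api-version=${API_VERSION}"
echo "$URL"

# Uncomment to hit the deployment directly (needs a valid key in YOUR_SECURE_API_TOKEN):
# curl -s "$URL" \
#   -H "Content-Type: application/json" \
#   -H "api-key: $YOUR_SECURE_API_TOKEN" \
#   -d '{"messages":[{"role":"user","content":"ping"}],"max_tokens":16}'
```

If the curl call returns a model response rather than a 404 or an api-version error, the same values should work in opencode.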
🎉 Happy coding!

If anyone can figure out how to limit the max tokens sent to a model in opencode, please let me know.
I want to play with some of the smaller models locally.
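One thing still on my list to try (unverified — I haven't confirmed this against the config schema, so treat the `limit` block below as a guess) is setting per-model token limits directly in `opencode.json`:

```json
{
  "provider": {
    "azure-foundry": {
      "models": {
        "MY_MODEL": {
          "name": "<CUSTOM_MODEL_DISPLAY_NAME_IN_OPENCODE>",
          "limit": {
            "context": 8192,
            "output": 1024
          }
        }
      }
    }
  }
}
```

If that works, it would be an easy way to keep requests small enough for the smaller models.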