This guide shows how to connect a local model served with MLX to OpenCode for local coding.
1. Install OpenCode
curl -fsSL https://opencode.ai/install | bash
2. Install mlx-lm
pip install mlx-lm
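A quick way to confirm the install landed in the interpreter you expect (a sketch; `mlx_lm` is the import name the `mlx-lm` pip package provides):

```python
import importlib.util

def is_installed(module_name):
    """Return True if the module can be imported in this interpreter."""
    return importlib.util.find_spec(module_name) is not None

# After `pip install mlx-lm`, this should print True:
print(is_installed("mlx_lm"))
```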
3. Configure OpenCode
Open ~/.config/opencode/opencode.json and paste the following (if you already have a config, just add the MLX provider):
{
"$schema": "https://opencode.ai/config.json",
"provider": {
"mlx": {
"npm": "@ai-sdk/openai-compatible",
"name": "MLX (local)",
"options": {
"baseURL": "http://127.0.0.1:8080/v1"
},
"models": {
"mlx-community/NVIDIA-Nemotron-3-Nano-30B-A3B-4bit": {
"name": "Nemotron 3 Nano"
}
}
}
}
}
4. Start the MLX server
mlx_lm.server --model mlx-community/NVIDIA-Nemotron-3-Nano-30B-A3B-4bit
5. Launch OpenCode
In the repo you plan to work in, type opencode.
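If OpenCode silently ignores the provider, the usual culprit is malformed JSON in the config file. A minimal sketch to sanity-check it (the path is the default location used above; the helper name `check_config` is mine):

```python
import json
from pathlib import Path

def check_config(path):
    """Parse opencode.json and return the MLX baseURL and model ids."""
    cfg = json.loads(Path(path).read_text())  # raises on a stray comma or brace
    mlx = cfg["provider"]["mlx"]              # KeyError if the block is missing
    return mlx["options"]["baseURL"], sorted(mlx["models"])

# print(check_config(Path.home() / ".config" / "opencode" / "opencode.json"))
```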
Once inside the OpenCode TUI:
- Enter /connect
- Type MLX and select it
- For the API key, enter none
- Select the model
- Start planning and building
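If the model never responds, first confirm the server is reachable at the baseURL from the config. A sketch, assuming the server exposes an OpenAI-style /v1/models listing (recent mlx-lm versions do; older ones may not):

```python
import json
import urllib.error
import urllib.request

def list_server_models(base_url="http://127.0.0.1:8080/v1"):
    """Return the model ids the server reports, or None if it is unreachable."""
    try:
        with urllib.request.urlopen(base_url + "/models", timeout=5) as resp:
            data = json.load(resp)
        return [m["id"] for m in data.get("data", [])]
    except (urllib.error.URLError, OSError, ValueError):
        return None

print(list_server_models())
```

If this prints None, the problem is the server (wrong port, not started), not the OpenCode config.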
What if the model is already downloaded? How do I serve it from a local folder, and with what name?
mlx_lm.server --model /Volumes//huggingface/hub/mlx-community/Qwen3.6-35B-A3B-nvfp4/ --host 0.0.0.0 --
It still tries to redownload from Hugging Face instead of using the local folder, or it searches in another location:
/users/.cache/huggingface/hub/models--mlx-community--Qwen3.6-35B-A3B-nvfp4/snapshots/9c1a3a223ddd8a3425212cc421056614f149cf0f
It fails with the error Not Found: No safetensors found, even though the safetensors files are in the folder passed via --model.
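A likely cause: the path given to --model is not the directory that actually holds config.json and the *.safetensors shards. In the standard Hugging Face cache those live under hub/models--&lt;org&gt;--&lt;name&gt;/snapshots/&lt;revision&gt;/, not under a repo-style path. A sketch for locating that directory (the helper name `find_snapshot` is hypothetical, and it assumes the standard cache layout):

```python
from pathlib import Path

def find_snapshot(repo_id, cache_dir="~/.cache/huggingface"):
    """Return the newest local snapshot directory for repo_id, or None.

    Assumes the standard HF cache layout:
    <cache>/hub/models--<org>--<name>/snapshots/<revision>/
    """
    repo_dir = (Path(cache_dir).expanduser() / "hub"
                / ("models--" + repo_id.replace("/", "--")))
    snapshots = repo_dir / "snapshots"
    if not snapshots.is_dir():
        return None
    revisions = sorted(snapshots.iterdir(), key=lambda p: p.stat().st_mtime)
    return revisions[-1] if revisions else None

# print(find_snapshot("mlx-community/Qwen3.6-35B-A3B-nvfp4"))
```

If --model is pointed at such a snapshot directory (one containing config.json and the shards), mlx-lm should load it directly without re-downloading.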