This is a quick copy-paste-and-observe guide for anyone in a hurry to set up a local coding agent at no extra cost using OpenCode and Ollama.
- Install Ollama using the command below:

```shell
curl -fsSL https://ollama.com/install.sh | sh
```

- Download an Ollama LLM model using the command below:

```shell
ollama pull <model_name>
# e.g. ollama pull qwen3:8b
```
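Once the pull finishes, a quick sanity check can confirm the model is stored locally. This is a sketch assuming the default install; the guard keeps it safe to run even before Ollama is installed:

```shell
# Confirm the pull worked: `ollama list` prints the locally stored models.
if command -v ollama >/dev/null 2>&1; then
  ollama list
else
  echo "ollama is not on PATH yet"
fi
```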
> **Warning**
> If you use any model with the `cloud` tag, the setup is not fully local: requests to those models are served by Ollama's cloud.
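One way to double-check what is actually stored on your machine is Ollama's REST API: the `/api/tags` endpoint on the default port 11434 lists local models (endpoint and port per Ollama's API docs; adjust if you changed `OLLAMA_HOST`):

```shell
# List locally stored models via the Ollama server's REST API.
# Falls back to a message if the server is not running.
curl -fsS http://localhost:11434/api/tags 2>/dev/null \
  || echo "Ollama server is not reachable on localhost:11434"
```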
- Install OpenCode using the command below:

```shell
curl -fsSL https://opencode.ai/install | bash
```

- Launch OpenCode through Ollama:

```shell
ollama launch opencode
```

For ease of use, you can start OpenCode with a model already defined:

```shell
ollama launch opencode --model glm-4.7-flash:latest
```
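If `ollama launch` is not available in your Ollama version, OpenCode can also be pointed at Ollama directly through its config file. The sketch below follows the custom-provider pattern from OpenCode's docs (Ollama exposes an OpenAI-compatible endpoint under `/v1`); the model entry reuses the earlier `qwen3:8b` example:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ollama": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Ollama (local)",
      "options": {
        "baseURL": "http://localhost:11434/v1"
      },
      "models": {
        "qwen3:8b": {
          "name": "Qwen 3 8B (local)"
        }
      }
    }
  }
}
```

Save this as `opencode.json` in your project root, then pick the model from OpenCode's model selector.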
> **Warning**
> To give the coding agent full autonomy and speed up development, run the following command at your own risk:

```shell
ollama launch opencode --model glm-4.7-flash:latest
```

- OpenCode Installation - https://opencode.ai/
- Ollama Installation - https://ollama.com/
- Open-WebUI Installation - https://github.com/open-webui/open-webui