This is a quick copy-paste guide for setting up a fully local coding agent at no extra cost using Ollama and Claude Code.
- Install Ollama using the command below:

```shell
curl -fsSL https://ollama.com/install.sh | sh
```

- Download an Ollama LLM model using the command below:
```shell
ollama pull <model_name>
# e.g. ollama pull qwen3:8b
```

> [!WARNING]
> Models with the `cloud` tag are not fully local, as they run on Ollama's cloud models.
> [!WARNING]
> Only certain LLMs support the tool calling required by Claude Code. The following local models are recommended:
>
> - glm-4.7-flash - 19GB
> - qwen3:8b - 11B
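Before launching the agent, it can help to confirm that the model you pulled actually advertises tool support. The sketch below is an assumption-laden check, not part of the original guide: it relies on `ollama show` printing a Capabilities section that lists `tools` for tool-capable models, and uses `qwen3:8b` only as the example model from above.

```shell
# Sketch: check whether a pulled model advertises tool-calling support
# before pointing Claude Code at it. MODEL is just the example from above.
MODEL="${MODEL:-qwen3:8b}"
if ! command -v ollama >/dev/null 2>&1; then
  echo "ollama is not installed; skipping check"
  exit 0
fi
ollama list   # the pulled model should appear here
# Tool-capable models list "tools" under Capabilities in `ollama show`.
if ollama show "$MODEL" | grep -qw tools; then
  echo "$MODEL advertises tool support"
else
  echo "$MODEL does not advertise tool support; Claude Code may not work with it"
fi
```

If the model does not list `tools`, pick one of the recommended models above instead.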
- Install Claude Code using the command below:
```shell
curl -fsSL https://claude.ai/install.sh | bash
```

> [!NOTE]
> For higher privacy, OpenCode is potentially a better fit, since Claude Code only allows web search for paid Claude accounts. Please refer to this gist on how to set up with OpenCode instead.
- Launch Claude Code through Ollama using the command below:

```shell
ollama launch claude
```

For ease of use, you can launch Claude Code with the model pre-defined to skip the manual selection:

```shell
ollama launch claude --model glm-4.7-flash:latest
```
> [!WARNING]
> To allow the coding agent full autonomy and speed up development, use the following command at your own risk:

```shell
ollama launch claude --model glm-4.7-flash:latest -- --dangerously-skip-permissions
```

If you are running low on storage space, you can keep models on an external SSD or hard drive after setting a new path for `OLLAMA_MODELS` in the Ollama `.service` file.

References

- Claude Code Installation - https://code.claude.com/docs/en/quickstart
- Ollama Installation - https://ollama.com/
- Open-WebUI Installation - https://github.com/open-webui/open-webui
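The `OLLAMA_MODELS` relocation mentioned above can be sketched as a systemd override. This assumes the service is named `ollama` (as created by the official installer); `/mnt/external/ollama-models` is a hypothetical mount point for your external drive, so substitute your own path:

```ini
# Open an override file with: sudo systemctl edit ollama
# then add these lines, save, and run:
#   sudo systemctl daemon-reload && sudo systemctl restart ollama
[Service]
Environment="OLLAMA_MODELS=/mnt/external/ollama-models"
```

Make sure the `ollama` user can read and write the new directory before restarting the service.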