@cardboardcode
Last active March 17, 2026 05:34
For People In A Hurry: How to Set Up Ollama with OpenCode for a Local, Privacy-Focused Coding Agent

What Is This?

This is a quick copy-paste-observe guide for people in a hurry who want to set up a local coding agent, at no additional cost, using OpenCode and Ollama.

Build

  1. Install Ollama using the command below:
curl -fsSL https://ollama.com/install.sh | sh
  2. Download an Ollama LLM model using the command below:
ollama pull <model_name>
# E.g. ollama pull qwen3:8b

Warning

If you use any model with the cloud tag, your setup will not be fully local: inference for those models runs on Ollama's cloud.
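To double-check which of your pulled models carry a cloud tag, you can filter the output of ollama list. The sketch below assumes cloud models are identifiable by "cloud" appearing in the model name/tag column; verify against the actual ollama list output on your machine.

```shell
# A minimal sketch: surface models whose name/tag contains "cloud"
# (the exact tag naming is an assumption). NR > 1 skips the header row.
flag_cloud_models() {
  awk 'NR > 1 && $1 ~ /cloud/ { print $1 }'
}

# Usage (requires ollama to be installed):
#   ollama list | flag_cloud_models
```

Any model name it prints would not run fully locally.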

  3. Install OpenCode using the command below:
curl -fsSL https://opencode.ai/install | bash
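Before moving on, it may help to confirm that both install scripts above actually put the tools on your PATH. A small sanity-check sketch:

```shell
# Report whether each tool from the Build steps is now on PATH.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: missing"
  fi
}

check_tool ollama
check_tool opencode
```

If either reports "missing", open a fresh shell (the installers may have updated your profile) or rerun the corresponding install command.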

Run

ollama launch opencode

For ease of use, you can start opencode with the model defined up front:

ollama launch opencode --model glm-4.7-flash:latest
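If you switch models often, a small convenience wrapper can save retyping. This is just a sketch: the function below builds the launch command string, defaulting to the model used in this guide, so you can swap models with a single argument.

```shell
# Build the launch command for a given model (defaults to the guide's model).
build_launch_cmd() {
  model="${1:-glm-4.7-flash:latest}"
  echo "ollama launch opencode --model $model"
}

# To actually start the agent with a different model:
#   eval "$(build_launch_cmd qwen3:8b)"
```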

Warning

To give the coding agent full autonomy and speed up development, use the following bash command at your own risk:

ollama launch opencode --model glm-4.7-flash:latest

References

  1. OpenCode Installation - https://opencode.ai/
  2. Ollama Installation - https://ollama.com/
  3. Open-WebUI Installation - https://github.com/open-webui/open-webui