@cardboardcode
Last active April 10, 2026 06:15
For People In A Hurry: How to Set Up Ollama with Claude Code for a Local, Privacy-Focused Coding Agent

What Is This?

This is a quick copy-paste-observe guide for people in a hurry to set up a local coding agent at no additional cost using Ollama and Claude Code.

Build

  1. Install Ollama using the command below:
curl -fsSL https://ollama.com/install.sh | sh
  2. Download an Ollama LLM model using the command below:
ollama pull <model_name>
# E.g. ollama pull qwen3:8b

Warning

If you use any model with the cloud tag, your setup will not be fully local, since inference runs on Ollama's cloud models.
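To double-check that nothing cloud-tagged slipped in, you can filter the output of ollama list for the cloud suffix. A minimal sketch — the flag_cloud_models helper and the sample listing below are this gist's own illustration, not real ollama list output:

```shell
# flag_cloud_models: reads `ollama list`-style output on stdin and prints any
# model whose name carries a "cloud" tag (hypothetical helper name).
flag_cloud_models() {
  awk 'NR > 1 && $1 ~ /cloud/ {print $1}'
}

# Illustrative sample in the `ollama list` column layout (not real sizes):
printf 'NAME ID SIZE MODIFIED\nqwen3:8b abc 5GB now\nglm-4.7-flash-cloud def 0B now\n' \
  | flag_cloud_models
```

In practice you would pipe the real listing: ollama list | flag_cloud_models.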

Warning

Only certain LLMs support the tool calling that Claude Code requires. The following local models are recommended:

  • glm-4.7-flash - 19GB
  • qwen3:8b - 11B
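Recent Ollama builds print a Capabilities section in ollama show, which you can grep to check tool support before committing to a large download. A hedged sketch — the printf text stands in for real ollama show output, and supports_tools is this gist's own helper name:

```shell
# supports_tools: succeeds if the `ollama show` text on stdin lists the
# "tools" capability (helper name is ours, not Ollama's).
supports_tools() {
  grep -q 'tools'
}

# Sample stand-in for `ollama show qwen3:8b` output:
printf 'Capabilities\n  completion\n  tools\n' | supports_tools \
  && echo "tool calling supported"
```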

  3. Install Claude Code using the command below:
curl -fsSL https://claude.ai/install.sh | bash

Note

For stricter privacy, OpenCode may be a better fit, since Claude Code limits web search to paid Claude accounts. Please refer to this gist on how to set up OpenCode instead.

Run

ollama launch claude

For ease of use, you can launch Claude Code with a model pre-defined to skip the manual selection:

ollama launch claude --model glm-4.7-flash:latest

Warning

To give the coding agent full autonomy and speed up development, use the following bash command at your own risk:

ollama launch claude --model glm-4.7-flash:latest -- --dangerously-skip-permissions
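The two launch variants above differ only in the trailing flag, so a small wrapper can assemble the invocation. Shown here as a dry run that prints the command instead of executing it — build_launch_cmd and its arguments are this sketch's own names, not an Ollama feature:

```shell
# build_launch_cmd MODEL SKIP: prints the `ollama launch claude` invocation;
# pass "yes" as SKIP to append the permissions-skipping flag (at your own risk).
build_launch_cmd() {
  MODEL="$1"
  SKIP="$2"
  CMD="ollama launch claude --model $MODEL"
  if [ "$SKIP" = "yes" ]; then
    CMD="$CMD -- --dangerously-skip-permissions"
  fi
  printf '%s\n' "$CMD"
}

build_launch_cmd glm-4.7-flash:latest yes
```

Once you are happy with the printed line, paste it into your terminal to run it for real.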

References

  1. Claude Code Installation - https://code.claude.com/docs/en/quickstart
  2. Ollama Installation - https://ollama.com/
  3. Open-WebUI Installation - https://github.com/open-webui/open-webui

cardboardcode commented Mar 25, 2026

⚠️ Low on Disk Space?

If you are running low on disk space, you can move model storage to an external SSD or hard drive by setting a new path for OLLAMA_MODELS:

  1. Open the ollama .service file:
sudo nano /etc/systemd/system/ollama.service
  2. Add the following line under the [Service] section:
Environment="OLLAMA_MODELS=/your/desired/path"

# E.g.
# Environment="OLLAMA_MODELS=/mnt/sdb1"
  3. Restart the ollama daemon:
sudo systemctl daemon-reload
sudo systemctl restart ollama
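An alternative that survives package upgrades is a systemd drop-in instead of editing the unit file directly. Running `sudo systemctl edit ollama` opens an override file where the same variable can go; the storage path below is the same example as above:

```ini
# /etc/systemd/system/ollama.service.d/override.conf
# (created automatically by `sudo systemctl edit ollama`)
[Service]
Environment="OLLAMA_MODELS=/mnt/sdb1"
```

The daemon-reload and restart steps above are still required afterwards.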

References

  1. How to set a new directory to store Ollama models - https://www.reddit.com/r/ollama/comments/1c4zg15/does_anyone_know_how_to_change_where_your_models/


⚠️ Claude Code Not Writing/Reading Files?

This usually happens when the model context window is too small, leading to instructions being cut off and leaving the model "blind" to its file-handling capabilities.

To rectify this, increase the context window size by following the steps below:

  1. Create a file named Modelfile with the following contents:
FROM <BASE_MODEL_NAME>
PARAMETER num_ctx 65536

BASE_MODEL_NAME should be one of the models listed when you run ollama list.

# E.g.
FROM gemma4:26b
PARAMETER num_ctx 65536
  2. Create the new model:
ollama create <NEW_MODEL_NAME> -f Modelfile
# E.g.
ollama create gemma4-64k -f Modelfile
  3. Run Claude Code with the new model and its increased context window:
ollama launch claude --model <NEW_MODEL_NAME> -- --dangerously-skip-permissions
# E.g.
ollama launch claude --model gemma4-64k -- --dangerously-skip-permissions
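To confirm the rebuild took, ollama show <NEW_MODEL_NAME> prints the model's parameters, and the num_ctx line can be pulled out with awk. A sketch, with sample text standing in for real ollama show gemma4-64k output:

```shell
# Extract the num_ctx value from `ollama show`-style output on stdin.
# The printf below is an illustrative stand-in for the real command's output.
printf 'Parameters\n  num_ctx 65536\n' | awk '$1 == "num_ctx" {print $2}'
```

Against a live install, replace the printf with the real command: ollama show gemma4-64k.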

✔️ Verify

Verify that Claude Code can now read and write files using the following test prompt:

Write me a simple README.md file that has "Hello World" in it.
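After Claude responds, a grep confirms the file actually landed on disk with the expected content. In the sketch below a sample README stands in for Claude's output, so the check can be seen passing:

```shell
# Stand-in for the README.md that Claude Code should have written:
tmp="$(mktemp)"
printf '# Hello World\n' > "$tmp"

# The actual verification step: grep for the phrase and report success.
grep -q "Hello World" "$tmp" && echo "read/write verified"
```

Against a real run, point grep at the repository's README.md instead of the temp file.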
