@cardboardcode
Last active March 17, 2026
For People In A Hurry: How to Set Up Open-WebUI with Ollama as a Free, Local, and Privacy-Focused Alternative to ChatGPT

What Is This?

This is a quick copy-paste-observe guide for people in a hurry to set up Open-WebUI so that you can run and access a local Large Language Model (LLM) through a web application similar to OpenAI's ChatGPT.

Build

  1. Install Ollama using the command below:

```bash
curl -fsSL https://ollama.com/install.sh | sh
```

  2. Install Open-WebUI via Docker using the command below:

```bash
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```

If you wish to use an NVIDIA GPU, use the command below instead:

```bash
docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
```
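Either command maps the container's port 8080 to port 3000 on the host, so the dashboard should start responding there once the container finishes booting. A minimal Python sketch to poll until it is ready (the host, port, retry count, and delay are illustrative defaults, not anything Open-WebUI requires):

```python
import time
import urllib.error
import urllib.request


def dashboard_url(host: str = "localhost", port: int = 3000) -> str:
    """Build the Open-WebUI dashboard URL for a given host/port mapping."""
    return f"http://{host}:{port}"


def wait_until_up(url: str, attempts: int = 10, delay: float = 3.0) -> bool:
    """Return True once the dashboard answers any HTTP request."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5):
                return True
        except urllib.error.HTTPError:
            return True  # the server answered, even if with an error status
        except (urllib.error.URLError, OSError):
            time.sleep(delay)  # not up yet; wait and retry
    return False


if __name__ == "__main__":
    ready = wait_until_up(dashboard_url(), attempts=2, delay=1.0)
    print("up" if ready else "not reachable yet")
```

Polling like this is handy in provisioning scripts, since the container can take a little while to initialize on first run.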
  3. Download an Ollama LLM model using the command below:

```bash
ollama pull <model_name>
# Eg. ollama pull qwen3:8b
```

Warning

If you use any model with the cloud tag, it will not be fully local, as requests are served by Ollama's cloud models.
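Once pulled, a model can also be queried directly over Ollama's local REST API (it listens on port 11434 by default), which is what Open-WebUI talks to behind the scenes. A minimal sketch using only the standard library, assuming the `qwen3:8b` model from the example above:

```python
import json
import urllib.request

# Ollama's default non-streaming text-generation endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model: str, prompt: str) -> dict:
    """Assemble a non-streaming /api/generate payload."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """POST the prompt to the local Ollama server and return the response text."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    try:
        print(generate("qwen3:8b", "Say hello in one sentence."))
    except OSError as err:
        print(f"Ollama not reachable: {err}")
```

This is useful for scripting against the same models the dashboard uses, without going through the browser.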

  4. Access Open-WebUI via its dashboard:

Dashboard Link: http://localhost:3000

Note

You can also access the dashboard from another device on the same network via http://<SERVER_IP_ADDR>:3000.
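If another device cannot reach the dashboard, the usual culprits are a firewall rule or a wrong server address. A small Python helper to check whether a given host and port accept TCP connections (the IP address below is a placeholder for your server's LAN address, not a value from this guide):

```python
import socket


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Replace with your server's actual LAN address before running this
    # from the external device.
    print(port_open("192.168.1.10", 3000))
```

Run it from the external device: `True` means the port is reachable and any remaining problem is at the application layer; `False` points at the network or firewall.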

References

  1. Ollama Installation - https://ollama.com/
  2. Open-WebUI Installation - https://github.com/open-webui/open-webui