This is a quick copy-paste guide for setting up Open-WebUI so you can run and access a local Large Language Model (LLM) as a web application, similar to OpenAI's ChatGPT.
- Install Ollama using the command below:
curl -fsSL https://ollama.com/install.sh | sh
- Install Open-WebUI via Docker using the command below:
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
If you wish to utilise an Nvidia GPU, use the command below instead:
docker run -d -p 3000:8080 --gpus all --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:cuda
- Download an Ollama LLM model using the command below:
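After starting the container, it can take a short while before the dashboard responds. A minimal sketch of a readiness check, assuming `curl` is installed (`wait_for_url` is a hypothetical helper, not part of Open-WebUI):

```shell
#!/bin/sh
# Poll a URL until it responds, or give up after a number of tries.
wait_for_url() {
  url="$1"
  tries="${2:-30}"   # default: 30 attempts, one second apart
  i=0
  while [ "$i" -lt "$tries" ]; do
    if curl -fsS -o /dev/null "$url"; then
      echo "up: $url"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "timed out waiting for $url" >&2
  return 1
}

# Usage after `docker run`:
# wait_for_url http://localhost:3000
```

If the check times out, `docker logs open-webui` is the usual next step to see why the container has not come up.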
ollama pull <model_name>
# Eg. ollama pull qwen3:8b
Warning
Note that if you use any model with the cloud tag, it will not be fully local, as it will be using Ollama's cloud models.
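To confirm a pull succeeded, you can check the output of `ollama list`. A small sketch, using illustrative sample output rather than a real run (`has_model` is a hypothetical helper):

```shell
#!/bin/sh
# Check whether a model name appears in `ollama list` output on stdin.
has_model() {
  grep -q "^$1"
}

# Illustrative sample of `ollama list` output (ID/size are made up):
sample_output='NAME        ID            SIZE      MODIFIED
qwen3:8b    abcd1234      5.2 GB    2 minutes ago'

echo "$sample_output" | has_model "qwen3:8b" && echo "model present"

# Real usage:
# ollama list | has_model "qwen3:8b"
```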
- Access Open-WebUI via its dashboard:
Dashboard Link: http://localhost:3000
Note
You can also access it from another device on the network via http://<SERVER_IP_ADDR>:3000.
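A quick way to print the URL to share with other devices is sketched below. It assumes a Linux host where `hostname -I` is available, and that the first reported address is the one peers can reach:

```shell
#!/bin/sh
# Print the dashboard URL for devices on the same network.
# Falls back to localhost if no address can be determined.
ip_addr="$(hostname -I 2>/dev/null | awk '{print $1}')"
echo "Open-WebUI dashboard: http://${ip_addr:-localhost}:3000"
```

If the URL is unreachable from other devices, check that port 3000 is allowed through the host's firewall.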
- Ollama Installation - https://ollama.com/
- Open-WebUI Installation - https://github.com/open-webui/open-webui