This project provides a local LLM setup using Docker Compose with Ollama and Open WebUI.
The docker-compose.yaml file defines a complete LLM and chat interface stack with two main services:
- **Ollama** - Local AI model server
  - Runs the `ollama/ollama:latest` image
  - Automatically downloads and serves the `phi4-mini` model (1.7GB)
  - Exposes port 11434 for API access (see the example after this list)
  - Stores models and data in persistent volumes
- **Open WebUI** - Web-based chat interface
  - Runs the `ghcr.io/open-webui/open-webui:main` image
  - Provides a user-friendly web interface for chatting with the AI
  - Connects to the Ollama service for AI model access
  - Exposes port 8080 for web access
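Once the stack is up, the Ollama API on port 11434 can be exercised directly from the host as a quick sanity check. The endpoints below are part of Ollama's standard REST API; the prompt text is only an example.

```sh
# List the models Ollama currently serves (phi4-mini should appear once the pull finishes)
curl http://localhost:11434/api/tags

# Ask phi4-mini for a single, non-streaming completion
curl http://localhost:11434/api/generate \
  -d '{"model": "phi4-mini", "prompt": "Say hello in one sentence.", "stream": false}'
```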
- Lightweight: Uses the small phi4-mini model (only 1.7GB)
- Persistent Storage: Data and models are stored in Docker volumes
- Easy Access: Web interface available at http://localhost:8080
- Customizable: Easy to change models or ports by modifying the configuration
- Start the services: `docker-compose up -d`
- Open your browser and go to http://localhost:8080
- Start chatting with your AI assistant!
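If you want to watch the initial model download finish before opening the UI, one way is shown below; `ollama` is the service/container name from the compose file, and on newer Docker installations `docker compose` (with a space) can replace `docker-compose`.

```sh
# Start both services in the background
docker-compose up -d

# Follow the Ollama container logs until the phi4-mini pull completes
docker-compose logs -f ollama
```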
- To use a different model, modify the `command` in the ollama service
- To change the web UI port, modify the port mapping in the open-webui service
- To prevent logouts after updates, you can set the `WEBUI_SECRET_KEY` environment variable
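For illustration, the excerpt below sketches all three tweaks at once; only the changed keys are shown. `llama3.2` is just an example model tag, `3000` an arbitrary host port, and the secret key value is a placeholder to replace with your own.

```yaml
services:
  ollama:
    # Pull a different model at startup (any tag from the Ollama library)
    command: ["ollama serve & sleep 5 && ollama pull llama3.2 && wait"]

  open-webui:
    ports:
      - "3000:8080"                  # web UI now at http://localhost:3000
    environment:
      - WEBUI_SECRET_KEY=change-me   # keeps sessions valid across updates
```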
The full `docker-compose.yaml`:
```yaml
services:
  ollama:
    container_name: ollama
    image: ollama/ollama:latest
    environment:
      - LOG_LEVEL=debug
    volumes:
      - ollama:/root/.ollama
      - models:/models
    ports:
      - "11434:11434"
    networks:
      - ollama-net
    restart: unless-stopped
    entrypoint: ["/bin/sh", "-c"]
    command: ["ollama serve & sleep 5 && ollama pull phi4-mini && wait"]

  open-webui:
    container_name: open-webui
    image: ghcr.io/open-webui/open-webui:main
    environment:
      - MODEL_DOWNLOAD_DIR=/models
      - OLLAMA_API_BASE_URL=http://ollama:11434
      - OLLAMA_API_URL=http://ollama:11434
      - LOG_LEVEL=debug
      - TMPDIR=/tmp/open-webui-tmp
    volumes:
      - ./webui-tmp:/tmp/open-webui-tmp
      - data:/data
      - models:/models
      - open-webui:/app/backend/data
    ports:
      - "8080:8080"
    depends_on:
      - ollama
    extra_hosts:
      - "host.docker.internal:host-gateway"
    networks:
      - ollama-net
    restart: unless-stopped

volumes:
  models:
  ollama:
  data:
  open-webui:

networks:
  ollama-net:
    driver: bridge
```
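The `entrypoint`/`command` pair above starts the Ollama server in the background, waits a few seconds, pulls `phi4-mini`, and then `wait`s on the server process so the container stays in the foreground. Additional models can also be pulled into the running container at any time without editing the compose file; `mistral` below is just an example tag.

```sh
# Pull another model into the existing ollama volume
docker exec ollama ollama pull mistral

# Confirm which models are now available
docker exec ollama ollama list
```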
Alternatively, on a fresh Linux machine without Docker, the same stack can be set up natively with the following steps:
```sh
# mkdir -p /app
# cd /app
```
Installing ollama (more info: https://www.server-world.info/en/note?os=Debian_12&p=ollama&f=1)
```sh
# wget https://ollama.ai/install.sh
# chmod +x install.sh
# ./install.sh
```
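After the script finishes, check that the `ollama` CLI is on the PATH and that its service is running; on systemd-based distributions the installer usually registers an `ollama` unit, but verify on your system.

```sh
# ollama --version
# systemctl status ollama
```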
Installing open-webui (more info: https://pahautelman.github.io/pahautelman-blog/tutorials/build-your-local-ai/build-your-local-ai/)
```sh
# apt install python3.11 python3-pip python3.11-venv
# python3.11 -m venv /app/env
# /app/env/bin/pip install open-webui
```
open-webui is installed into a virtual environment at `/app/env`, which is the path the systemd unit below expects.
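At this point the server can be started manually from the virtual environment to confirm it works before wiring it into systemd; by default `open-webui serve` should listen on port 8080. Stop it with Ctrl+C once it answers at http://localhost:8080.

```sh
# /app/env/bin/open-webui serve
```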
Adding a model
```sh
# ollama run phi4-mini
```
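Note that `ollama run phi4-mini` downloads the model and then drops into an interactive chat prompt. On a headless server, `ollama pull` only downloads the model, and `ollama list` confirms what is installed.

```sh
# ollama pull phi4-mini
# ollama list
```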
Starting open-webui as a service
```sh
# touch /etc/systemd/system/open-webui.service
# chmod 664 /etc/systemd/system/open-webui.service
# vim /etc/systemd/system/open-webui.service
```
open-webui.service content
```ini
[Unit]
Description=Open-WebUI
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/app
ExecStart=/app/env/bin/open-webui serve
Restart=always

[Install]
WantedBy=multi-user.target
```
Enabling and starting the service
```sh
# systemctl daemon-reload
# systemctl enable open-webui.service
# systemctl start open-webui.service
# systemctl status open-webui.service
```
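If the service fails to start, or to confirm it is answering, check the unit's logs and the HTTP endpoint; the URL assumes the default port 8080.

```sh
# journalctl -u open-webui.service -f
# curl -I http://localhost:8080
```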