Use Open WebUI with Docker Model Runner and Compose

How to use this compose file to run Open WebUI on a local LLM running with Docker Model Runner

  1. Enable Docker Model Runner (requires Docker Desktop v4.40 or newer) in Settings, or run the command:
    • docker desktop enable model-runner --no-tcp
  2. Download some models from https://hub.docker.com/u/ai (or let the compose file below pull one for you)
    • docker model pull ai/qwen2.5:0.5B-F16
    • docker model pull ai/smollm2:latest
    • Be sure to only download models that you have the VRAM to run :)
  3. Run the compose.yaml below to start Open WebUI on port 3000
    • you can run my published compose file directly (without saving the YAML locally) with docker compose -f oci://bretfisher/openwebui up
  4. Create an admin user and login at http://localhost:3000

More info:
  • My YouTube Short on getting started: https://youtube.com/shorts/DRbLUL50-wU
  • My YouTube full details video: https://www.youtube.com/watch?v=3p2uWjFyI1U
  • Docker Docs: https://docs.docker.com/compose/how-tos/model-runner/

```yaml
# This is a Docker Compose file for running Open WebUI with a specific model.
# It uses the new provider feature of Compose to specify the model to be downloaded.
# Note that Open WebUI lets you select any downloaded model, but it won't auto-download them,
# so the provider service ensures this one is downloaded first.
# https://docs.docker.com/compose/how-tos/model-runner/
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"
    environment:
      - OPENAI_API_BASE_URL=http://model-runner.docker.internal:80/engines/llama.cpp/v1
      - OPENAI_API_KEY=na
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ai-runner
  ai-runner:
    provider:
      type: model
      options:
        model: ai/gemma3-qat:1B-Q4_K_M # quantized; needs about 1GB of GPU memory
        # model: ai/gemma3:4B-F16 # needs at least 8GB of GPU memory
        # https://hub.docker.com/r/ai/gemma3
volumes:
  open-webui:
```
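Once the stack is up, Open WebUI talks to Model Runner through the OpenAI-compatible endpoint set in OPENAI_API_BASE_URL. As a rough sketch of what one of those chat requests looks like (base URL and model tag are copied from the compose file; note that model-runner.docker.internal only resolves from inside a container on the Compose network, so this is illustrative rather than something to run from the host):

```python
import json

# Values taken from the compose file above.
BASE_URL = "http://model-runner.docker.internal:80/engines/llama.cpp/v1"
MODEL = "ai/gemma3-qat:1B-Q4_K_M"

def chat_request(prompt: str) -> tuple[str, bytes]:
    """Build the URL and JSON body for an OpenAI-style chat completion call,
    the same shape Open WebUI sends to the llama.cpp engine."""
    url = f"{BASE_URL}/chat/completions"
    body = json.dumps({
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, body

url, body = chat_request("Say hello in one word.")
print(url)
```

Any OpenAI-compatible client (the official SDKs included) can point at the same base URL with a throwaway API key, which is why the compose file sets OPENAI_API_KEY=na.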