
@msaroufim
Created September 25, 2025 23:31
Local LLM Service Setup with Qwen3, Ollama, Open WebUI, and Tailscale

Local LLM Service Setup

1. Install Ollama

curl -fsSL https://ollama.com/install.sh | sh
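
To verify the install (assuming the standard Linux installer, which registers an ollama systemd service and serves the API on 127.0.0.1:11434):

ollama --version
systemctl status ollama --no-pager   # the Linux installer runs Ollama as a systemd service
curl http://127.0.0.1:11434          # should print "Ollama is running"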

2. Download Model

ollama pull qwen3:32b
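
Before wiring up the UI, it is worth confirming the model loads and responds; the 32B weights are roughly 20 GB, so the first run can take a while. The prompt below is just an example:

ollama list                                         # qwen3:32b should appear in the list
ollama run qwen3:32b "Say hello in one sentence."   # one-off prompt to confirm inference works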

3. Install Open WebUI

docker run -d \
  --network=host \
  -v open-webui:/app/backend/data \
  -e OLLAMA_BASE_URL=http://127.0.0.1:11434 \
  --name open-webui \
  --restart always \
  ghcr.io/open-webui/open-webui:main
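
With --network=host, Open WebUI listens on the host's port 8080 (its default) and reaches Ollama at 127.0.0.1:11434. A quick check that the container came up cleanly:

docker ps --filter name=open-webui                               # container should show as "Up"
docker logs open-webui --tail 20                                 # startup logs; look for the server binding to port 8080
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8080   # expect 200 once the UI is ready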

4. Expose via Tailscale

sudo tailscale serve --bg 8080
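
This proxies HTTPS on the tailnet to localhost:8080 and keeps running in the background. To confirm what is being served and find this machine's tailnet name:

tailscale serve status   # shows the active serve configuration and its URL
tailscale status         # lists this machine and its peers on the tailnet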

Access

  • From any device on your Tailscale network: http://YOUR_MACHINE_NAME:8080, or the HTTPS URL reported by tailscale serve status (quick check below)
  • A ChatGPT-style chat interface in the browser; no API keys or external services required
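
A quick connectivity check from another device on the tailnet (the hostname is a placeholder; substitute your own):

tailscale ping YOUR_MACHINE_NAME                                          # confirms the peer is reachable over Tailscale
curl -s -o /dev/null -w "%{http_code}\n" http://YOUR_MACHINE_NAME:8080    # expect 200 from Open WebUI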