This Gist provides a Docker Compose setup tailored for running the LiteLLM Proxy on a ZimaOS host system. It uses a PostgreSQL database for persistent storage of keys, usage data, and configuration, and is managed via Dockge.
Host System Context:
- Operating System: ZimaOS (an immutable Linux system built with Buildroot), although this setup should also work on any other Linux OS.
- Path Convention: ZimaOS commonly uses `/DATA/AppData/` for persistent application data.
- Orchestration: While this setup is managed using Dockge, the instructions primarily use standard `docker compose` terminal commands for clarity and wider applicability. Dockge users can adapt these steps using the Dockge UI (e.g., editing compose files, setting environment variables, deploying stacks).
This configuration includes:
- LiteLLM Proxy service.
- PostgreSQL 16 database service.
- Volume persistence for database data using ZimaOS paths.
- Configuration via `config.yaml` mounted from the host.
- Environment variable management using a `.env` file (managed by Dockge or manually).
- Example configuration for OpenAI (GPT-4o, GPT-3.5-Turbo) and XAI (Grok-1) models.
- ZimaOS Host: Assumes you are running this on ZimaOS.
- Docker & Docker Compose: These are typically pre-installed and managed on ZimaOS, accessible via the terminal or through tools like Dockge.
- Dockge: (Recommended) The web UI used to manage Docker Compose stacks on ZimaOS.
- API Keys: You will need API keys for the LLM providers you intend to use (e.g., OpenAI, XAI).
- Directory for Configuration: A directory on your ZimaOS host where the `config.yaml` will be stored persistently (e.g., `/DATA/AppData/litellm/`).
Dockge typically manages stack files within its own directory structure, often under `/DATA/Dockge/compose/<stack_name>/`. When editing via Dockge, you'll modify the `compose.yaml` and environment variables within its UI. The `config.yaml` file, however, needs to be placed manually in its persistent location on the host.
- `compose.yaml`: Defines the Docker services, networks, and volumes. Managed within Dockge or placed in your stack directory if running manually.
- `.env` / Environment Variables: Stores secrets like API keys, passwords, and paths. Managed within Dockge's Environment Variables section or placed as a `.env` file in the stack directory if running manually. Do not commit sensitive keys to public repositories.
- `config.yaml`: The LiteLLM configuration file. Place this manually in the location specified by `LITELLM_CONFIG_PATH` (e.g., `/DATA/AppData/litellm/config.yaml`).
Example layout if managed manually:
/home/user/litellm-stack/ # Manual stack directory
├── compose.yaml
└── .env
Example layout if managed by Dockge:
/DATA/AppData/dockge/stacks/litellm-stack/ # Dockge's internal stack directory
├── compose.yaml # will be created by using the Dockge UI
└── .env # will be created by using the Dockge UI
Configure these variables within Dockge's "Environment Variables" section for the stack, or create a `.env` file if managing manually. Replace the placeholder values (`<your_key>`, `<your_password>`, `<your_port>`) with your actual secrets and desired settings.
# LiteLLM-specific variables
PGID=0 # See Note 1 (Permissions)
PUID=0 # See Note 1 (Permissions)
TZ=Europe/Berlin # Set your timezone (e.g., America/New_York)
WEB_PORT=<your_port> # Host port to access LiteLLM UI/API
LITELLM_CONFIG_PATH=/DATA/AppData/litellm # ZimaOS host path to the directory containing config.yaml (See Note 2)
# Postgres-specific variables
POSTGRES_DB=litellm # Database name
POSTGRES_USER=llmproxy # Database username
POSTGRES_PASSWORD=<your_password> # CHANGE THIS to a strong database password
DB_PORT=<your_port> # Host port to expose the database (optional, only if needed externally)
DATABASE_URL=postgresql://llmproxy:<your_password>@db:5432/litellm # Full connection string; uses the 'db' service name and internal port 5432 (update user/password if changed)
# Config related
LITELLM_MASTER_KEY=<your_key> # CHANGE THIS key used to authenticate UI/Admin API requests
LITELLM_SALT_KEY=<your_key> # CHANGE THIS key used for encrypting/decrypting stored API keys
OPENAI_API_KEY=<your_key> # CHANGE THIS to your OpenAI API key
XAI_API_KEY=<your_key> # CHANGE THIS to your XAI API key
STORE_MODEL_IN_DB=True # Set to True to store cost/model info in DB
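
If you need to generate strong values for `LITELLM_MASTER_KEY`, `LITELLM_SALT_KEY`, and `POSTGRES_PASSWORD`, random hex strings work well. A minimal sketch, assuming `openssl` is available on your workstation (it does not need to run on the ZimaOS host itself):

```bash
# Generate three independent random secrets (32 bytes each, hex-encoded)
openssl rand -hex 32   # e.g. for LITELLM_MASTER_KEY (LiteLLM examples often prefix this with "sk-")
openssl rand -hex 32   # e.g. for LITELLM_SALT_KEY
openssl rand -hex 32   # e.g. for POSTGRES_PASSWORD
```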
Notes on Environment Variables:
- Permissions (PUID/PGID): I discovered that the `ghcr.io/berriai/litellm:main-latest` container runs its main process as root (UID `0` / GID `0`) on this setup. Therefore, setting `PUID`/`PGID` to `1000` (or any other non-root user) has no effect on the running container process. I set them to `0` here purely for documentation clarity, acknowledging that the container runs as root. This means the container process has root privileges when interacting with mounted volumes. See the "Improvements - Security" section, and the verification sketch after these notes.
- LITELLM_CONFIG_PATH: This must be the path on the ZimaOS host where your `config.yaml` file resides. Ensure this directory exists before deploying the stack.
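
To confirm which user the LiteLLM container actually runs as (this is how the root finding above was observed), you can inspect the running container; the only assumption here is the container name `litellm` from `compose.yaml`:

```bash
# Prints the UID/GID of the user executing commands inside the running container
sudo docker exec litellm id
# On this setup this reports uid=0(root) gid=0(root), i.e. the process runs as root
```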
Use this content for the `compose.yaml` definition within Dockge, or in your manually managed stack file.
# Dockge typically manages the stack name automatically.
# The 'name:' directive can usually be omitted when using Dockge.
# name: litellm-stack

services:
  litellm:
    # Consider using a specific version tag (e.g., :v1.66.0-stable) for stability
    image: ghcr.io/berriai/litellm:main-latest
    container_name: litellm # Optional: Assigns a fixed name
    ports:
      - ${WEB_PORT}:4000 # Maps host port from env vars to container port 4000
    volumes:
      # Mounts the config file from the ZimaOS host path specified by LITELLM_CONFIG_PATH
      # into the container at /app/config.yaml
      - ${LITELLM_CONFIG_PATH}/config.yaml:/app/config.yaml
    environment:
      # Pass environment variables set in Dockge UI or .env file
      - PGID=${PGID} # Currently ignored by image (runs as root)
      - PUID=${PUID} # Currently ignored by image (runs as root)
      - TZ=${TZ}
      - LITELLM_MASTER_KEY=${LITELLM_MASTER_KEY}
      - LITELLM_SALT_KEY=${LITELLM_SALT_KEY}
      - OPENAI_API_KEY=${OPENAI_API_KEY} # Passed for config.yaml lookup
      - XAI_API_KEY=${XAI_API_KEY} # Passed for config.yaml lookup
      - DATABASE_URL=${DATABASE_URL} # Passed for config.yaml lookup
      - STORE_MODEL_IN_DB=${STORE_MODEL_IN_DB} # Passed for config.yaml lookup
      # LITELLM_CONFIG_PATH inside container (used by --config flag)
      - LITELLM_CONFIG_PATH=/app/config.yaml
    command:
      # Explicitly tell LiteLLM where to find the config file inside the container
      - --config
      - /app/config.yaml
    depends_on:
      db: # Ensures 'db' service is healthy before starting 'litellm'
        condition: service_healthy
    healthcheck: # Checks if the LiteLLM proxy is responsive
      test: ["CMD", "curl", "-f", "http://localhost:4000/health/liveliness"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    restart: unless-stopped # Optional: Container restart policy

  db:
    image: postgres:16 # Use PostgreSQL version 16
    container_name: litellm_db # Optional: Assigns a fixed name
    restart: always # Ensures database restarts automatically
    environment:
      # Database credentials sourced from env vars
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    ports:
      # Optionally map host DB_PORT to container port 5432 for external access
      - ${DB_PORT}:5432
    volumes:
      # Mounts a named volume for persistent PostgreSQL data
      - postgres_data:/var/lib/postgresql/data
    healthcheck: # Checks if the database is ready to accept connections
      test: ["CMD", "pg_isready", "-d", "${POSTGRES_DB}", "-U", "${POSTGRES_USER}"]
      interval: 5s
      timeout: 5s
      retries: 10
      start_period: 5s

volumes:
  postgres_data: # Defines the named volume used by the 'db' service
    name: litellm_postgres_data # Explicitly names the volume on the host

# networks: {} # Default bridge network is usually sufficient
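
If you are not using Dockge, the same stack can be brought up from the terminal with standard Docker Compose commands. A minimal sketch, assuming the manual layout shown earlier (`/home/user/litellm-stack/` containing `compose.yaml` and `.env`):

```bash
cd /home/user/litellm-stack

# Render the final configuration to verify YAML syntax and variable interpolation
sudo docker compose config

# Start the stack in the background; Compose picks up .env from this directory automatically
sudo docker compose up -d

# Check container status and follow the proxy logs
sudo docker compose ps
sudo docker logs -f litellm
```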
Create a file named `config.yaml` in the ZimaOS host directory specified by `LITELLM_CONFIG_PATH` (e.g., `/DATA/AppData/litellm/config.yaml`).
# LiteLLM Configuration File
# Documentation: https://docs.litellm.ai/docs/proxy/configs

# model_list: Defines the LLMs LiteLLM can proxy to.
model_list:
  - model_name: gpt-4o # Alias used in API calls to LiteLLM
    litellm_params: # Backend connection parameters
      model: openai/gpt-4o # LiteLLM identifier for the OpenAI model
      api_key: os.environ/OPENAI_API_KEY # Reads key from OPENAI_API_KEY env variable
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: openai/gpt-3.5-turbo
      api_key: os.environ/OPENAI_API_KEY # Can use the same env var
  - model_name: grok-1 # Alias for XAI Grok model
    litellm_params:
      model: xai/grok-1 # LiteLLM identifier for the XAI model
      api_key: os.environ/XAI_API_KEY # Reads key from XAI_API_KEY env variable

# litellm_settings: Configures the LiteLLM proxy server itself.
litellm_settings:
  # Database connection string (reads value from DATABASE_URL env variable)
  database_url: os.environ/DATABASE_URL
  # Master key for proxy authentication (reads value from LITELLM_MASTER_KEY env variable)
  master_key: os.environ/LITELLM_MASTER_KEY
  # Store model information (like cost data) in the database.
  store_model_in_db: True # Reads value from STORE_MODEL_IN_DB env variable (True/False)
  # Optional: Enable the LiteLLM UI (usually enabled by default)
  # enable_ui: True

# general_settings: General proxy-wide settings (optional)
general_settings: {} # Represents an empty dictionary, required even if no settings are added.
  # Example: Set a default logging level (DEBUG, INFO, WARNING, ERROR)
  # log_level: INFO
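
Because ZimaOS is immutable and may not ship a YAML tool on the host, you can syntax-check the file with a throwaway container before deploying. A minimal sketch, assuming the `mikefarah/yq` image (any YAML parser would do; this is just one option):

```bash
# Parses config.yaml and prints it back; a syntax error makes the command fail
sudo docker run --rm -v /DATA/AppData/litellm/config.yaml:/config.yaml:ro mikefarah/yq eval '.' /config.yaml
```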
- Create Host Configuration Directory:
  - Open a terminal on your ZimaOS host (e.g., via SSH).
  - Create the directory specified in `LITELLM_CONFIG_PATH` if it doesn't exist: `sudo mkdir -p /DATA/AppData/litellm`
- Set Permissions: Since the container runs as root, it can write anywhere, but it's good practice to set ownership if you plan to edit files as your regular user. Use `sudo chown <your_user_id>:<your_group_id> /DATA/AppData/litellm`, or simply ensure your user has write access if needed. (Note: `chown 1000:1000` is common, but check your actual user ID with `id` if unsure.) The container running as root will still function.
- Place config.yaml: Copy the content of the `config.yaml` file provided above into `/DATA/AppData/litellm/config.yaml` on your ZimaOS host. Customize the models as needed.
- Configure Stack in Dockge:
  - Open the Dockge web UI.
  - Create a new stack (e.g., named `litellm-stack`).
  - Paste the `compose.yaml` content into the "Compose" editor.
  - Go to the "Environment Variables" section for the stack.
  - Add each variable from the `.env` section above (e.g., `PGID`, `PUID`, `TZ`, `WEB_PORT`, `LITELLM_CONFIG_PATH`, `POSTGRES_DB`, `POSTGRES_USER`, `POSTGRES_PASSWORD`, `DB_PORT`, `DATABASE_URL`, `LITELLM_MASTER_KEY`, `LITELLM_SALT_KEY`, `OPENAI_API_KEY`, `XAI_API_KEY`, `STORE_MODEL_IN_DB`).
  - Crucially, replace placeholder values (passwords, API keys, master/salt keys) with your actual secrets.
- Deploy the Stack: Click the "Deploy" button in Dockge. Dockge will execute the equivalent of `docker compose up -d`.
- Verify (using Terminal): You can still use the terminal to check logs:
  - Check litellm logs: `sudo docker logs litellm`
  - Check database logs: `sudo docker logs litellm_db` (container names as defined in `compose.yaml`)
- Proxy Endpoint (e.g., to use with Open WebUI): Send OpenAI-compatible API requests to `http://<your-zima-ip>:${WEB_PORT}` (e.g., `http://192.168.1.100:4000`). Remember to include the `Authorization: Bearer <YOUR_LITELLM_MASTER_KEY_or_Virtual_Key>` header (see the curl sketch below).
- Web UI: Access the Admin UI at `http://<your-zima-ip>:${WEB_PORT}/ui`. Log in using the `LITELLM_MASTER_KEY` initially.
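
To confirm the proxy is reachable and accepts your key, a quick test from any machine on your network looks roughly like this; replace the IP, port, key, and model alias with your own values (`/health/liveliness` matches the healthcheck in `compose.yaml`, and `/v1/chat/completions` is the standard OpenAI-compatible route):

```bash
# Liveliness check (exact auth behavior of health endpoints may vary by LiteLLM version)
curl -f http://192.168.1.100:4000/health/liveliness

# OpenAI-compatible chat completion through the proxy, using a model alias from config.yaml
curl http://192.168.1.100:4000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <YOUR_LITELLM_MASTER_KEY_or_Virtual_Key>" \
  -d '{
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Say hello"}]
      }'
```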
- Security - Secrets: Consider Docker Secrets if managing manually outside Dockge, for better security than plain-text env vars. Dockge's environment variable management is generally secure within its context.
- Security - Container User:
  - Current State: The `litellm` container runs as root. This is a security concern.
  - Ideal Solution: Ideally, use or build an image that runs LiteLLM as a non-root user. This would require changes to the Dockerfile and potentially the entrypoint script.
- Image Tag Stability: Use specific version tags in `compose.yaml` instead of `:main-latest` for predictable deployments.
- Resource Limits: Define resource limits in `compose.yaml` using the `deploy.resources` key to prevent resource exhaustion on ZimaOS (see the sketch after this list).
- Database Backups: Regularly back up the `litellm_postgres_data` Docker volume and the `/DATA/AppData/litellm/config.yaml` file (a backup sketch follows below).
- Networking: The default Docker network is used. Define custom networks if needed for isolation or advanced connectivity.
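
A minimal sketch of what resource limits on the `litellm` service could look like; the numbers are placeholders rather than recommendations, and this assumes a Docker Compose version that honors `deploy.resources` outside Swarm mode:

```yaml
services:
  litellm:
    # ... existing configuration ...
    deploy:
      resources:
        limits:
          cpus: "1.0"    # example ceiling, adjust to your hardware
          memory: 1024M  # example ceiling, adjust to your hardware
```

And one possible backup approach, dumping the database from inside the running `litellm_db` container and copying the config file; the output paths are illustrative:

```bash
# Dump the LiteLLM database to a file on the host (assumes the container name and credentials used in this guide)
sudo docker exec litellm_db pg_dump -U llmproxy -d litellm > /DATA/AppData/litellm/litellm-db-backup.sql

# Keep a copy of the LiteLLM config alongside it
sudo cp /DATA/AppData/litellm/config.yaml /DATA/AppData/litellm/config.yaml.bak
```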
- Checking Dockge logs: Use the log viewer within the Dockge UI for the litellm and db services.
- Checking terminal logs: `sudo docker logs litellm`, `sudo docker logs litellm_db`.
- Validating YAML: Copy/paste the `compose.yaml` and `config.yaml` content into an online YAML validator (do not copy passwords etc. into the validator), or check the syntax locally as sketched earlier.
- Check Ports: Ensure `WEB_PORT` (e.g., `4000`) is not blocked or used by another service on ZimaOS.
- Checking config.yaml path/permissions: Ensure `LITELLM_CONFIG_PATH` in the environment variables points to the correct host directory where `config.yaml` exists, and that the file is readable (though root can read most things).
- Checking environment variables (Terminal): Verify inside the container with `sudo docker exec litellm env` (see the sketch after this list).
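
A couple of terminal checks covering the port and environment points above; `grep` and `netstat` are assumed to be available on your ZimaOS build (use `ss` or the Dockge UI if they are not):

```bash
# Confirm the expected variables made it into the container (key values will be shown, so run this privately)
sudo docker exec litellm env | grep -E 'LITELLM|DATABASE_URL|OPENAI|XAI|STORE_MODEL_IN_DB'

# Check whether anything is already listening on the chosen host port (replace 4000 with your WEB_PORT)
sudo netstat -tlnp | grep 4000
```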