This guide details containerizing and running ComfyUI with Docker on Linux and Windows (including WSL2). It covers mounting local directories, best practices for I/O performance, and integrating Ollama with ComfyUI.
- Pre-requisites
- Selecting the Appropriate Docker Image
- Linux: Running ComfyUI in Docker
- Windows: Running ComfyUI in Docker
- Optional: Configure WSL2 Memory (Windows Only)
- Optional: Mounting External Drives (Models/Output)
- Ollama Integration with ComfyUI
## Pre-requisites

- Docker Desktop installed.
- NVIDIA drivers plus the NVIDIA Container Toolkit on Linux. (On Windows with WSL2, Docker Desktop manages GPU support; no separate toolkit is required.)
- A full CUDA toolkit installation is unnecessary: the container image bundles the CUDA runtime, so only the host driver is needed.
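Before pulling the ComfyUI image, it is worth confirming that containers can see the GPU at all. A quick check, assuming any recent `nvidia/cuda` base image (the exact tag below is just an example):

```bash
# Should print the familiar nvidia-smi GPU table if passthrough works
docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi
```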
## Selecting the Appropriate Docker Image

Default: the NVIDIA GPU image `yanwk/comfyui-boot:cu124-slim`. For other GPUs, consult the image's available tags.
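The tags can be browsed on Docker Hub, or queried from the terminal via the public Docker Hub API (this sketch assumes `curl` and `jq` are installed):

```bash
# List available tags for yanwk/comfyui-boot
curl -s "https://hub.docker.com/v2/repositories/yanwk/comfyui-boot/tags?page_size=100" \
  | jq -r '.results[].name'
```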
## Linux: Running ComfyUI in Docker

```bash
mkdir -p comfy
docker run -it \
  --name comfyui-cu124 \
  --gpus all \
  -p 8188:8188 \
  -v "$(pwd)"/comfy:/root \
  -e CLI_ARGS="" \
  yanwk/comfyui-boot:cu124-slim
```

Access: http://localhost:8188/
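Because the container is named, later sessions should reuse it rather than running `docker run` again:

```bash
docker stop comfyui-cu124      # stop the running container
docker start -a comfyui-cu124  # start it again, attached to the console
docker logs -f comfyui-cu124   # follow the server logs
```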
## Windows: Running ComfyUI in Docker

```powershell
New-Item -ItemType Directory -Force -Path comfy
docker run -it `
  --name comfyui-cu124 `
  --gpus all `
  -p 8188:8188 `
  -v "${PWD}\comfy:/root" `
  -e CLI_ARGS="" `
  yanwk/comfyui-boot:cu124-slim
```

Access: http://localhost:8188/
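The `CLI_ARGS` variable is passed through to ComfyUI's launcher, so ComfyUI flags can be set there. For example, to recreate the container in low-VRAM mode (`--lowvram` is a standard ComfyUI launch flag; the rest mirrors the command above):

```powershell
docker rm comfyui-cu124   # remove the old container so the name can be reused
docker run -it `
  --name comfyui-cu124 `
  --gpus all `
  -p 8188:8188 `
  -v "${PWD}\comfy:/root" `
  -e CLI_ARGS="--lowvram" `
  yanwk/comfyui-boot:cu124-slim
```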
## Optional: Configure WSL2 Memory (Windows Only)

Increase the RAM available to WSL2:

1. Edit `%USERPROFILE%\.wslconfig`:

   ```ini
   [wsl2]
   memory=56GB
   ```

2. Shut down WSL2:

   ```powershell
   wsl --shutdown
   ```

3. Restart Docker Desktop.
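To confirm the new limit took effect, check total memory from inside the WSL distro (this assumes a distro where `free` is available, e.g. Ubuntu):

```powershell
# Total memory reported inside WSL should reflect the .wslconfig limit
wsl -e free -h
```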
## Optional: Mounting External Drives (Models/Output)

Note: Bind mounts from Windows/NTFS or external drives incur slow I/O due to filesystem translation. For optimal performance, use Docker volumes or store files in the native Linux filesystem.
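For example, a named volume for the models directory sidesteps the NTFS translation layer entirely (the volume name here is illustrative):

```bash
# Create a named volume, then mount it in place of a bind mount:
#   -v comfyui_models:/root/ComfyUI/models
docker volume create comfyui_models
```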
Windows (PowerShell):

```powershell
docker run -it `
  --name comfyui-cu124 `
  --gpus all `
  -p 8188:8188 `
  -v "${PWD}\comfy:/root" `
  -v "D:/AI/models:/root/ComfyUI/models" `
  -v "D:/AI/output:/root/ComfyUI/output" `
  -e CLI_ARGS="" `
  yanwk/comfyui-boot:cu124-slim
```
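A quick way to confirm the mounts landed where ComfyUI expects them:

```powershell
# List the mounted models directory inside the running container
docker exec comfyui-cu124 ls /root/ComfyUI/models
```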
Linux:

```bash
docker run -it \
  --name comfyui-cu124 \
  --gpus all \
  -p 8188:8188 \
  -v "$(pwd)"/comfy:/root \
  -v "/mnt/AI/models:/root/ComfyUI/models" \
  -v "/mnt/AI/output:/root/ComfyUI/output" \
  -e CLI_ARGS="" \
  yanwk/comfyui-boot:cu124-slim
```
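This assumes the external drive is already mounted at /mnt/AI. If it is not, a sketch (the device name /dev/sdb1 is a placeholder; check `lsblk` for yours):

```bash
sudo mkdir -p /mnt/AI         # create the mount point
sudo mount /dev/sdb1 /mnt/AI  # mount the external drive on it
```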
## Ollama Integration with ComfyUI

To drive workflows with locally hosted LLMs, integrate with ComfyUI Ollama. Steps:
1. **Deploy Ollama:** Run Ollama in Docker using a dedicated volume (e.g., `ollama_data`). Avoid bind mounts on Windows, as they yield poor I/O performance; see the sketch below.
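   A minimal sketch, assuming the official `ollama/ollama` image and its default port 11434:

   ```powershell
   # Run Ollama detached, persisting models in the ollama_data volume
   docker run -d `
     --name ollama `
     --gpus all `
     -p 11434:11434 `
     -v ollama_data:/root/.ollama `
     ollama/ollama
   ```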
2. **Migrate Existing Data (if needed):** Use a tar archive to move data between the host and the volume. To back up the volume's contents:

   ```powershell
   docker run --rm `
     -v ollama_data:/volume `
     -v "E:/backup:/backup" `
     alpine sh -c "cd /volume && tar czf /backup/ollama_data_backup.tar.gz ."
   ```

   Restore by extracting into the volume.
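   The matching restore is the same command with `tar xzf` extracting into the volume:

   ```powershell
   docker run --rm `
     -v ollama_data:/volume `
     -v "E:/backup:/backup" `
     alpine sh -c "cd /volume && tar xzf /backup/ollama_data_backup.tar.gz"
   ```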
3. **Launch the ComfyUI Ollama Container:** Mount the `ollama_data` volume:

   ```powershell
   docker run -it `
     --name comfyui-ollama `
     --gpus all `
     -p 8189:8189 `
     -v "${PWD}\comfy:/root" `
     -v ollama_data:/root/.ollama `
     -e CLI_ARGS="" `
     your-ollama-image
   ```
4. **Access & Configure:** Navigate to http://localhost:8189/ and follow the instructions on the ComfyUI Ollama GitHub repo.
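With the Ollama container from step 1 running, models can be pulled directly through it and land in the `ollama_data` volume (`llama3` is just an example model name):

```powershell
# Download a model into the ollama_data volume
docker exec -it ollama ollama pull llama3
```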