ComfyUI Docker Quickstart

This guide details how to containerize and run ComfyUI with Docker on Linux and Windows (including WSL2). It covers mounting local directories, I/O performance best practices, and Ollama integration.

Table of Contents

  • Prerequisites
  • Selecting the Appropriate Docker Image
  • Linux: Running ComfyUI in Docker
  • Windows: Running ComfyUI in Docker
  • Optional: Configure WSL2 Memory (Windows Only)
  • Optional: Mounting External Drives (Models/Output)
  • Ollama Integration with ComfyUI

Prerequisites

  • Docker Desktop installed.
  • NVIDIA drivers plus the NVIDIA Container Toolkit on Linux. (On Windows with WSL2, Docker Desktop manages GPU support; no separate toolkit is required.) You can verify GPU access with the sketch below.
  • A full CUDA toolkit installation on the host is unnecessary.
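
To confirm the GPU is reachable from containers before pulling the ComfyUI image, you can run a throwaway CUDA container (a quick sanity check; the nvidia/cuda tag shown is one example and may need to match your installed driver version):

docker run --rm --gpus all nvidia/cuda:12.4.0-base-ubuntu22.04 nvidia-smi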

Selecting the Appropriate Docker Image

Default: NVIDIA GPU image yanwk/comfyui-boot:cu124-slim. For other GPUs, consult the available tags.
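
You can pre-pull the default image so the first launch doesn't stall on the download:

docker pull yanwk/comfyui-boot:cu124-slim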

Linux: Running ComfyUI in Docker

mkdir -p comfy
docker run -it \
  --name comfyui-cu124 \
  --gpus all \
  -p 8188:8188 \
  -v "$(pwd)"/comfy:/root \
  -e CLI_ARGS="" \
  yanwk/comfyui-boot:cu124-slim

Access: http://localhost:8188/

Windows: Running ComfyUI in Docker

New-Item -ItemType Directory -Force -Path comfy
docker run -it `
  --name comfyui-cu124 `
  --gpus all `
  -p 8188:8188 `
  -v "${PWD}\comfy:/root" `
  -e CLI_ARGS="" `
  yanwk/comfyui-boot:cu124-slim

Access: http://localhost:8188/
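
Note: because --name pins the container name, re-running docker run after the first launch fails with a name conflict. On either platform, restart the existing container instead:

docker start -ai comfyui-cu124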

Optional: Configure WSL2 Memory (Windows Only)

Increase available RAM for WSL2:

  1. Edit %USERPROFILE%\.wslconfig:
    [wsl2]
    memory=56GB
  2. Shutdown WSL2:
    wsl --shutdown
  3. Restart Docker Desktop.
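
After the restart, you can confirm the new limit from inside WSL2 (assuming your default distro provides the free utility, as Ubuntu does):

wsl -e free -h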

Optional: Mounting External Drives (Models/Output)

Note: Bind mounts from Windows/NTFS or external drives incur slow I/O due to filesystem translation. For optimal performance, use Docker volumes or store files in the native Linux filesystem.
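
For volume-backed storage, create a named Docker volume and substitute it for the bind mount (a sketch; comfyui_models is a name chosen here for illustration):

docker volume create comfyui_models

Then swap, e.g., -v "D:/AI/models:/root/ComfyUI/models" for -v comfyui_models:/root/ComfyUI/models in the commands below, and copy models into the volume once (for example with docker cp against a running container).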

Windows

docker run -it `
  --name comfyui-cu124 `
  --gpus all `
  -p 8188:8188 `
  -v "${PWD}\comfy:/root" `
  -v "D:/AI/models:/root/ComfyUI/models" `
  -v "D:/AI/output:/root/ComfyUI/output" `
  -e CLI_ARGS="" `
  yanwk/comfyui-boot:cu124-slim

Linux

docker run -it \
  --name comfyui-cu124 \
  --gpus all \
  -p 8188:8188 \
  -v "$(pwd)"/comfy:/root \
  -v "/mnt/AI/models:/root/ComfyUI/models" \
  -v "/mnt/AI/output:/root/ComfyUI/output" \
  -e CLI_ARGS="" \
  yanwk/comfyui-boot:cu124-slim

Ollama Integration with ComfyUI

To drive local LLMs from your workflows, integrate with ComfyUI Ollama. Steps:

  1. Deploy Ollama:
    Run Ollama in Docker using a dedicated volume (e.g., ollama_data).
    Avoid bind mounts on Windows as they yield poor I/O performance.
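
    A minimal deployment sketch using the official ollama/ollama image (the port and volume target follow Ollama's documented defaults):

    docker run -d `
      --name ollama `
      --gpus all `
      -v ollama_data:/root/.ollama `
      -p 11434:11434 `
      ollama/ollama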

  2. Migrate Existing Data (if needed):
    Use a tar archive to move data to the volume:

    docker run --rm `
      -v ollama_data:/volume `
      -v "E:/backup:/backup" `
      alpine sh -c "cd /volume && tar czf /backup/ollama_data_backup.tar.gz ."

    Restore by extracting into the volume.
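
    A matching restore sketch, extracting the same archive back into the volume (the E:/backup path mirrors the backup step above):

    docker run --rm `
      -v ollama_data:/volume `
      -v "E:/backup:/backup" `
      alpine sh -c "cd /volume && tar xzf /backup/ollama_data_backup.tar.gz"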

  3. Launch the ComfyUI Ollama Container:
    Mount the ollama_data volume:

    docker run -it `
      --name comfyui-ollama `
      --gpus all `
      -p 8189:8189 `
      -v "${PWD}\comfy:/root" `
      -v ollama_data:/root/.ollama `
      -e CLI_ARGS="" `
      your-ollama-image
  4. Access & Configure:
    Navigate to http://localhost:8189/ and follow the instructions on the ComfyUI Ollama GitHub repo.

