ComfyUI MCP Server (local image/video generation)

@AndrewAltimit · Last active July 20, 2025

Warning: Requires a powerful GPU!

A containerized ComfyUI setup with MCP (Model Context Protocol) integration for AI-driven image generation workflows. Includes both standard MCP (stdio) access and HTTP API access.

Note: This pairs well with the model training MCP server used for creating checkpoints and LoRAs.

Features

  • Fully Containerized: ComfyUI and MCP server run in Docker containers
  • NVIDIA GPU Support: Full CUDA support for GPU acceleration
  • Persistent Storage: All models, outputs, and custom nodes are persisted via volume mounts
  • MCP Integration: AI models can create and submit workflows via MCP tools
  • HTTP MCP API: Optional HTTP server that exposes the same MCP tools for easy integration with remote agents
  • Model Upload Support: Upload checkpoints and LoRAs with metadata via MCP
  • Template Workflows: Pre-configured Flux and Pony workflows
  • Node Discovery: Query available ComfyUI nodes and their parameters
  • Pre-installed Custom Nodes: Popular nodes like ComfyUI-Manager, Easy Use, GGUF, and more

Prerequisites

  • Docker and Docker Compose
  • NVIDIA GPU with Docker GPU support (nvidia-docker2)
  • At least 20GB free disk space for models

Quick Start

  1. Clone this repository

  2. Start the services:

    # Start ComfyUI and standard MCP server
    docker-compose up -d
    
    # Optional: Start HTTP API server for easier integration
    docker-compose up -d mcp-comfyui-http
  3. Access the services: the ComfyUI web UI at http://localhost:8188 and, if started, the HTTP MCP API at http://localhost:8189

Directory Structure

comfyui-mcp/
├── docker-compose.yml      # Container orchestration
├── Dockerfile             # Container image definition
├── mcp_server.py          # MCP server implementation
├── mcp_http_server.py     # HTTP API wrapper
├── requirements.txt       # Python dependencies
├── flux_default.json      # FLUX workflow template
├── pony_default.json      # Pony workflow template
└── example_usage.py       # Example usage script

Note: All files are at the root level for GitHub Gist compatibility. When deployed:

  • Models are stored in Docker volumes at /comfyui/models/
  • Outputs are saved to /comfyui/output/
  • Inputs are read from /comfyui/input/

MCP Tools Available

Image Generation

generate-image

Generate images with simple parameters, without needing to understand ComfyUI workflows (see the sketch after the parameter list).

Parameters:

  • prompt (required): Text description of what to generate
  • negative_prompt: What to avoid in the image
  • checkpoint: Model to use (default: "flux1-dev-fp8.safetensors")
  • lora: LoRA model to apply
  • lora_strength: LoRA influence strength (0.0-2.0, default: 1.0)
  • width: Image width in pixels (default: 1024)
  • height: Image height in pixels (default: 1024)
  • steps: Number of sampling steps (default: 25)
  • cfg: Classifier-free guidance scale (default: 3.5)
  • sampler: Sampling algorithm (default: "euler_ancestral")
  • seed: Random seed for reproducibility
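
A minimal sketch of calling this tool through the optional HTTP API (assumes the mcp-comfyui-http service is running on the default port 8189; the endpoint and payload shape are described under "HTTP API Access" below):

import requests

# POST the tool name and arguments to the HTTP MCP API
resp = requests.post(
    "http://localhost:8189/mcp/tool",
    json={
        "tool": "generate-image",
        "arguments": {
            "prompt": "a castle on a hill at sunset",
            "negative_prompt": "low quality, blurry",
            "width": 1024,
            "height": 1024,
            "steps": 25,
            "cfg": 3.5,
            "seed": 42,
        },
    },
    timeout=300,  # generation and first-time model loading can be slow
)
resp.raise_for_status()
print(resp.json()["result"])  # text listing the generated image path(s)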

Workflow Management

list-workflows

List all available workflow templates.

get-workflow

Retrieve a specific workflow template by name.

submit-workflow

Submit a custom ComfyUI workflow for execution.

validate-workflow

Check if a workflow is valid before submission.
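
Together these tools support a load-modify-validate-submit loop. A minimal sketch, using a hypothetical mcp_call(tool, arguments) helper (the same placeholder used in later examples in this document) and the node IDs from the bundled flux_default.json template:

import json
from pathlib import Path

# Start from the bundled FLUX template, change the positive prompt,
# then validate and submit it
workflow = json.loads(Path("flux_default.json").read_text())
workflow["4"]["inputs"]["text"] = "a lighthouse in a storm"

await mcp_call("validate-workflow", {"workflow": workflow})
await mcp_call("submit-workflow", {"workflow": workflow})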

Model Management

list-loras

List available LoRA models with optional search filtering.

get-lora-info

Get detailed information about a specific LoRA including metadata.

Metadata Support: The tool checks for metadata in these locations:

  1. {lora_name}.json
  2. {lora_name}.metadata.json
  3. {lora_base_name}.metadata.json
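
A minimal sketch of that lookup order (hypothetical helper; the path assumes the container's default LoRA directory):

import json
from pathlib import Path

LORA_DIR = Path("/comfyui/models/loras")

def find_lora_metadata(lora_name: str):
    lora_path = LORA_DIR / lora_name
    candidates = [
        lora_path.with_suffix(".json"),                # {lora_name}.json
        lora_path.with_suffix(".metadata.json"),       # {lora_name}.metadata.json
        LORA_DIR / f"{lora_path.stem}.metadata.json",  # {lora_base_name}.metadata.json
    ]
    for candidate in candidates:
        if candidate.exists():
            return json.loads(candidate.read_text())
    return None  # no metadata file found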

upload-lora

Upload a LoRA model with optional metadata (for files <100MB).

Parameters:

  • filename (required): Filename for the LoRA (must end with .safetensors)
  • content (required): Base64-encoded file content
  • metadata: Optional metadata object
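
A minimal sketch of a single-call upload, again using the mcp_call placeholder:

import base64
from pathlib import Path

# Read and base64-encode a small (<100MB) LoRA file
lora_bytes = Path("my_lora.safetensors").read_bytes()
await mcp_call("upload-lora", {
    "filename": "my_lora.safetensors",
    "content": base64.b64encode(lora_bytes).decode(),
    "metadata": {"name": "My LoRA", "trigger_words": ["my_style"]},
})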

upload-lora-chunked-start

Start a chunked upload session for large LoRA files.

Parameters:

  • upload_id (required): Unique identifier for the upload session
  • filename (required): Filename for the LoRA (must end with .safetensors)
  • total_size (required): Total file size in bytes
  • metadata: Optional metadata object

upload-lora-chunked-append

Append a chunk to an ongoing upload.

Parameters:

  • upload_id (required): Upload session identifier
  • chunk (required): Base64-encoded chunk content
  • chunk_index (required): Chunk sequence number (starting from 0)

upload-lora-chunked-finish

Finalize a chunked upload and save the LoRA.

Parameters:

  • upload_id (required): Upload session identifier

list-checkpoints

List all available checkpoint models.

upload-checkpoint

Upload a checkpoint model.

Parameters:

  • filename (required): Filename (must end with .safetensors, .ckpt, or .pt)
  • content (required): Base64-encoded file content

ComfyUI Node System

get-comfyui-nodes

List all available ComfyUI node types, optionally filtered by category.

get-node-info

Get detailed information about a specific ComfyUI node type.

System Information

get-generation-status

Check the status of a specific generation by prompt ID.

get-system-stats

Get ComfyUI system statistics and GPU information.

Output Management

list-outputs

List recently generated output images from ComfyUI.

Parameters:

  • max_items: Maximum number of outputs to list (default: 20)

Returns:

  • List of recent outputs with filenames, prompt IDs, and timestamps

Note: Server restarts clear the generation history, but output files remain on disk.

download-output

Download a generated output image from the remote ComfyUI server.

Parameters:

  • filename (required): The filename of the output image
  • subfolder: Optional subfolder path if the image is in a subdirectory
  • save_to: Optional local file path to save the downloaded image

Returns:

  • If save_to is specified: Saves to the MCP server's filesystem (not your local machine)
  • If save_to is not specified: Returns JSON with full base64-encoded image data

Example: Downloading from Remote Server

import base64
import json

# Download from the remote ComfyUI server (mcp_call is a placeholder
# for your MCP client's tool-call function)
response = await mcp_call("download-output", {
    "filename": "generated_image_00001_.png"
})

# Parse JSON response and decode base64
data = json.loads(response)
image_bytes = base64.b64decode(data['base64'])

# Save to local file
with open('local_image.png', 'wb') as f:
    f.write(image_bytes)

HTTP API Access

The MCP server can optionally be accessed via HTTP for easier integration.

Starting the HTTP Server

docker-compose up -d mcp-comfyui-http

HTTP Endpoints

  • GET / - API documentation
  • GET /health - Health check
  • POST /mcp/tool - Execute any MCP tool
  • GET /mcp/tools - List available tools
  • GET /models/loras - List LoRA models
  • GET /models/checkpoints - List checkpoints
  • GET /workflows - List workflow templates

HTTP Examples

Upload a LoRA

curl -X POST http://localhost:8189/mcp/tool \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "upload-lora",
    "arguments": {
      "filename": "my_lora.safetensors",
      "content": "base64_encoded_content_here",
      "metadata": {
        "name": "My LoRA",
        "trigger_words": ["my_style"]
      }
    }
  }'

Generate an Image

curl -X POST http://localhost:8189/mcp/tool \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "generate-image",
    "arguments": {
      "prompt": "a beautiful sunset over mountains",
      "width": 1024,
      "height": 1024
    }
  }'

Download an Output Image

# Download and get base64 data
curl -X POST http://localhost:8189/mcp/tool \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "download-output",
    "arguments": {
      "filename": "generated_image_00001_.png"
    }
  }' | jq -r '.result | fromjson | .base64' | base64 -d > image.png

Model Installation

  1. Checkpoints: Place .safetensors, .pt, or .ckpt files in ./models/checkpoints/
    • Recommended FLUX models: flux1-dev-fp8.safetensors, flux1-schnell.safetensors
  2. LoRAs: Place LoRA files in ./models/loras/
  3. VAEs: Place VAE files in ./models/vae/
  4. Other models: Use appropriate subdirectories in ./models/

FLUX Workflow Requirements

FLUX models require specific workflow configurations:

  • CFG Scale: Must be 1.0 (not the typical 7-8 used for SD models)
  • FluxGuidance Node: Required with guidance value (typically 3.5)
  • Efficient Sampler: Recommended to use KSampler Adv. (Efficient) from efficiency-nodes
  • Sampler: euler_ancestral with normal scheduler works well
  • Steps: 20-25 steps typically sufficient
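
In workflow JSON, the FluxGuidance requirement corresponds to a node like the following (shown here as a Python dict; node IDs match the bundled flux_default.json, where node 4 is the positive CLIPTextEncode):

flux_guidance_node = {
    "class_type": "FluxGuidance",
    "inputs": {
        "conditioning": ["4", 0],  # output of the positive prompt encoder
        "guidance": 3.5,
    },
}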

LoRA Metadata Format

Create a .metadata.json file alongside your LoRA:

{
  "name": "My Style LoRA",
  "description": "Description of the LoRA",
  "trigger_words": ["trigger1", "trigger2"],
  "recommended_settings": {
    "strength": 0.7,
    "cfg_scale": 4.0
  },
  "example_prompts": [
    "example prompt using trigger1",
    "another example with trigger2"
  ]
}

Successful LoRA Integration Example

Successfully tested with AI Toolkit trained LoRAs:

  • Model: 113MB LoRA trained on custom dataset
  • Upload Method: Chunked upload with 256KB chunks (450 chunks total)
  • Upload Time: ~2 minutes for 113MB file
  • Generation: Works perfectly with FLUX workflows using trigger words

Configuration

Environment Variables

  • LOG_LEVEL: Set logging level (default: INFO)
  • COMFYUI_SERVER_URL: Override ComfyUI server URL
  • MCP_HTTP_PORT: HTTP API port (default: 8189)
  • CPU_ONLY: Set to "true" to run in CPU mode

GPU Configuration

The setup uses all available NVIDIA GPUs by default. To limit GPU usage, modify NVIDIA_VISIBLE_DEVICES in docker-compose.yml.

Pre-installed Custom Nodes

The following custom nodes are automatically installed:

  • ComfyUI-Manager: Node management and installation interface
  • Easy Use: Simplified workflow creation
  • GGUF: Support for GGUF model format
  • LoRA Manager: Enhanced LoRA model management
  • Efficiency Nodes: Performance optimized nodes
  • X-Flux: Flux model support and enhancements
  • Use Everywhere: Node connection simplification
  • Comfyroll: Additional creative nodes
  • ControlNet Aux: Preprocessing for ControlNet
  • Art Venture: Artistic style nodes

Building the Container

# Build all services
docker-compose build

Troubleshooting

Container won't start

  • Check NVIDIA Docker runtime: docker run --rm --gpus all nvidia/cuda:12.1.0-base-ubuntu22.04 nvidia-smi
  • Ensure ports 8188/8189 are not already in use

Models not loading

  • Check file permissions in the models directory
  • Ensure model files have correct extensions
  • Verify models appear when using list tools

MCP connection issues

  • Check logs: docker-compose logs mcp-comfyui-stdio
  • Ensure ComfyUI is healthy: docker-compose ps

HTTP API not accessible

  • Ensure mcp-comfyui-http is running: docker-compose ps mcp-comfyui-http
  • Check logs: docker-compose logs mcp-comfyui-http
  • Verify port 8189 is accessible

Large file uploads timing out

  • Files over ~100MB may time out on HTTP upload (base64 encoding adds ~33% overhead)
  • Solution options:
    1. Use chunked upload (recommended) - split large files into smaller chunks
    2. Place files directly in ./models/loras/ or appropriate directory
    3. Use docker cp: docker cp model.safetensors comfyui-server:/comfyui/models/loras/
    4. Increase HTTP timeout settings in mcp_http_server.py

Chunked Upload Example

import base64
import uuid
from pathlib import Path

# Read the LoRA file to upload
file_data = Path("large_model.safetensors").read_bytes()
file_size = len(file_data)

# 1. Start upload session
upload_id = str(uuid.uuid4())
start_response = await mcp_call("upload-lora-chunked-start", {
    "upload_id": upload_id,
    "filename": "large_model.safetensors",
    "total_size": file_size,
    "metadata": {...}
})

# 2. Upload chunks (256KB recommended - larger chunks may fail)
CHUNK_SIZE = 256 * 1024  # 256KB chunks work reliably
for i in range(0, file_size, CHUNK_SIZE):
    chunk_data = file_data[i:i+CHUNK_SIZE]
    await mcp_call("upload-lora-chunked-append", {
        "upload_id": upload_id,
        "chunk": base64.b64encode(chunk_data).decode(),
        "chunk_index": i // CHUNK_SIZE
    })

# 3. Finalize upload
await mcp_call("upload-lora-chunked-finish", {
    "upload_id": upload_id
})

Important Notes:

  • 256KB chunks are recommended (each chunk becomes ~341KB after base64 encoding)
  • Larger chunks (1MB+) may fail with "Request Entity Too Large" errors
  • For a 113MB LoRA, expect ~450 chunks with 256KB size

Workflow Examples

FLUX Workflow Structure

For FLUX models, the proper node connection order is:

CheckpointLoaderSimple → LoraLoader → CLIPTextEncode → FluxGuidance → KSampler (Efficient) → SaveImage

Key settings for FLUX:

  • CFG in sampler: 1.0 (critical - not 7-8 like SD models)
  • FluxGuidance: 3.5 guidance value
  • Resolution: 1024x1024
  • Sampler: euler_ancestral
  • Steps: 20-25

Working with Remote Servers

When using ComfyUI MCP with a remote server:

  1. File Operations:

    • save_to parameter saves on the remote server, not locally
    • Use base64 responses to transfer data to your local machine
    • Large files (100MB+) require chunked upload with small chunks (256KB)
  2. History Management:

    • Server restarts clear generation history
    • Output files persist on disk even after restart
    • Use filenames directly if you know them
  3. Network Timeouts:

    • Increase timeouts for large file operations
    • FLUX model loading can take 30-60 seconds
    • Set the HTTP client timeout to 300s for reliability (see the sketch below)
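
A minimal client-side sketch of that timeout setting (assumes the HTTP API on port 8189; aiohttp is already used elsewhere in this project):

import aiohttp

TIMEOUT = aiohttp.ClientTimeout(total=300)  # generous limit for model loads and large transfers

async def call_tool(tool: str, arguments: dict) -> dict:
    async with aiohttp.ClientSession(timeout=TIMEOUT) as session:
        async with session.post(
            "http://localhost:8189/mcp/tool",
            json={"tool": tool, "arguments": arguments},
        ) as resp:
            resp.raise_for_status()
            return await resp.json()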

Performance Notes

  • GPU Memory: Flux models require ~10-12GB VRAM
  • Generation Time:
    • Flux (1024x1024): 30-60 seconds on RTX 4090
    • SDXL (1024x1024): 10-20 seconds on RTX 4090
  • Queue Management: Multiple requests are queued automatically
  • Model Loading: First generation after restart is slower due to model loading

Stopping the Services

# Stop all services
docker-compose down

# Stop and remove volumes (WARNING: deletes all models and outputs)
docker-compose down -v
.dockerignore

# Exclude large files and directories
models/
output/
input/
temp/
custom_nodes/
logs/
*.safetensors
*.pt
*.ckpt
*.bin
*.pth
# Git files
.git/
.gitignore
# Documentation
README.md
docker-compose.override.yml.example
# OS and IDE files
.DS_Store
Thumbs.db
.vscode/
.idea/
*.swp
*.swo
# Python cache
__pycache__/
*.pyc
*.pyo
*.pyd
.Python
venv/
env/
# Build artifacts
*.log
.dockerignore
Dockerfile
.gitignore

# Model files (too large for gist)
models/
*.safetensors
*.ckpt
*.pt
*.pth
*.bin
# Output files
output/
*.png
*.jpg
*.jpeg
# Temporary files
temp/
*.tmp
*.temp
__pycache__/
*.pyc
*.pyo
# Test files
test_*.py
*_test.py
# Logs
logs/
*.log
# Environment
.env
.env.local
# OS files
.DS_Store
Thumbs.db
# Docker volumes
data/
volumes/
# Empty directories (for gist compatibility)
workflows/
custom_nodes/
input/
docker-compose.yml

services:
  comfyui:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - http_proxy=${http_proxy:-}
        - https_proxy=${https_proxy:-}
        - no_proxy=${no_proxy:-}
      network: host  # Use host network during build for better DNS resolution
    container_name: comfyui-mcp-server
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility,graphics
    ports:
      - "8188:8188"
    volumes:
      # Model directories - persist between container restarts
      - ./models/checkpoints:/comfyui/models/checkpoints
      - ./models/clip:/comfyui/models/clip
      - ./models/clip_vision:/comfyui/models/clip_vision
      - ./models/controlnet:/comfyui/models/controlnet
      - ./models/diffusers:/comfyui/models/diffusers
      - ./models/embeddings:/comfyui/models/embeddings
      - ./models/gligen:/comfyui/models/gligen
      - ./models/hypernetworks:/comfyui/models/hypernetworks
      - ./models/loras:/comfyui/models/loras
      - ./models/style_models:/comfyui/models/style_models
      - ./models/unet:/comfyui/models/unet
      - ./models/upscale_models:/comfyui/models/upscale_models
      - ./models/vae:/comfyui/models/vae
      - ./models/vae_approx:/comfyui/models/vae_approx
      # Output and input directories
      - ./output:/comfyui/output
      - ./input:/comfyui/input
      # Temp directory for processing
      - ./temp:/comfyui/temp
      # Optional: Share host DNS for better resolution (Linux)
      # Uncomment if experiencing DNS issues
      # - /etc/resolv.conf:/etc/resolv.conf:ro
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8188/system_stats"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
    networks:
      - comfyui-network
    command: comfyui

  mcp-comfyui-server:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - http_proxy=${http_proxy:-}
        - https_proxy=${https_proxy:-}
        - no_proxy=${no_proxy:-}
      network: host  # Use host network during build for better DNS resolution
    container_name: mcp-comfyui-stdio
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility,graphics
      - COMFYUI_SERVER_URL=http://comfyui:8188
      - MCP_PROJECT_ROOT=/workspace
      - PYTHONPATH=/workspace/mcp_server
      - LOG_LEVEL=${LOG_LEVEL:-INFO}
    volumes:
      # Share the same model directories (read-write for uploads)
      - ./models/loras:/comfyui/models/loras
      - ./models/checkpoints:/comfyui/models/checkpoints
      - ./models/vae:/comfyui/models/vae
      - ./models/embeddings:/comfyui/models/embeddings
      - ./models/controlnet:/comfyui/models/controlnet
      # Output directory (read-write for saving generated images)
      - ./output:/comfyui/output
      # Workflow templates
      - ./workflows:/workspace/mcp_server/workflows:ro
      # Logs
      - ./logs:/workspace/logs
      # Optional: Share host DNS for better resolution (Linux)
      # Uncomment if experiencing DNS issues
      # - /etc/resolv.conf:/etc/resolv.conf:ro
    depends_on:
      comfyui:
        condition: service_healthy
    stdin_open: true
    tty: true
    networks:
      - comfyui-network
    command: mcp
    healthcheck:
      test: ["CMD", "python3", "-c", "import mcp, websocket; print('OK')"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  # Optional: MCP HTTP API server
  # Provides HTTP access to MCP tools
  mcp-comfyui-http:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        - http_proxy=${http_proxy:-}
        - https_proxy=${https_proxy:-}
        - no_proxy=${no_proxy:-}
      network: host
    container_name: mcp-comfyui-http
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility,graphics
      - COMFYUI_SERVER_URL=http://comfyui:8188
      - MCP_HTTP_PORT=8189
      - LOG_LEVEL=${LOG_LEVEL:-INFO}
    ports:
      - "8189:8189"
    volumes:
      # Share the same model directories (read-write for uploads)
      - ./models/loras:/comfyui/models/loras
      - ./models/checkpoints:/comfyui/models/checkpoints
      - ./models/vae:/comfyui/models/vae
      - ./models/embeddings:/comfyui/models/embeddings
      - ./models/controlnet:/comfyui/models/controlnet
      # Output directory
      - ./output:/comfyui/output
      # Workflow templates
      - ./workflows:/workspace/mcp_server/workflows:ro
    depends_on:
      comfyui:
        condition: service_healthy
    networks:
      - comfyui-network
    command: mcp-http
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8189/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

networks:
  comfyui-network:
    driver: bridge

volumes:
  models-cache:
    driver: local
Dockerfile

FROM nvidia/cuda:12.1.0-base-ubuntu22.04
# Install system dependencies
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
python3.10 \
python3-pip \
git \
wget \
curl \
unzip \
libgl1 \
libglib2.0-0 \
libsm6 \
libxext6 \
libxrender-dev \
libgomp1 \
libgoogle-perftools4 \
libtcmalloc-minimal4 \
ca-certificates \
&& rm -rf /var/lib/apt/lists/*
# Set working directory
WORKDIR /comfyui
# Clone ComfyUI
RUN git clone --depth 1 https://github.com/comfyanonymous/ComfyUI.git . && \
# Check what directories are present
echo "ComfyUI directories:" && ls -la && \
# Check for any web/app/static directories
find . -type d -name "web" -o -name "app" -o -name "static" | head -10 || true
# Install PyTorch first
RUN pip3 install --no-cache-dir \
torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
# Install ComfyUI requirements
RUN pip3 install --upgrade pip && \
pip3 install --no-cache-dir -r requirements.txt
# Install MCP dependencies
RUN pip3 install --no-cache-dir \
mcp \
pydantic \
websocket-client \
aiohttp \
PyYAML
# Install custom nodes
WORKDIR /comfyui/custom_nodes
# Clone and install each custom node with depth 1 for faster cloning
# Clone each separately to avoid failures affecting all
RUN git clone --depth 1 https://github.com/yolain/comfyui-easy-use.git || true
RUN git clone --depth 1 https://github.com/city96/ComfyUI-GGUF.git || true
RUN git clone --depth 1 https://github.com/Suzie1/ComfyUI_lora_manager.git comfyui-lora-manager || true
RUN git clone --depth 1 https://github.com/ltdrdata/ComfyUI-Manager.git comfyui-manager || true
RUN git clone --depth 1 https://github.com/jags111/efficiency-nodes-comfyui.git || true
RUN git clone --depth 1 https://github.com/XLabs-AI/x-flux-comfyui.git || true
RUN git clone --depth 1 https://github.com/chrisgoringe/cg-use-everywhere.git || true
RUN git clone --depth 1 https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes.git || true
RUN git clone --depth 1 https://github.com/Fannovel16/comfyui_controlnet_aux.git || true
RUN git clone --depth 1 https://github.com/sipherxyz/comfyui-art-venture.git || true
# Install dependencies for each custom node that has requirements.txt
RUN for dir in */; do \
if [ -f "$dir/requirements.txt" ]; then \
echo "Installing requirements for $dir" && \
pip3 install --no-cache-dir -r "$dir/requirements.txt" || \
echo "Warning: Failed to install some requirements for $dir"; \
fi \
done
# Some custom nodes need additional setup
# ComfyUI-Manager needs specific permissions
RUN if [ -d "comfyui-manager" ]; then \
chmod -R 755 comfyui-manager; \
fi
# ControlNet Aux might need additional models
RUN if [ -d "comfyui_controlnet_aux" ]; then \
cd comfyui_controlnet_aux && \
# Install any additional dependencies
pip3 install --no-cache-dir opencv-python-headless controlnet-aux || true && \
cd ..; \
fi
WORKDIR /comfyui
# Create directories for models and outputs
RUN mkdir -p \
models/checkpoints \
models/clip \
models/clip_vision \
models/controlnet \
models/diffusers \
models/embeddings \
models/gligen \
models/hypernetworks \
models/loras \
models/style_models \
models/unet \
models/upscale_models \
models/vae \
models/vae_approx \
output \
input \
temp \
custom_nodes \
/workspace/mcp_server
# Copy MCP server files (all in flat structure for gist compatibility)
COPY mcp_server.py /workspace/mcp_server/
COPY mcp_http_server.py /workspace/mcp_server/
COPY flux_default.json /workspace/mcp_server/
COPY pony_default.json /workspace/mcp_server/
COPY requirements.txt /workspace/mcp_server/
# Install additional MCP server dependencies
WORKDIR /workspace/mcp_server
RUN pip3 install --no-cache-dir -r requirements.txt
# Expose ComfyUI port
EXPOSE 8188
# Set environment variables
ENV PYTHONPATH=/comfyui:/workspace/mcp_server
ENV COMFYUI_PATH=/comfyui
ENV NVIDIA_VISIBLE_DEVICES=all
ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility,graphics
# Create entrypoint script
RUN echo '#!/bin/bash\n\
if [ "$1" = "comfyui" ]; then\n\
# Check if CPU_ONLY environment variable is set\n\
if [ "$CPU_ONLY" = "true" ]; then\n\
cd /comfyui && python3 main.py --listen 0.0.0.0 --port 8188 --cpu\n\
else\n\
cd /comfyui && python3 main.py --listen 0.0.0.0 --port 8188\n\
fi\n\
elif [ "$1" = "mcp" ]; then\n\
cd /workspace/mcp_server && python3 mcp_server.py\n\
elif [ "$1" = "mcp-http" ]; then\n\
cd /workspace/mcp_server && python3 mcp_http_server.py\n\
else\n\
exec "$@"\n\
fi' > /entrypoint.sh && chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["comfyui"]
example_usage.py

#!/usr/bin/env python3
"""
Example usage of the ComfyUI MCP server
This demonstrates how to interact with the MCP server programmatically
"""
import json
import asyncio
from pathlib import Path

# Note: In real usage, you would use the MCP client library
# This is a simplified example showing the expected inputs/outputs


async def example_generate_image():
    """Example: Generate an image using the simple interface"""
    request = {
        "tool": "generate-image",
        "arguments": {
            "prompt": "A serene Japanese garden with cherry blossoms, koi pond, highly detailed, anime style",
            "negative_prompt": "low quality, blurry, bad anatomy",
            "checkpoint": "flux1-dev-fp8.safetensors",
            "lora": "anime_style.safetensors",
            "lora_strength": 0.8,
            "width": 1024,
            "height": 1024,
            "steps": 30,
            "cfg": 4.0,
            "sampler": "dpmpp_2m",
            "seed": 12345
        }
    }
    print("Generate Image Request:")
    print(json.dumps(request, indent=2))

    # Expected response format:
    response = {
        "type": "text",
        "text": "Generated 1 image(s):\n- /comfyui/output/comfyui_20240115_120000_00001_.png"
    }
    print("\nExpected Response:")
    print(json.dumps(response, indent=2))


async def example_custom_workflow():
    """Example: Submit a custom workflow"""
    # Load a workflow template
    workflow_path = Path("workflows/flux_default.json")
    with open(workflow_path, 'r') as f:
        workflow = json.load(f)

    # Modify the workflow
    workflow["4"]["inputs"]["text"] = "Hatsune Miku singing on stage, colorful lights"
    workflow["6"]["inputs"]["width"] = 1920
    workflow["6"]["inputs"]["height"] = 1080

    request = {
        "tool": "submit-workflow",
        "arguments": {
            "workflow": workflow
        }
    }
    print("\n\nSubmit Workflow Request:")
    print(json.dumps(request, indent=2))

    # Expected response
    response = {
        "type": "text",
        "text": "Workflow completed. Generated 1 image(s):\n- /comfyui/output/flux_00001_.png"
    }
    print("\nExpected Response:")
    print(json.dumps(response, indent=2))


async def example_list_models():
    """Example: List available models"""
    # List LoRAs
    request = {
        "tool": "list-loras",
        "arguments": {
            "search": "anime"
        }
    }
    print("\n\nList LoRAs Request:")
    print(json.dumps(request, indent=2))

    # List checkpoints
    request = {
        "tool": "list-checkpoints",
        "arguments": {}
    }
    print("\n\nList Checkpoints Request:")
    print(json.dumps(request, indent=2))


async def example_node_info():
    """Example: Get information about ComfyUI nodes"""
    # Get all nodes in a category
    request = {
        "tool": "get-comfyui-nodes",
        "arguments": {
            "category": "sampling"
        }
    }
    print("\n\nGet Nodes by Category Request:")
    print(json.dumps(request, indent=2))

    # Get detailed info about a specific node
    request = {
        "tool": "get-node-info",
        "arguments": {
            "node_type": "KSampler"
        }
    }
    print("\n\nGet Node Info Request:")
    print(json.dumps(request, indent=2))


async def main():
    """Run all examples"""
    print("ComfyUI MCP Server Usage Examples")
    print("=" * 50)
    await example_generate_image()
    await example_custom_workflow()
    await example_list_models()
    await example_node_info()
    print("\n\nNote: These are example requests. To actually execute them,")
    print("you would use an MCP client connected to the running MCP server.")


if __name__ == "__main__":
    asyncio.run(main())
flux_default.json

{
"1": {
"_meta": {
"title": "Load Checkpoint"
},
"class_type": "CheckpointLoaderSimple",
"inputs": {
"ckpt_name": "flux1-dev-fp8.safetensors"
}
},
"2": {
"_meta": {
"title": "Load LoRA"
},
"class_type": "LoraLoader",
"inputs": {
"clip": ["1", 1],
"lora_name": "Inkpunk_Flux.safetensors",
"model": ["1", 0],
"strength_clip": 1.0,
"strength_model": 1.0
}
},
"3": {
"_meta": {
"title": "FluxGuidance"
},
"class_type": "FluxGuidance",
"inputs": {
"conditioning": ["4", 0],
"guidance": 3.5
}
},
"4": {
"_meta": {
"title": "CLIP Text Encode (Positive Prompt)"
},
"class_type": "CLIPTextEncode",
"inputs": {
"clip": ["2", 1],
"text": "A beautiful landscape"
}
},
"5": {
"_meta": {
"title": "KSampler Adv. (Efficient)"
},
"class_type": "KSampler Adv. (Efficient)",
"inputs": {
"add_noise": "enable",
"cfg": 1,
"end_at_step": 10000,
"latent_image": ["6", 0],
"model": ["2", 0],
"negative": ["9", 0],
"noise_seed": 42,
"optional_vae": ["1", 2],
"positive": ["3", 0],
"preview_method": "auto",
"return_with_leftover_noise": "disable",
"sampler_name": "euler_ancestral",
"scheduler": "normal",
"start_at_step": 0,
"steps": 25,
"vae_decode": "true"
}
},
"6": {
"_meta": {
"title": "Empty Latent Image"
},
"class_type": "EmptyLatentImage",
"inputs": {
"batch_size": 1,
"height": 1024,
"width": 1024
}
},
"7": {
"_meta": {
"title": "Save Image"
},
"class_type": "SaveImage",
"inputs": {
"filename_prefix": "flux",
"images": ["5", 5]
}
},
"9": {
"_meta": {
"title": "CLIP Text Encode (Negative Prompt)"
},
"class_type": "CLIPTextEncode",
"inputs": {
"clip": ["1", 1],
"text": ""
}
}
}
mcp_http_server.py

#!/usr/bin/env python3
"""
HTTP wrapper for ComfyUI MCP server
Provides HTTP API access to MCP tools while maintaining stdio compatibility
"""
import os
import json
import asyncio
from aiohttp import web
from pathlib import Path
import logging
from datetime import datetime
# Import the MCP server functions directly
from mcp_server import (
list_lora_models, get_lora_info, list_checkpoints,
list_workflows, load_workflow, create_simple_workflow,
ComfyUIClient, COMFYUI_SERVER_URL, LORA_DIR, CHECKPOINT_DIR,
WORKFLOW_DIR, OUTPUT_DIR
)
# Configure logging
logging.basicConfig(
level=os.getenv('LOG_LEVEL', 'INFO'),
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
# HTTP routes
routes = web.RouteTableDef()
@routes.get('/')
async def index(request):
"""API documentation"""
return web.json_response({
"service": "ComfyUI MCP HTTP API",
"version": "1.0",
"endpoints": {
"GET /": "This documentation",
"GET /health": "Health check",
"POST /mcp/tool": "Execute MCP tool",
"GET /mcp/tools": "List available tools",
"GET /models/loras": "List LoRA models",
"GET /models/checkpoints": "List checkpoint models",
"GET /workflows": "List workflow templates"
},
"example": {
"url": "POST /mcp/tool",
"body": {
"tool": "generate-image",
"arguments": {
"prompt": "a beautiful sunset",
"width": 1024,
"height": 1024
}
}
}
})
@routes.get('/health')
async def health(request):
"""Health check endpoint"""
try:
# Check if ComfyUI is accessible
client = ComfyUIClient(COMFYUI_SERVER_URL)
stats = await client.get_system_stats()
await client.disconnect_websocket()
return web.json_response({
"status": "healthy",
"comfyui": "connected",
"timestamp": datetime.now().isoformat()
})
except Exception as e:
return web.json_response({
"status": "unhealthy",
"error": str(e),
"timestamp": datetime.now().isoformat()
}, status=503)
@routes.get('/mcp/tools')
async def list_tools(request):
"""List available MCP tools"""
tools = [
{
"name": "generate-image",
"description": "Generate an image using ComfyUI"
},
{
"name": "list-workflows",
"description": "List available workflow templates"
},
{
"name": "get-workflow",
"description": "Get a specific workflow template"
},
{
"name": "submit-workflow",
"description": "Submit a custom workflow"
},
{
"name": "list-loras",
"description": "List available LoRA models"
},
{
"name": "get-lora-info",
"description": "Get LoRA model information"
},
{
"name": "upload-lora",
"description": "Upload a LoRA model"
},
{
"name": "list-checkpoints",
"description": "List available checkpoints"
},
{
"name": "upload-checkpoint",
"description": "Upload a checkpoint model"
},
{
"name": "get-comfyui-nodes",
"description": "List ComfyUI node types"
},
{
"name": "get-node-info",
"description": "Get node information"
},
{
"name": "validate-workflow",
"description": "Validate a workflow"
},
{
"name": "get-generation-status",
"description": "Check generation status"
},
{
"name": "get-system-stats",
"description": "Get system statistics"
}
]
return web.json_response({"tools": tools})
@routes.post('/mcp/tool')
async def execute_tool(request):
"""Execute an MCP tool via HTTP"""
try:
data = await request.json()
tool_name = data.get('tool')
arguments = data.get('arguments', {})
logger.info(f"Executing tool: {tool_name}")
# Import and call the handler directly
from mcp_server import handle_call_tool
# Call with the correct signature: name and arguments
result = await handle_call_tool(tool_name, arguments)
# Extract the text content from the result
if result and len(result) > 0:
text_content = result[0].text
return web.json_response({
"success": True,
"result": text_content
})
else:
return web.json_response({
"success": False,
"error": "No result returned"
}, status=500)
except Exception as e:
logger.error(f"Error executing tool: {e}")
return web.json_response({
"success": False,
"error": str(e)
}, status=500)
@routes.get('/models/loras')
async def get_loras(request):
"""List LoRA models"""
search = request.query.get('search', '')
loras = list_lora_models(search)
return web.json_response({"loras": loras})
@routes.get('/models/loras/{name}')
async def get_lora_details(request):
"""Get LoRA information"""
name = request.match_info['name']
info = get_lora_info(name)
if info:
return web.json_response(info)
else:
return web.json_response({"error": "LoRA not found"}, status=404)
@routes.get('/models/checkpoints')
async def get_checkpoints(request):
"""List checkpoint models"""
checkpoints = list_checkpoints()
return web.json_response({"checkpoints": checkpoints})
@routes.get('/workflows')
async def get_workflows(request):
"""List workflow templates"""
workflows = list_workflows()
return web.json_response({"workflows": workflows})
@routes.get('/workflows/{name}')
async def get_workflow_details(request):
"""Get a specific workflow"""
name = request.match_info['name']
workflow = load_workflow(name)
if workflow:
return web.json_response(workflow)
else:
return web.json_response({"error": "Workflow not found"}, status=404)
async def init_app():
"""Initialize the HTTP application"""
app = web.Application()
app.add_routes(routes)
# Add CORS middleware for browser access
async def cors_middleware(app, handler):
async def middleware_handler(request):
if request.method == 'OPTIONS':
response = web.Response()
else:
response = await handler(request)
response.headers['Access-Control-Allow-Origin'] = '*'
response.headers['Access-Control-Allow-Methods'] = 'GET, POST, OPTIONS'
response.headers['Access-Control-Allow-Headers'] = 'Content-Type'
return response
return middleware_handler
app.middlewares.append(cors_middleware)
return app
async def main():
"""Run the HTTP server"""
app = await init_app()
# Get port from environment or use default
port = int(os.getenv('MCP_HTTP_PORT', '8189'))
logger.info(f"Starting ComfyUI MCP HTTP server on port {port}")
runner = web.AppRunner(app)
await runner.setup()
site = web.TCPSite(runner, '0.0.0.0', port)
await site.start()
logger.info(f"Server running at http://0.0.0.0:{port}")
logger.info(f"API documentation at http://0.0.0.0:{port}/")
# Keep the server running
await asyncio.Event().wait()
if __name__ == '__main__':
asyncio.run(main())
mcp_server.py

#!/usr/bin/env python3
import os
import json
import uuid
import asyncio
import aiohttp
import websockets
import base64
from pathlib import Path
from typing import List, Dict, Any, Optional
import logging
from datetime import datetime
from mcp.server.models import InitializationOptions
import mcp.types as types
from mcp.server import NotificationOptions, Server
from pydantic import AnyUrl
import mcp.server.stdio
# Configure logging
logging.basicConfig(
level=os.getenv('LOG_LEVEL', 'INFO'),
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)
# Server configuration
COMFYUI_SERVER_URL = os.getenv('COMFYUI_SERVER_URL', 'http://localhost:8188')
WORKFLOW_DIR = Path(__file__).parent # Workflows are in the same directory for gist compatibility
OUTPUT_DIR = Path("/comfyui/output")
LORA_DIR = Path("/comfyui/models/loras")
CHECKPOINT_DIR = Path("/comfyui/models/checkpoints")
# MCP Server instance
server = Server("comfyui-mcp")
# Temporary storage for chunked uploads
chunked_uploads = {}
# Available tools
MCP_TOOLS = [
'generate-image',
'list-workflows',
'get-workflow',
'submit-workflow',
'list-loras',
'get-lora-info',
'upload-lora',
'upload-lora-chunked-start',
'upload-lora-chunked-append',
'upload-lora-chunked-finish',
'list-checkpoints',
'upload-checkpoint',
'get-comfyui-nodes',
'get-node-info',
'validate-workflow',
'get-generation-status',
'get-system-stats',
'list-outputs',
'download-output'
]
class ComfyUIClient:
"""Client for interacting with ComfyUI API"""
def __init__(self, server_url: str):
self.server_url = server_url.rstrip('/')
self.client_id = str(uuid.uuid4())
self.ws = None
async def connect_websocket(self):
"""Connect to ComfyUI websocket"""
ws_url = f"{self.server_url.replace('http', 'ws')}/ws?clientId={self.client_id}"
self.ws = await websockets.connect(ws_url)
logger.info(f"Connected to ComfyUI websocket: {ws_url}")
async def disconnect_websocket(self):
"""Disconnect from ComfyUI websocket"""
if self.ws:
await self.ws.close()
self.ws = None
async def queue_prompt(self, workflow: dict) -> str:
"""Queue a workflow prompt and return the prompt ID"""
async with aiohttp.ClientSession() as session:
data = {
"prompt": workflow,
"client_id": self.client_id
}
async with session.post(f"{self.server_url}/prompt", json=data) as resp:
result = await resp.json()
return result['prompt_id']
async def get_history(self, prompt_id: str) -> dict:
"""Get generation history for a prompt ID"""
async with aiohttp.ClientSession() as session:
async with session.get(f"{self.server_url}/history/{prompt_id}") as resp:
return await resp.json()
async def get_object_info(self) -> dict:
"""Get all available ComfyUI nodes and their info"""
async with aiohttp.ClientSession() as session:
async with session.get(f"{self.server_url}/object_info") as resp:
return await resp.json()
async def get_system_stats(self) -> dict:
"""Get ComfyUI system statistics"""
async with aiohttp.ClientSession() as session:
async with session.get(f"{self.server_url}/system_stats") as resp:
return await resp.json()
async def get_all_history(self, max_items: int = 100) -> dict:
"""Get all generation history (limited to max_items most recent)"""
async with aiohttp.ClientSession() as session:
async with session.get(f"{self.server_url}/history") as resp:
history = await resp.json()
# Sort by timestamp and limit
sorted_items = sorted(history.items(),
key=lambda x: x[1].get('_timestamp', 0),
reverse=True)[:max_items]
return dict(sorted_items)
async def download_output(self, filename: str, subfolder: str = "", output_type: str = "output") -> bytes:
"""Download an output image from ComfyUI"""
async with aiohttp.ClientSession() as session:
params = {
"filename": filename,
"type": output_type
}
if subfolder:
params["subfolder"] = subfolder
async with session.get(f"{self.server_url}/view", params=params) as resp:
if resp.status == 200:
return await resp.read()
else:
raise Exception(f"Failed to download {filename}: HTTP {resp.status}")
async def wait_for_completion(self, prompt_id: str) -> List[str]:
"""Wait for a prompt to complete and return output image paths"""
if not self.ws:
await self.connect_websocket()
output_images = []
while True:
message = await self.ws.recv()
if isinstance(message, str):
data = json.loads(message)
if data['type'] == 'executing':
if data['data']['node'] is None and data['data']['prompt_id'] == prompt_id:
# Execution completed
break
# Get the output images from history
history = await self.get_history(prompt_id)
if prompt_id in history:
for node_id, node_output in history[prompt_id]['outputs'].items():
if 'images' in node_output:
for image in node_output['images']:
filename = image['filename']
subfolder = image.get('subfolder', '')
if subfolder:
image_path = OUTPUT_DIR / subfolder / filename
else:
image_path = OUTPUT_DIR / filename
output_images.append(str(image_path))
return output_images
# Workflow management functions
def list_workflows() -> List[str]:
"""List available workflow templates"""
workflows = []
if WORKFLOW_DIR.exists():
for file in WORKFLOW_DIR.glob("*.json"):
workflows.append(file.stem)
return sorted(workflows)
def load_workflow(name: str) -> Optional[dict]:
"""Load a workflow template by name"""
workflow_path = WORKFLOW_DIR / f"{name}.json"
if workflow_path.exists():
with open(workflow_path, 'r') as f:
return json.load(f)
return None
def list_lora_models(search_term: Optional[str] = None) -> List[str]:
"""List available LoRA models"""
if not LORA_DIR.exists():
return []
lora_files = []
for file in LORA_DIR.glob("*.*"):
if file.suffix.lower() in ['.safetensors', '.pt', '.ckpt', '.bin']:
filename = file.name
if search_term is None or search_term.lower() in filename.lower():
lora_files.append(filename)
return sorted(lora_files)
def get_lora_info(lora_name: str) -> Optional[dict]:
"""Get LoRA model information including metadata if available"""
lora_path = LORA_DIR / lora_name
if not lora_path.exists():
return None
info = {
"name": lora_name,
"path": str(lora_path),
"size": lora_path.stat().st_size,
"modified": datetime.fromtimestamp(lora_path.stat().st_mtime).isoformat()
}
# Check for metadata files (try multiple naming conventions)
base_name = lora_path.stem
metadata_paths = [
lora_path.with_suffix('.json'), # Inkpunk_Flux.json
lora_path.with_suffix('.metadata.json'), # Inkpunk_Flux.metadata.json
LORA_DIR / f"{base_name}.metadata.json" # Inkpunk_Flux.metadata.json
]
for metadata_path in metadata_paths:
if metadata_path.exists():
try:
with open(metadata_path, 'r') as f:
info["metadata"] = json.load(f)
info["metadata_source"] = str(metadata_path.name)
break
except Exception as e:
logger.warning(f"Failed to load metadata from {metadata_path}: {e}")
return info
def list_checkpoints() -> List[str]:
"""List available checkpoint models"""
if not CHECKPOINT_DIR.exists():
return []
checkpoint_files = []
for file in CHECKPOINT_DIR.glob("*.*"):
if file.suffix.lower() in ['.safetensors', '.pt', '.ckpt']:
checkpoint_files.append(file.name)
return sorted(checkpoint_files)
def create_simple_workflow(
prompt: str,
negative_prompt: str = "",
checkpoint: str = "flux1-dev-fp8.safetensors",
lora: Optional[str] = None,
lora_strength: float = 1.0,
width: int = 1024,
height: int = 1024,
batch_size: int = 1,
steps: int = 25,
cfg: float = 3.5,
sampler: str = "euler_ancestral",
seed: Optional[int] = None
) -> dict:
"""Create a simple text-to-image workflow"""
if seed is None:
seed = int.from_bytes(os.urandom(6), 'big')
workflow = {
"1": {
"class_type": "CheckpointLoaderSimple",
"inputs": {
"ckpt_name": checkpoint
}
},
"2": {
"class_type": "CLIPTextEncode",
"inputs": {
"text": prompt,
"clip": ["1", 1]
}
},
"3": {
"class_type": "CLIPTextEncode",
"inputs": {
"text": negative_prompt,
"clip": ["1", 1]
}
},
"4": {
"class_type": "EmptyLatentImage",
"inputs": {
"width": width,
"height": height,
"batch_size": batch_size
}
},
"5": {
"class_type": "KSampler",
"inputs": {
"seed": seed,
"steps": steps,
"cfg": cfg,
"sampler_name": sampler,
"scheduler": "normal",
"denoise": 1.0,
"model": ["1", 0],
"positive": ["2", 0],
"negative": ["3", 0],
"latent_image": ["4", 0]
}
},
"6": {
"class_type": "VAEDecode",
"inputs": {
"samples": ["5", 0],
"vae": ["1", 2]
}
},
"7": {
"class_type": "SaveImage",
"inputs": {
"filename_prefix": f"comfyui_{datetime.now().strftime('%Y%m%d_%H%M%S')}",
"images": ["6", 0]
}
}
}
# Add LoRA if specified
if lora:
workflow["8"] = {
"class_type": "LoraLoader",
"inputs": {
"lora_name": lora,
"strength_model": lora_strength,
"strength_clip": lora_strength,
"model": ["1", 0],
"clip": ["1", 1]
}
}
# Update connections to use LoRA outputs
workflow["2"]["inputs"]["clip"] = ["8", 1]
workflow["3"]["inputs"]["clip"] = ["8", 1]
workflow["5"]["inputs"]["model"] = ["8", 0]
return workflow
# MCP Tool handlers
@server.list_tools()
async def handle_list_tools() -> List[types.Tool]:
"""List available MCP tools"""
return [
types.Tool(
name="generate-image",
description="Generate an image using ComfyUI with a simple workflow",
inputSchema={
"type": "object",
"properties": {
"prompt": {"type": "string", "description": "The positive prompt for image generation"},
"negative_prompt": {"type": "string", "description": "The negative prompt (what to avoid)"},
"checkpoint": {"type": "string", "description": "Checkpoint model to use (use list-checkpoints to see available)"},
"lora": {"type": "string", "description": "LoRA model to use (optional, use list-loras to see available)"},
"lora_strength": {"type": "number", "description": "LoRA strength (0.0-2.0, default 1.0)"},
"width": {"type": "integer", "description": "Image width (default 1024)"},
"height": {"type": "integer", "description": "Image height (default 1024)"},
"batch_size": {"type": "integer", "description": "Number of images to generate (default 1)"},
"steps": {"type": "integer", "description": "Number of sampling steps (default 25)"},
"cfg": {"type": "number", "description": "CFG scale (default 3.5)"},
"sampler": {"type": "string", "description": "Sampler name (default euler_ancestral)"},
"seed": {"type": "integer", "description": "Random seed (optional)"}
},
"required": ["prompt"]
}
),
types.Tool(
name="list-workflows",
description="List available workflow templates",
inputSchema={
"type": "object",
"properties": {},
"required": []
}
),
types.Tool(
name="get-workflow",
description="Get a workflow template by name",
inputSchema={
"type": "object",
"properties": {
"name": {"type": "string", "description": "Name of the workflow template"}
},
"required": ["name"]
}
),
types.Tool(
name="submit-workflow",
description="Submit a custom workflow to ComfyUI",
inputSchema={
"type": "object",
"properties": {
"workflow": {"type": "object", "description": "The workflow JSON object"}
},
"required": ["workflow"]
}
),
types.Tool(
name="list-loras",
description="List available LoRA models",
inputSchema={
"type": "object",
"properties": {
"search": {"type": "string", "description": "Search term to filter LoRAs (optional)"}
},
"required": []
}
),
types.Tool(
name="get-lora-info",
description="Get information about a specific LoRA model",
inputSchema={
"type": "object",
"properties": {
"name": {"type": "string", "description": "LoRA filename"}
},
"required": ["name"]
}
),
types.Tool(
name="upload-lora",
description="Upload a LoRA model to ComfyUI",
inputSchema={
"type": "object",
"properties": {
"filename": {"type": "string", "description": "Filename for the LoRA (must end with .safetensors)"},
"content": {"type": "string", "description": "Base64-encoded file content"},
"metadata": {"type": "object", "description": "Optional metadata for the LoRA"}
},
"required": ["filename", "content"]
}
),
types.Tool(
name="upload-lora-chunked-start",
description="Start a chunked upload for a large LoRA model",
inputSchema={
"type": "object",
"properties": {
"upload_id": {"type": "string", "description": "Unique identifier for this upload session"},
"filename": {"type": "string", "description": "Filename for the LoRA (must end with .safetensors)"},
"total_size": {"type": "integer", "description": "Total file size in bytes"},
"metadata": {"type": "object", "description": "Optional metadata for the LoRA"}
},
"required": ["upload_id", "filename", "total_size"]
}
),
types.Tool(
name="upload-lora-chunked-append",
description="Append a chunk to an ongoing LoRA upload",
inputSchema={
"type": "object",
"properties": {
"upload_id": {"type": "string", "description": "Upload session identifier"},
"chunk": {"type": "string", "description": "Base64-encoded chunk content"},
"chunk_index": {"type": "integer", "description": "Chunk sequence number (starting from 0)"}
},
"required": ["upload_id", "chunk", "chunk_index"]
}
),
types.Tool(
name="upload-lora-chunked-finish",
description="Finalize a chunked LoRA upload",
inputSchema={
"type": "object",
"properties": {
"upload_id": {"type": "string", "description": "Upload session identifier"}
},
"required": ["upload_id"]
}
),
types.Tool(
name="list-checkpoints",
description="List available checkpoint models",
inputSchema={
"type": "object",
"properties": {},
"required": []
}
),
types.Tool(
name="upload-checkpoint",
description="Upload a checkpoint model to ComfyUI",
inputSchema={
"type": "object",
"properties": {
"filename": {"type": "string", "description": "Filename for the checkpoint (must end with .safetensors, .ckpt, or .pt)"},
"content": {"type": "string", "description": "Base64-encoded file content"}
},
"required": ["filename", "content"]
}
),
types.Tool(
name="get-comfyui-nodes",
description="Get all available ComfyUI node types",
inputSchema={
"type": "object",
"properties": {
"category": {"type": "string", "description": "Filter by category (optional)"}
},
"required": []
}
),
types.Tool(
name="get-node-info",
description="Get detailed information about a specific ComfyUI node",
inputSchema={
"type": "object",
"properties": {
"node_type": {"type": "string", "description": "The node class type"}
},
"required": ["node_type"]
}
),
types.Tool(
name="validate-workflow",
description="Validate a workflow before submission",
inputSchema={
"type": "object",
"properties": {
"workflow": {"type": "object", "description": "The workflow to validate"}
},
"required": ["workflow"]
}
),
types.Tool(
name="get-generation-status",
description="Get the status of a generation by prompt ID",
inputSchema={
"type": "object",
"properties": {
"prompt_id": {"type": "string", "description": "The prompt ID returned from generation"}
},
"required": ["prompt_id"]
}
),
types.Tool(
name="get-system-stats",
description="Get ComfyUI system statistics",
inputSchema={
"type": "object",
"properties": {},
"required": []
}
),
types.Tool(
name="list-outputs",
description="List recently generated output images",
inputSchema={
"type": "object",
"properties": {
"max_items": {"type": "integer", "description": "Maximum number of outputs to list (default 20)"}
},
"required": []
}
),
types.Tool(
name="download-output",
description="Download a generated output image as base64",
inputSchema={
"type": "object",
"properties": {
"filename": {"type": "string", "description": "The filename of the output image"},
"subfolder": {"type": "string", "description": "Optional subfolder path"},
"save_to": {"type": "string", "description": "Optional local file path to save the downloaded image"}
},
"required": ["filename"]
}
)
]
@server.call_tool()
async def handle_call_tool(
name: str,
arguments: dict | None
) -> List[types.TextContent | types.ImageContent | types.EmbeddedResource]:
"""Handle tool execution requests"""
if name not in MCP_TOOLS:
raise ValueError(f"Unknown tool: {name}")
client = ComfyUIClient(COMFYUI_SERVER_URL)
try:
if name == "generate-image":
# Create workflow from parameters
workflow = create_simple_workflow(
prompt=arguments.get("prompt", ""),
negative_prompt=arguments.get("negative_prompt", ""),
checkpoint=arguments.get("checkpoint", "flux1-dev-fp8.safetensors"),
lora=arguments.get("lora"),
lora_strength=arguments.get("lora_strength", 1.0),
width=arguments.get("width", 1024),
height=arguments.get("height", 1024),
batch_size=arguments.get("batch_size", 1),
steps=arguments.get("steps", 25),
cfg=arguments.get("cfg", 3.5),
sampler=arguments.get("sampler", "euler_ancestral"),
seed=arguments.get("seed")
)
# Submit workflow and wait for completion
prompt_id = await client.queue_prompt(workflow)
logger.info(f"Queued prompt: {prompt_id}")
output_images = await client.wait_for_completion(prompt_id)
if output_images:
result = f"Generated {len(output_images)} image(s):\n"
result += "\n".join([f"- {img}" for img in output_images])
else:
result = "Generation completed but no images were saved."
return [types.TextContent(type="text", text=result)]
elif name == "list-workflows":
workflows = list_workflows()
if workflows:
result = "Available workflow templates:\n"
result += "\n".join([f"- {w}" for w in workflows])
else:
result = "No workflow templates found."
return [types.TextContent(type="text", text=result)]
elif name == "get-workflow":
workflow_name = arguments.get("name")
workflow = load_workflow(workflow_name)
if workflow:
result = f"Workflow '{workflow_name}':\n\n{json.dumps(workflow, indent=2)}"
else:
result = f"Workflow '{workflow_name}' not found."
return [types.TextContent(type="text", text=result)]
elif name == "submit-workflow":
workflow = arguments.get("workflow")
if not isinstance(workflow, dict):
return [types.TextContent(type="text", text="Invalid workflow format. Must be a JSON object.")]
prompt_id = await client.queue_prompt(workflow)
output_images = await client.wait_for_completion(prompt_id)
if output_images:
result = f"Workflow completed. Generated {len(output_images)} image(s):\n"
result += "\n".join([f"- {img}" for img in output_images])
else:
result = "Workflow completed but no images were saved."
return [types.TextContent(type="text", text=result)]
elif name == "list-loras":
search_term = arguments.get("search")
loras = list_lora_models(search_term)
if loras:
result = "Available LoRA models"
if search_term:
result += f" matching '{search_term}'"
result += ":\n"
result += "\n".join([f"- {lora}" for lora in loras])
else:
result = "No LoRA models found."
return [types.TextContent(type="text", text=result)]
elif name == "get-lora-info":
lora_name = arguments.get("name")
info = get_lora_info(lora_name)
if info:
result = f"LoRA information for '{lora_name}':\n\n{json.dumps(info, indent=2)}"
else:
result = f"LoRA '{lora_name}' not found."
return [types.TextContent(type="text", text=result)]
elif name == "upload-lora":
filename = arguments.get("filename")
content = arguments.get("content")
metadata = arguments.get("metadata")
# Validate filename
if not filename.endswith('.safetensors'):
return [types.TextContent(type="text", text="Error: LoRA filename must end with .safetensors")]
# Ensure filename is safe (no path traversal)
safe_filename = Path(filename).name
lora_path = LORA_DIR / safe_filename
try:
# Decode and save the file
file_data = base64.b64decode(content)
with open(lora_path, 'wb') as f:
f.write(file_data)
# Save metadata if provided
if metadata:
metadata_path = lora_path.with_suffix('.metadata.json')
with open(metadata_path, 'w') as f:
json.dump(metadata, f, indent=2)
result = f"Successfully uploaded LoRA: {safe_filename}\nMetadata saved to: {metadata_path.name}"
else:
result = f"Successfully uploaded LoRA: {safe_filename}"
# Verify it appears in the list
loras = list_lora_models()
if safe_filename in loras:
result += f"\n✓ Verified: LoRA now appears in model list"
return [types.TextContent(type="text", text=result)]
except Exception as e:
# Clean up on failure
if lora_path.exists():
lora_path.unlink()
return [types.TextContent(type="text", text=f"Error uploading LoRA: {str(e)}")]
elif name == "upload-lora-chunked-start":
upload_id = arguments.get("upload_id")
filename = arguments.get("filename")
total_size = arguments.get("total_size")
metadata = arguments.get("metadata")
# Validate filename
if not filename.endswith('.safetensors'):
return [types.TextContent(type="text", text="Error: LoRA filename must end with .safetensors")]
# Ensure filename is safe (no path traversal)
safe_filename = Path(filename).name
# Initialize upload session
chunked_uploads[upload_id] = {
"filename": safe_filename,
"total_size": total_size,
"metadata": metadata,
"chunks": {},
"received_size": 0,
"start_time": datetime.now()
}
return [types.TextContent(type="text", text=f"Chunked upload started for '{safe_filename}'\nUpload ID: {upload_id}\nExpected size: {total_size} bytes")]
elif name == "upload-lora-chunked-append":
upload_id = arguments.get("upload_id")
chunk = arguments.get("chunk")
chunk_index = arguments.get("chunk_index")
# Check if upload session exists
if upload_id not in chunked_uploads:
return [types.TextContent(type="text", text=f"Error: Upload session '{upload_id}' not found")]
session = chunked_uploads[upload_id]
# Decode chunk
try:
chunk_data = base64.b64decode(chunk)
session["chunks"][chunk_index] = chunk_data
session["received_size"] += len(chunk_data)
# Calculate progress
progress = (session["received_size"] / session["total_size"]) * 100
return [types.TextContent(
type="text",
text=f"Chunk {chunk_index} received ({len(chunk_data)} bytes)\nProgress: {progress:.1f}% ({session['received_size']}/{session['total_size']} bytes)"
)]
except Exception as e:
return [types.TextContent(type="text", text=f"Error processing chunk: {str(e)}")]
elif name == "upload-lora-chunked-finish":
upload_id = arguments.get("upload_id")
# Check if upload session exists
if upload_id not in chunked_uploads:
return [types.TextContent(type="text", text=f"Error: Upload session '{upload_id}' not found")]
session = chunked_uploads[upload_id]
filename = session["filename"]
metadata = session.get("metadata")
try:
# Combine all chunks in order
chunk_indices = sorted(session["chunks"].keys())
combined_data = b""
for idx in chunk_indices:
combined_data += session["chunks"][idx]
# Verify size
if len(combined_data) != session["total_size"]:
return [types.TextContent(
type="text",
text=f"Error: Size mismatch. Expected {session['total_size']} bytes, got {len(combined_data)} bytes"
)]
# Save the file
lora_path = LORA_DIR / filename
with open(lora_path, 'wb') as f:
f.write(combined_data)
# Save metadata if provided
if metadata:
metadata_path = lora_path.with_suffix('.metadata.json')
with open(metadata_path, 'w') as f:
json.dump(metadata, f, indent=2)
# Calculate upload time
upload_time = (datetime.now() - session["start_time"]).total_seconds()
# Clean up session
del chunked_uploads[upload_id]
# Verify it appears in the list
loras = list_lora_models()
verified = filename in loras
result = f"Successfully uploaded LoRA: {filename}\n"
result += f"Upload time: {upload_time:.1f} seconds\n"
result += f"File size: {len(combined_data) / (1024*1024):.1f} MB"
if metadata:
result += f"\nMetadata saved"
if verified:
result += f"\n✓ Verified: LoRA now appears in model list"
return [types.TextContent(type="text", text=result)]
except Exception as e:
# Clean up on failure
if upload_id in chunked_uploads:
del chunked_uploads[upload_id]
return [types.TextContent(type="text", text=f"Error finalizing upload: {str(e)}")]
elif name == "list-checkpoints":
checkpoints = list_checkpoints()
if checkpoints:
result = "Available checkpoint models:\n"
result += "\n".join([f"- {ckpt}" for ckpt in checkpoints])
else:
result = "No checkpoint models found."
return [types.TextContent(type="text", text=result)]
elif name == "upload-checkpoint":
filename = arguments.get("filename")
content = arguments.get("content")
# Validate filename
valid_extensions = ['.safetensors', '.ckpt', '.pt']
if not any(filename.endswith(ext) for ext in valid_extensions):
return [types.TextContent(type="text", text=f"Error: Checkpoint filename must end with one of: {', '.join(valid_extensions)}")]
# Ensure filename is safe (no path traversal)
safe_filename = Path(filename).name
checkpoint_path = CHECKPOINT_DIR / safe_filename
try:
# Decode and save the file
file_data = base64.b64decode(content)
with open(checkpoint_path, 'wb') as f:
f.write(file_data)
result = f"Successfully uploaded checkpoint: {safe_filename}"
# Verify it appears in the list
checkpoints = list_checkpoints()
if safe_filename in checkpoints:
result += f"\n✓ Verified: Checkpoint now appears in model list"
return [types.TextContent(type="text", text=result)]
except Exception as e:
# Clean up on failure
if checkpoint_path.exists():
checkpoint_path.unlink()
return [types.TextContent(type="text", text=f"Error uploading checkpoint: {str(e)}")]
elif name == "get-comfyui-nodes":
node_info = await client.get_object_info()
category_filter = arguments.get("category", "").lower()
nodes_by_category = {}
for node_type, info in node_info.items():
category = info.get("category", "Uncategorized")
if category_filter and category_filter not in category.lower():
continue
if category not in nodes_by_category:
nodes_by_category[category] = []
nodes_by_category[category].append(node_type)
result = "Available ComfyUI nodes:\n\n"
for category, nodes in sorted(nodes_by_category.items()):
result += f"{category}:\n"
for node in sorted(nodes):
result += f" - {node}\n"
result += "\n"
return [types.TextContent(type="text", text=result)]
elif name == "get-node-info":
node_type = arguments.get("node_type")
all_nodes = await client.get_object_info()
if node_type in all_nodes:
info = all_nodes[node_type]
result = f"Node information for '{node_type}':\n\n{json.dumps(info, indent=2)}"
else:
result = f"Node type '{node_type}' not found."
return [types.TextContent(type="text", text=result)]
elif name == "validate-workflow":
workflow = arguments.get("workflow")
if not isinstance(workflow, dict):
return [types.TextContent(type="text", text="Invalid workflow format. Must be a JSON object.")]
# Basic validation - check node structure
errors = []
for node_id, node_data in workflow.items():
if not isinstance(node_data, dict):
errors.append(f"Node {node_id}: Invalid node data format")
continue
if "class_type" not in node_data:
errors.append(f"Node {node_id}: Missing 'class_type'")
if "inputs" not in node_data:
errors.append(f"Node {node_id}: Missing 'inputs'")
if errors:
result = "Workflow validation failed:\n"
result += "\n".join([f"- {error}" for error in errors])
else:
result = "Workflow validation passed."
return [types.TextContent(type="text", text=result)]
elif name == "get-generation-status":
prompt_id = arguments.get("prompt_id")
history = await client.get_history(prompt_id)
if prompt_id in history:
status = history[prompt_id]
result = f"Generation status for prompt {prompt_id}:\n\n{json.dumps(status, indent=2)}"
else:
result = f"No history found for prompt ID: {prompt_id}"
return [types.TextContent(type="text", text=result)]
elif name == "get-system-stats":
stats = await client.get_system_stats()
result = f"ComfyUI System Statistics:\n\n{json.dumps(stats, indent=2)}"
return [types.TextContent(type="text", text=result)]
elif name == "list-outputs":
max_items = arguments.get("max_items", 20) if arguments else 20
history = await client.get_all_history(max_items)
outputs = []
for prompt_id, data in history.items():
if 'outputs' in data:
timestamp = data.get('_timestamp', 'unknown')
for node_id, node_output in data['outputs'].items():
if 'images' in node_output:
for image in node_output['images']:
outputs.append({
'prompt_id': prompt_id,
'filename': image['filename'],
'subfolder': image.get('subfolder', ''),
'timestamp': timestamp
})
result = f"Recent output images ({len(outputs)} found):\n\n"
for idx, output in enumerate(outputs):
result += f"{idx + 1}. {output['filename']}\n"
result += f" Prompt ID: {output['prompt_id']}\n"
if output['subfolder']:
result += f" Subfolder: {output['subfolder']}\n"
result += f" Timestamp: {output['timestamp']}\n\n"
if not outputs:
result = "No output images found in recent history."
return [types.TextContent(type="text", text=result)]
elif name == "download-output":
filename = arguments.get("filename")
subfolder = arguments.get("subfolder", "")
save_to = arguments.get("save_to")
if not filename:
return [types.TextContent(type="text", text="Error: filename is required")]
try:
# Download the image
image_data = await client.download_output(filename, subfolder)
# If save_to is specified, save locally
if save_to:
save_path = Path(save_to)
save_path.parent.mkdir(parents=True, exist_ok=True)
with open(save_path, 'wb') as f:
f.write(image_data)
result = f"Downloaded {filename} and saved to {save_path}"
else:
# Return as base64
b64_data = base64.b64encode(image_data).decode()
result = {
"filename": filename,
"size_bytes": len(image_data),
"base64": b64_data
}
return [types.TextContent(type="text", text=json.dumps(result))]
except Exception as e:
return [types.TextContent(type="text", text=f"Error downloading {filename}: {str(e)}")]
except Exception as e:
logger.error(f"Error executing tool {name}: {e}")
return [types.TextContent(type="text", text=f"Error: {str(e)}")]
finally:
await client.disconnect_websocket()
async def main():
"""Run the MCP server"""
logger.info("Starting ComfyUI MCP Server")
# No need to create directories for gist - all files are in root
# Run the server using stdin/stdout streams
async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
await server.run(
read_stream,
write_stream,
InitializationOptions(
server_name="comfyui-mcp",
server_version="1.0.0",
capabilities=server.get_capabilities(
notification_options=NotificationOptions(),
experimental_capabilities={},
),
),
)
if __name__ == "__main__":
asyncio.run(main())
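
The handlers above implement a three-step chunked upload protocol for large LoRA files: upload-lora-chunked-start opens a session, upload-lora-chunked-append streams base64-encoded chunks, and upload-lora-chunked-finish reassembles the chunks, checks the total size, and writes the .safetensors file plus its optional metadata. The sketch below shows one way a client could drive that protocol. It is a minimal sketch, not part of the gist: the call_tool callable stands in for whatever MCP tool invocation you already use (stdio or the HTTP API), and the 1 MB chunk size is an arbitrary assumption; the tool names and argument keys do match the handlers above.

import base64
import uuid
from pathlib import Path

CHUNK_SIZE = 1024 * 1024  # assumed 1 MB per chunk; pick whatever your transport handles comfortably

def upload_lora_chunked(call_tool, lora_file, metadata=None):
    """Drive the upload-lora-chunked-* tools. `call_tool(name, arguments)` is a placeholder
    for your MCP client call (stdio or HTTP); it is not defined in this gist."""
    data = Path(lora_file).read_bytes()
    upload_id = str(uuid.uuid4())

    # 1. Open the upload session with the final filename and expected total size.
    call_tool("upload-lora-chunked-start", {
        "upload_id": upload_id,
        "filename": Path(lora_file).name,
        "total_size": len(data),
        "metadata": metadata,
    })

    # 2. Send base64-encoded chunks with their index so the server can reassemble them in order.
    for index, offset in enumerate(range(0, len(data), CHUNK_SIZE)):
        chunk = base64.b64encode(data[offset:offset + CHUNK_SIZE]).decode()
        call_tool("upload-lora-chunked-append", {
            "upload_id": upload_id,
            "chunk": chunk,
            "chunk_index": index,
        })

    # 3. Finalize: the server verifies the total size, writes the .safetensors file,
    #    and stores the optional metadata JSON next to it.
    call_tool("upload-lora-chunked-finish", {"upload_id": upload_id})

The JSON immediately below is the pony_default.json workflow template from the directory listing; a short sketch of how a client might use it follows after the template.
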
{
"1": {
"_meta": {
"title": "Load Checkpoint"
},
"class_type": "CheckpointLoaderSimple",
"inputs": {
"ckpt_name": "ponyDiffusionV6XL.safetensors"
}
},
"2": {
"_meta": {
"title": "Load LoRA"
},
"class_type": "LoraLoader",
"inputs": {
"clip": ["1", 1],
"lora_name": "anime_style.safetensors",
"model": ["1", 0],
"strength_clip": 1.0,
"strength_model": 1.0
}
},
"4": {
"_meta": {
"title": "CLIP Text Encode (Positive Prompt)"
},
"class_type": "CLIPTextEncode",
"inputs": {
"clip": ["2", 1],
"text": "soda, anime style, high quality"
}
},
"5": {
"_meta": {
"title": "KSampler Adv. (Efficient)"
},
"class_type": "KSampler Adv. (Efficient)",
"inputs": {
"add_noise": "enable",
"cfg": 7,
"end_at_step": 10000,
"latent_image": ["6", 0],
"model": ["2", 0],
"negative": ["9", 0],
"noise_seed": 42,
"optional_vae": ["1", 2],
"positive": ["4", 0],
"preview_method": "auto",
"return_with_leftover_noise": "disable",
"sampler_name": "euler_ancestral",
"scheduler": "normal",
"start_at_step": 0,
"steps": 25,
"vae_decode": "true"
}
},
"6": {
"_meta": {
"title": "Empty Latent Image"
},
"class_type": "EmptyLatentImage",
"inputs": {
"batch_size": 1,
"height": 1024,
"width": 1024
}
},
"7": {
"_meta": {
"title": "Save Image"
},
"class_type": "SaveImage",
"inputs": {
"filename_prefix": "pony",
"images": ["5", 5]
}
},
"9": {
"_meta": {
"title": "CLIP Text Encode (Negative Prompt)"
},
"class_type": "CLIPTextEncode",
"inputs": {
"clip": ["1", 1],
"text": "low quality, bad anatomy"
}
}
}
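
The template above is pony_default.json: a CheckpointLoaderSimple and LoraLoader feed the positive and negative CLIPTextEncode nodes and a KSampler Adv. (Efficient) node, whose decoded image goes to SaveImage. A client does not have to build such a graph from scratch; it can load the template, patch the handful of inputs that change per run, and hand the result to the validate-workflow and submit-workflow tools. The node IDs and input keys in the sketch below come straight from the template; call_tool is again just a placeholder for your MCP client call, and the "workflow" argument name for submit-workflow is assumed to mirror validate-workflow's.

import json

def run_pony_template(call_tool, prompt, negative="low quality, bad anatomy", seed=42):
    # Load the template shipped with this gist and patch the per-run fields.
    with open("pony_default.json") as f:
        workflow = json.load(f)

    workflow["4"]["inputs"]["text"] = prompt       # positive CLIPTextEncode node
    workflow["9"]["inputs"]["text"] = negative     # negative CLIPTextEncode node
    workflow["5"]["inputs"]["noise_seed"] = seed   # KSampler Adv. (Efficient) node

    # Optionally sanity-check the graph first, then queue it for execution.
    call_tool("validate-workflow", {"workflow": workflow})
    return call_tool("submit-workflow", {"workflow": workflow})  # argument name assumed

The requirements.txt below lists the Python dependencies for the MCP server itself.
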
# MCP Server requirements
mcp>=0.1.0
pydantic>=2.0.0
websocket-client>=1.6.0
aiohttp>=3.9.0
websockets>=11.0
PyYAML>=6.0
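
If you want to drive the stdio server from a local Python environment rather than through the containers, a minimal client looks roughly like the sketch below. It assumes a recent mcp Python SDK; the client-side API has changed between SDK versions, so treat this as a starting point rather than a drop-in script, and adjust the launch command to however you run mcp_server.py. It also shows what the call_tool placeholder in the earlier sketches could map to.

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Launch mcp_server.py as a stdio subprocess; adjust the command for your setup
    # (for example, exec into the running container instead of invoking the script directly).
    params = StdioServerParameters(command="python", args=["mcp_server.py"])
    async with stdio_client(params) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()
            result = await session.call_tool(
                "generate-image",
                {"prompt": "a watercolor fox in a misty forest", "steps": 25},
            )
            print(result)

if __name__ == "__main__":
    asyncio.run(main())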