This guide shows you how to run the Hunyuan Video text‑to‑video (T2V) workflow on your M4 MacBook (128 GB RAM with a top‑end GPU). We’ll use the pre‑built dmg file to install ComfyUI (avoiding the need to compile C++ code), set up the workflow in “fast video” mode (reducing inference steps from ~20 to 6–8), and cover downloading and organizing the required model files.
You will install ComfyUI using its pre‑built dmg installer, then configure the Hunyuan Video workflow from the available collection. We focus on using T2V (text‑to‑video) with the “fast video” mode to achieve faster iteration times. With your M4 MacBook’s ample resources, you can experiment confidently while monitoring system performance.
Before starting, make sure you have:
- **Hardware:** Your M4 MacBook with 128 GB RAM and the top-end GPU (Apple Silicon).
- **macOS:** A recent macOS version.
- **Internet & disk space:** A reliable connection and ample disk space (the model files total roughly 16 GB or more).
- **Basic tools:**
  - A web browser
  - (Optional) Git, if you wish to inspect or modify workflow files
Using the pre‑built dmg file is the easiest way to install ComfyUI on macOS.
- **Download the dmg file:**
  - Visit the ComfyUI GitHub Releases page or the official website to locate the latest macOS build.
  - Look for the pre-built dmg installer (it should be clearly labeled for macOS).
- **Install ComfyUI:**
  - Open the downloaded dmg file and follow the standard macOS installation steps (drag the ComfyUI icon into your Applications folder).
  - This build is pre-bundled with all necessary dependencies.
- **Launch ComfyUI:**
  - Open the ComfyUI application from your Applications folder.
  - It will start a local server (typically at http://localhost:8188).
  - Verify that the interface loads properly in your web browser.
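Before moving on, you can also confirm the local server is answering from the command line. A minimal sketch in Python, assuming the default port 8188 mentioned above:

```python
# Quick health check for the local ComfyUI server (default port 8188 assumed).
import urllib.request
import urllib.error

def comfyui_up(url: str = "http://localhost:8188") -> bool:
    """Return True if the ComfyUI web interface answers at `url`."""
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print("ComfyUI is up" if comfyui_up() else "ComfyUI is not reachable")
```

If this reports the server as unreachable, re-launch the app and check whether it chose a different port.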
To run Hunyuan Video T2V, you need the appropriate workflow JSON file:
- **Obtain the workflow:**
  - You can browse available workflows in the ComfyUI workflows collection.
  - For the Hunyuan Video workflow specifically, download the JSON file (e.g., `Hunyuan_video_macbook.json`) from the ComfyUI-HunyuanVideoWrapper repository.
- **Load the workflow:**
  - In the ComfyUI web interface, drag and drop the downloaded JSON file onto the canvas.
  - The workflow’s nodes and connections should appear automatically.
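Before loading a workflow, it can help to list which node types it references, so that missing custom nodes surface early rather than as errors on the canvas. A minimal sketch, assuming exported workflows use a top-level `"nodes"` array whose entries carry a `"type"` field:

```python
import json

def node_types(workflow_json: str) -> list[str]:
    """Return the sorted, de-duplicated node types a workflow references."""
    data = json.loads(workflow_json)
    return sorted({node["type"] for node in data.get("nodes", [])})

# Tiny inline example standing in for a real workflow file:
sample = '{"nodes": [{"id": 1, "type": "VAELoader"}, {"id": 2, "type": "KSampler"}]}'
print(node_types(sample))  # ['KSampler', 'VAELoader']
```

Any type in this list that your ComfyUI install does not provide will need a custom node package (see the troubleshooting notes below).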
The Hunyuan Video T2V workflow requires several large model files. Download and place them into the appropriate directories within your ComfyUI installation.
- **Diffusion (transformer) model:**
  - File: `hunyuan_video_t2v_720p_bf16.safetensors`
  - Recommendation: use the FP8 version (FP16 has known issues; FP32 is an alternative if needed).
  - Destination: the `ComfyUI/models/diffusion_models` folder.
- **VAE model:**
  - File: `hunyuan_video_vae_bf16.safetensors`
  - Destination: the `ComfyUI/models/vae` folder.
- **Text encoder models:**
  - Files: `clip_l.safetensors` and `llava_llama3_fp8_scaled.safetensors`
  - Destination: the `ComfyUI/models/text_encoders` folder.

Tip: For the latest information and file versions, check the HunyuanVideo model card on Hugging Face.
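You can create the three destination folders up front so each download lands in the right place. A minimal sketch (the ComfyUI path is a placeholder; the dmg build may keep its models elsewhere, so adjust the path to your install):

```python
import tempfile
from pathlib import Path

def make_model_dirs(comfyui_dir: Path) -> None:
    """Create the model subfolders the Hunyuan Video workflow expects."""
    for sub in ("diffusion_models", "vae", "text_encoders"):
        (comfyui_dir / "models" / sub).mkdir(parents=True, exist_ok=True)

# Demonstrated on a temporary directory; point it at your real ComfyUI folder.
demo = Path(tempfile.mkdtemp())
make_model_dirs(demo)
print(sorted(p.name for p in (demo / "models").iterdir()))
# ['diffusion_models', 'text_encoders', 'vae']
```

`exist_ok=True` makes the call safe to repeat on an existing install.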
The “fast video” mode reduces the number of inference steps from ~20 to 6–8, greatly cutting processing time.
- **Inspect the workflow in ComfyUI:**
  - In the canvas, locate and verify the following nodes:
    - **Hunyuan Model Loader node:** set it to use the FP8 version and point it to `hunyuan_video_t2v_720p_bf16.safetensors`.
    - **VAE Loader node:** ensure it points to `hunyuan_video_vae_bf16.safetensors`.
    - **Text encoder nodes:** these should automatically reference `clip_l.safetensors` and `llava_llama3_fp8_scaled.safetensors` on first run.
- **Enable fast video mode:**
  - Within the workflow (or via a dedicated “fast video” node), switch the inference mode to fast, reducing the number of steps to 6–8.
  - Check the node settings or the repository README (in ComfyUI-HunyuanVideoWrapper) for specific instructions.
- **Adjust resolution and frame count (optional):**
  - Although your MacBook has ample resources, you can reduce the resolution or number of frames if you encounter memory swapping.
Once everything is configured:
- **Prepare your system:**
  - Close unnecessary applications (e.g., Google Chrome) to free up resources.
- **Queue the generation:**
  - In the ComfyUI interface, verify that all nodes and model paths are correct.
  - Click the “Queue” or “Run” button to start the generation process.
- **Monitor performance:**
  - Use macOS Activity Monitor to track RAM and GPU usage.
  - Previous demos on an M3 Pro with 36 GB RAM showed a 6-step run taking about 26 minutes (~1580 seconds) per video clip. Your M4 MacBook should deliver comparable or better performance.
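Those demo numbers also let you ballpark the cost of the standard workflow. A quick back-of-the-envelope estimate, assuming generation time scales roughly linearly with step count:

```python
# Rough per-clip estimate from the M3 Pro demo: ~1580 s for a 6-step run.
demo_seconds = 1580
demo_steps = 6
seconds_per_step = demo_seconds / demo_steps        # ~263 s per step

standard_steps = 20
standard_seconds = seconds_per_step * standard_steps
print(f"fast (6 steps): ~{demo_seconds / 60:.0f} min, "
      f"standard (20 steps): ~{standard_seconds / 60:.0f} min")
# fast (6 steps): ~26 min, standard (20 steps): ~88 min
```

In other words, the ~20-step standard mode costs roughly three times as much wall-clock time per clip, which is why fast video mode is the better default for iteration.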
- **Missing or outdated node errors:**
  - If you see errors such as “Missing Node Type VHS_VideoCombine,” download the missing node package from its repository and run `pip install -r requirements.txt` inside it.
  - Alternatively, update ComfyUI to the latest version to benefit from native node support.
- **Module errors (e.g., “No module named ‘triton’”):**
  - Such errors are typically resolved by updating to the latest native nodes in ComfyUI. If you are using legacy nodes from older repositories, consider updating them.
- **MPS issues on Apple Silicon:**
  - Some users have encountered issues with Metal Performance Shaders (MPS). Ensure you’re running the latest nightly version of PyTorch for macOS, as described in Apple’s “Accelerated PyTorch training on Mac” guide.
  - Consult the ComfyUI Discord server or GitHub discussions for the most recent workarounds.
- **ComfyUI official:**
  - ComfyUI Website
  - ComfyUI GitHub Releases
- **Workflow repositories:**
  - ComfyUI Workflows Collection
  - ComfyUI-HunyuanVideoWrapper
- **HunyuanVideo model card:**
  - HunyuanVideo on Hugging Face
- **ComfyUI community channels:**
  - ComfyUI Discord Server
  - Matrix Space for ComfyUI
- **Installation:** Use the pre-built dmg file for an easy, hassle-free installation on macOS.
- **Workflow settings:** For rapid iteration and reduced processing time, enable the “Hunyuan Fast Video” mode (6–8 steps). Switch to the standard ~20-step workflow only if you require extra detail and are willing to wait longer.
- **Software updates:** Regularly update ComfyUI and any custom nodes to benefit from fixes (including MPS fixes) and performance enhancements.
- **Resource monitoring:** Even with 128 GB RAM, video generation is resource-intensive. Adjust settings like resolution or frame count if you notice excessive memory swapping.
By following these detailed steps, you should be well equipped to run the Hunyuan Video T2V workflow on your M4 MacBook. Enjoy generating AI-powered video content, and feel free to reach out via Discord or GitHub if you encounter any issues!
Happy generating!