This guide shows you how to run LTX-2 video generation (text-to-video and image-to-video) using vLLM-Omni as the inference backend and ComfyUI as the frontend.
LTX-2 is a video generation model from Lightricks that supports both text-to-video (T2V) and image-to-video (I2V) generation, with accompanying audio synthesis.
Resources:
- LTX-2 GitHub: https://github.com/Lightricks/LTX-2 (Python stack for inference and LoRA training, plus model links)