@Glidias
Last active August 1, 2025 04:26

IFV2V VACE File-based Task Batch Manager workflow-specific setup info

For an introduction and the list of main parameters to get you started, refer to the earlier section at: https://gist.github.com/Glidias/b15b51598ae643bab9dbc7aa12fe62ed

This doc only covers workflow-specific details beyond the file/JSON-based setup.

Model Sampling Shift preset settings

Above the Sampling workflow group, there is a switch to select between shift presets 1-3 (Low, Medium, High). The default is 2 (Medium), but you can change this default depending on the nature of the items in your batch source folder. The shift increases as the resolution moves closer to 720p (determined from the shorter side of the output dimensions).
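The exact preset values and scaling rule are not listed here, but the behaviour described above can be sketched as follows. The preset shift values and the linear 480p-to-720p scaling are assumptions for illustration, not the workflow's actual numbers:

```python
# Hypothetical sketch: pick a model sampling shift from a Low/Medium/High
# preset, increasing it as the shorter output side approaches 720p.
# SHIFT_PRESETS values and the scaling rule are assumed, not the workflow's.

SHIFT_PRESETS = {1: 3.0, 2: 5.0, 3: 8.0}  # 1=Low, 2=Medium, 3=High (assumed values)

def pick_shift(width: int, height: int, preset: int = 2) -> float:
    base = SHIFT_PRESETS[preset]
    shorter = min(width, height)  # shift is determined off the shorter side
    # Assumed rule: scale linearly from 480p up to 720p on the shorter side.
    factor = min(max((shorter - 480) / (720 - 480), 0.0), 1.0)
    return base * (1.0 + 0.5 * factor)
```

For example, under these assumed numbers a 832x480 output would use the base Medium shift, while a 1280x720 output would use the fully scaled-up value.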

Workflow group: Save Blended WEBP Inpaints

Within the workflow, there is a group that saves a variant of the output video WEBP as an inpaint composite: the output video frames are edge-blended into the originally set up VACE source control video frames through the VACE control video mask (if any). The edges of the inverted control mask are expanded outward and then blurred, with a blur radius of half the expansion amount, to ensure the output is fully and seamlessly blended over the original control video images. The blur radius varies with the resolution of the video (higher resolution means a larger blur radius), following conventions similar to the Model Shift formula.

WEBP images saved in this group are suffixed with --blend, so you get {stem}--blend as the filename prefix.

In most cases, this output variant should not yield artifacts for videos where every frame is inpainted within a localised masked region and no brand-new frames are generated; this lets you regain the original quality of the portions of the source image frames you initially provided for inpainting. Of course, if your output video's saturation differs greatly from the originally provided source images, the results may not work well even with blending at the edges.

Non-VACE Workflow Variants

I2V/FLV2V

Calculates the total frames to use from the respective JSON/PSD/video source definition, or else uses the workflow default preset.

Assumes the source assets have no unwanted transparent regions (these will become black), as there are no VACE mask inpainting options available.

Any mask/extramask assets are ignored entirely.

I2V

For a folder of images, the last source frame image (alphabetical order) is used as the starting image for I2V. For a video, the last frame is used as the starting image. For a regular image, the image itself is used.
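The start-image selection above can be sketched as below; the extension lists are assumptions, and the video branch is only a placeholder for extracting the final frame:

```python
# Hypothetical sketch of I2V start-image selection: last image in
# alphabetical order for a folder, last frame for a video, or the image
# itself for a single image file. Extension lists are assumed.
from pathlib import Path

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}
VIDEO_EXTS = {".mp4", ".webm", ".mov"}

def pick_start_image(source: Path) -> str:
    if source.is_dir():
        images = sorted(p.name for p in source.iterdir()
                        if p.suffix.lower() in IMAGE_EXTS)
        return images[-1]  # last in alphabetical order
    if source.suffix.lower() in VIDEO_EXTS:
        return f"last frame of {source.name}"  # placeholder for frame extraction
    return source.name  # a single regular image is used as-is
```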

Variants: FusionX Ingredients, Standard WAN, etc.

FLF2V

For all source assets, the first source frame is treated as the first frame and the last source frame as the last frame. Any source frames in between are ignored.
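The first/last selection rule is simple enough to state as a one-function sketch (frame names here are illustrative):

```python
# Hypothetical sketch of FLF2V frame selection: only the first and last
# source frames are kept; anything in between is ignored.
def pick_flf2v_frames(frames: list[str]) -> tuple[str, str]:
    if not frames:
        raise ValueError("FLF2V needs at least one source frame")
    return frames[0], frames[-1]
```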

Variants: Standard WAN, etc.
