VACE
How to do batch inference?
I wonder whether the model can be modified to support batch inference, for example taking N prompts for one source video (and mask) and generating N output videos.
We recommend that users modify vace_wan_inference.py themselves to support sequential batch inference for multiple data entries.
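One minimal way to get sequential batching without touching the model code is to loop over the prompts and invoke the inference script once per prompt. The sketch below assumes the script path and the flag names (`--src_video`, `--src_mask`, `--prompt`, `--save_dir`) match the script's argparse setup; check `vace_wan_inference.py` and rename them if they differ.

```python
# Sequential batch inference sketch: one source video + mask, N prompts.
# The script path and the --src_video / --src_mask / --prompt / --save_dir
# flags are assumptions; verify them against vace_wan_inference.py's
# argument parser before running.
import subprocess
from pathlib import Path

prompts = [
    "a cat running on the beach at sunset",
    "a dog running on the beach at sunset",
    "a fox running through a snowy forest",
]

src_video = "assets/videos/source.mp4"     # same source video for every prompt
src_mask = "assets/masks/source_mask.mp4"  # same mask for every prompt
out_root = Path("results/batch")

for i, prompt in enumerate(prompts):
    save_dir = out_root / f"prompt_{i:02d}"
    save_dir.mkdir(parents=True, exist_ok=True)
    cmd = [
        "python", "vace/vace_wan_inference.py",
        "--src_video", src_video,
        "--src_mask", src_mask,
        "--prompt", prompt,
        "--save_dir", str(save_dir),
    ]
    print("Running:", " ".join(cmd))
    # check=True stops the batch if one generation fails
    subprocess.run(cmd, check=True)
```

Note that this reloads the model weights on every call; to amortize the loading cost, you would instead modify `vace_wan_inference.py` itself so it loops over the prompt list after the model is loaded once, as suggested above.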
You can also set up batches in ComfyUI, and VACE supports extension workflows there. See these example workflows: https://openart.ai/workflows/rocky533/wan-21-vace-extensions-image-or-ref-to-video/wCOmLvDk0XE7ChtodMop and https://openart.ai/workflows/rocky533/wan-21-vace-batched-video-to-video/01Ouu12wBn6XQtH24WqQ