Add support for Wan 2.1 VACE model

Open sunnyyangyangyang opened this issue 7 months ago • 35 comments

Feature Idea

https://huggingface.co/Wan-AI/Wan2.1-VACE-14B

Existing Solutions

No response

Other

No response

sunnyyangyangyang avatar May 14 '25 23:05 sunnyyangyangyang

@comfyanonymous This model is really cool, and I'd like to run it with GGUF on a native ComfyUI setup.

sunnyyangyangyang avatar May 15 '25 20:05 sunnyyangyangyang

I'll do a GGUF conversion then. Which GGUF do you want exactly? I'll upload that one first

wsbagnsv1 avatar May 16 '25 13:05 wsbagnsv1
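
For context on what a conversion like this involves: a GGUF file is essentially the model's tensors repacked into GGML's container format. Below is a minimal sketch using the `gguf` and `safetensors` Python packages; the file names and the "wan" architecture string are placeholders, and real conversions for ComfyUI normally go through the ComfyUI-GGUF tooling rather than hand-rolled code like this.

```python
# Minimal sketch: dump a safetensors checkpoint into a GGUF container.
# Paths and the "wan" arch string are illustrative placeholders.
import torch
from safetensors.torch import load_file
from gguf import GGUFWriter

state_dict = load_file("wan2.1-vace-14b.safetensors")  # hypothetical file

writer = GGUFWriter("wan2.1-vace-14b-F16.gguf", arch="wan")
for name, tensor in state_dict.items():
    # Write everything as fp16 first; quantizing to Q8_0, Q6_K, etc.
    # is a separate pass done with llama.cpp-style tooling afterwards.
    writer.add_tensor(name, tensor.to(torch.float16).numpy())

writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()
```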

I'll do a GGUF conversion then. Which GGUF do you want exactly? I'll upload that one first

The 14B version, thanks

sunnyyangyangyang avatar May 16 '25 13:05 sunnyyangyangyang

I'll do a GGUF conversion then. Which GGUF do you want exactly? I'll upload that one first

The 14B version, thanks

I know you want the 14B; I mean which GGUF quant xD

wsbagnsv1 avatar May 16 '25 14:05 wsbagnsv1

Q8, please

sunnyyangyangyang avatar May 16 '25 14:05 sunnyyangyangyang

Currently converting; I'll upload the Q8 first then (;

wsbagnsv1 avatar May 16 '25 14:05 wsbagnsv1
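
As a rough size check on the Q8_0 choice: Q8_0 stores each block of 32 weights as 32 int8 values plus one fp16 scale, about 8.5 bits per weight, so a 14B model lands just under 15 GB (the real file differs a bit because some tensors are kept at higher precision):

```python
# Back-of-the-envelope file size for a Q8_0 quant of a 14B model.
params = 14e9
bits_per_weight = (32 * 8 + 16) / 32          # 32 int8 + 1 fp16 scale = 8.5
size_gb = params * bits_per_weight / 8 / 1e9  # ~14.9 GB
print(f"~{size_gb:.1f} GB")
```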

Q8, please

What's your Hugging Face name? I'll make a private repo and add you for now, since I don't know if it's actually working

wsbagnsv1 avatar May 16 '25 14:05 wsbagnsv1

You can test it when it's online and tell me if the GGUF works (;

wsbagnsv1 avatar May 16 '25 14:05 wsbagnsv1

Q8, please

What's your Hugging Face name? I'll make a private repo and add you for now, since I don't know if it's actually working

sunnyyy

sunnyyangyangyang avatar May 16 '25 14:05 sunnyyangyangyang

You should have access now https://huggingface.co/QuantStack/Wan2.1-VACE-14B-GGUF

wsbagnsv1 avatar May 16 '25 14:05 wsbagnsv1

Gonna take around 2h to upload probably, if the conversion is successful, that is

wsbagnsv1 avatar May 16 '25 15:05 wsbagnsv1

Not sure if this model will work in native ComfyUI, as it might require some additional code to operate

sunnyyangyangyang avatar May 16 '25 15:05 sunnyyangyangyang

I think there is native support for VACE now, but only for the full model, so GGUFs should work as well. Though I encountered some weird issues with 5D tensors and had to fix them manually; the normal script for Wan wasn't working. Idk if it worked though /:

wsbagnsv1 avatar May 16 '25 15:05 wsbagnsv1
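
The 5D issue comes from the container, not the model: GGML caps tensors at 4 dimensions, while video models like Wan carry 5D Conv3d patch-embedding weights (out, in, t, h, w). A common workaround, sketched here with hypothetical helper names, is to flatten such tensors before writing and restore the original shape at load time:

```python
import torch

GGML_MAX_DIMS = 4  # GGUF/GGML tensors support at most 4 dimensions

def to_gguf_shape(name: str, t: torch.Tensor, shapes: dict) -> torch.Tensor:
    """Flatten >4D tensors (e.g. a 5D Conv3d weight) so GGUF can hold
    them, recording the real shape so it can be restored later."""
    if t.dim() > GGML_MAX_DIMS:
        shapes[name] = tuple(t.shape)  # e.g. (out, in, t, h, w)
        t = t.reshape(t.shape[0], -1)  # collapse the trailing dims
    return t

def from_gguf_shape(name: str, t: torch.Tensor, shapes: dict) -> torch.Tensor:
    """Undo the flattening when loading the GGUF back."""
    return t.reshape(shapes[name]) if name in shapes else t
```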

Alright, the Q8_0 is online. Test it out, and if it works, tell me so I'll publish the repo (;

wsbagnsv1 avatar May 16 '25 16:05 wsbagnsv1

Can you please make it public? I've been searching for a GGUF of the new VACE 14B model

rrroddri avatar May 16 '25 17:05 rrroddri

Just write me your Hugging Face name and I'll add you. I'm only publishing once I'm sure they work fine

wsbagnsv1 avatar May 16 '25 17:05 wsbagnsv1

I can't find a workflow for Wan VACE; only Wan2.1 FLF2V & Wan2.1 Fun are mentioned in the blog

sunnyyangyangyang avatar May 16 '25 17:05 sunnyyangyangyang

Just write me your Hugging Face name and I'll add you. I'm only publishing once I'm sure they work fine

My HF name is: rrrodri

Thank you so much, I'll test it right away

rrroddri avatar May 16 '25 17:05 rrroddri

I can't find a workflow for Wan VACE; only Wan2.1 FLF2V & Wan2.1 Fun are mentioned in the blog

Use the WanVaceToVideo node

rrroddri avatar May 16 '25 17:05 rrroddri
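
Roughly, the native route wires WanVaceToVideo between the text encoders/VAE and the sampler. The fragment below shows the node in ComfyUI's API-format workflow JSON, written as a Python dict you could POST to the /prompt endpoint; the input names are from memory and may differ in your build, and the surrounding loader/sampler nodes are omitted, so treat it as a sketch:

```python
# Sketch of a WanVaceToVideo node in an API-format workflow. Node ids,
# input names, and values are illustrative; check the node in your build.
import json, urllib.request

workflow = {
    "7": {
        "class_type": "WanVaceToVideo",
        "inputs": {
            "positive": ["3", 0],       # CLIPTextEncode output
            "negative": ["4", 0],
            "vae": ["5", 0],            # VAELoader output
            "control_video": ["6", 0],  # optional control frames
            "width": 832, "height": 480,
            "length": 81, "batch_size": 1,
            "strength": 1.0,
        },
    },
    # ...model loader, KSampler, and VAEDecode nodes omitted for brevity.
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode(),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)  # queues the job on a local ComfyUI server
```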

You should have access now https://huggingface.co/QuantStack/Wan2.1-VACE-14B-GGUF

Still can't access it. Can you share it again or make it public? Here is my name copied from HF: sunnyyy

sunnyyangyangyang avatar May 16 '25 17:05 sunnyyangyangyang

Made it public, but gated

wsbagnsv1 avatar May 16 '25 18:05 wsbagnsv1
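
"Gated" means you accept the terms on the model page once, then authenticate when downloading. A quick example with huggingface_hub (the quant filename below is a guess; check the repo's file list):

```python
# Download one quant from the gated repo after accepting its terms on the
# model page. The filename is a guess; check the repo's file listing.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="QuantStack/Wan2.1-VACE-14B-GGUF",
    filename="Wan2.1-VACE-14B-Q8_0.gguf",  # hypothetical file name
    token="hf_...",  # or log in once with `huggingface-cli login`
)
print(path)  # local cache path to drop into ComfyUI's models folder
```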

You should both have access now

wsbagnsv1 avatar May 16 '25 18:05 wsbagnsv1

It works on my machine. I've got a 4090, and I'm currently generating with this node configuration:

[Image: node configuration screenshot]

But I got an OOM error when using 12 GB of virtual VRAM; this model is clearly heavier than the "regular" I2V one, because with the 720p I2V model I can use 12 GB of virtual VRAM with no issues. Also, this model works with the new Wan21_CausVid_14B_T2V_lora_rank32 LoRA created by Kijai. @wsbagnsv1 Idk if you knew this model, but with it you can use 8 steps with Wan at very good quality, with maybe some loss in the movement of the results. But this is a LoRA; if you have time, can you do a GGUF conversion of this model? https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid

Also, I think a Q6_K version of VACE-14B could run faster. Thank you so much, I was really looking for this conversion

rrroddri avatar May 16 '25 18:05 rrroddri

I've had some issues with DisTorch recently (not because of it, but because of ComfyUI). Also, you can increase the virtual VRAM

wsbagnsv1 avatar May 16 '25 18:05 wsbagnsv1

I've had some issues with DisTorch recently (not because of it, but because of ComfyUI). Also, you can increase the virtual VRAM

Oh yes, setting virtual_vram_gb to 14 also worked; it worked perfectly btw! Thank you so much for your conversion

rrroddri avatar May 16 '25 18:05 rrroddri

Can you share some results with me on Discord? th3pun1sh3r. Also, I've already made a conversion of https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid, I just didn't want to upload more of it, since the LoRA exists (;

wsbagnsv1 avatar May 16 '25 19:05 wsbagnsv1

I have one quant online though; if you want, I can give you access to that too

wsbagnsv1 avatar May 16 '25 19:05 wsbagnsv1

I have one quant online though; if you want, I can give you access to that too

Yes, please. I want to test if I get different results compared to the LoRA; also, the LoRA worked with both the I2V and T2V models

I'll write you on Discord

rrroddri avatar May 16 '25 19:05 rrroddri

Also, I can highly recommend setting the LoRA strength to 0.25 and steps to 6; it increases movement and improves quality a lot

wsbagnsv1 avatar May 16 '25 19:05 wsbagnsv1
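
For anyone reproducing that recipe, it amounts to loading the CausVid LoRA on the model only at 0.25 strength and sampling with 6 steps. As an API-format fragment in the same style as the sketch above (node ids and the LoRA filename are placeholders):

```python
# CausVid recipe as a workflow fragment: LoRA at 0.25 strength on the
# model only, sampler set to 6 steps. Ids and filename are placeholders.
lora_node = {
    "10": {
        "class_type": "LoraLoaderModelOnly",
        "inputs": {
            "model": ["1", 0],  # output of the UNet/GGUF loader
            "lora_name": "Wan21_CausVid_14B_T2V_lora_rank32.safetensors",
            "strength_model": 0.25,
        },
    }
}
# ...then set "steps": 6 in the KSampler node's inputs.
```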

https://huggingface.co/QuantStack/Wan2.1-T2V-14B-CausVid-GGUF

wsbagnsv1 avatar May 16 '25 19:05 wsbagnsv1