Add support for Wan 2.1 VACE model
Feature Idea
https://huggingface.co/Wan-AI/Wan2.1-VACE-14B
Existing Solutions
No response
Other
No response
@comfyanonymous This model is really cool and I'd like to run it as a GGUF on a native ComfyUI setup.
I'll do a GGUF conversion then. Which GGUF do you want exactly? I'll upload that one first.
The 14b version, thanks
Ik that you want the 14B, I mean which GGUF quant xD
q8 please
Currently converting, I'll upload the Q8 first then (;
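For the curious, the conversion step boils down to roughly this. It's only a sketch using the plain `gguf` Python package; the file names and the `arch` key here are assumptions, the real ComfyUI-GGUF convert.py handles proper metadata, and the Q8_0 comes from quantizing the F16 file in a separate step afterwards:

```python
# Rough sketch only: write a safetensors state dict out as an F16 GGUF.
# The actual ComfyUI-GGUF convert.py adds proper arch metadata and key
# handling; quantizing F16 -> Q8_0 happens in a separate step after this.
import gguf
import torch
from safetensors.torch import load_file

state = load_file("Wan2.1-VACE-14B.safetensors")                  # input path assumed
writer = gguf.GGUFWriter("wan2.1-vace-14b-F16.gguf", arch="wan")  # arch key assumed

for name, tensor in state.items():
    writer.add_tensor(name, tensor.to(torch.float16).numpy())

writer.write_header_to_file()
writer.write_kv_data_to_file()
writer.write_tensors_to_file()
writer.close()
```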
Do you have a Hugging Face name? I'll make a private repo and add you for now, since I don't know if it's actually working.
You can test it when it's online and tell me if the GGUF works (;
sunnyyy
You should have access now https://huggingface.co/QuantStack/Wan2.1-VACE-14B-GGUF
gonna take around 2h to upload probably, if the conversion is successful that is
Not sure if this model will work in native ComfyUI, as it might require some additional code to operate.
I think there is native support for VACE now, but only for the full model, so GGUFs should work as well. Though I encountered some weird issues with 5D tensors and had to manually fix them; the normal conversion script for Wan wasn't working. Idk if it worked though /:
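Roughly what the 5D fix means, as a sketch (not the exact script used here): GGUF tensors max out at 4 dimensions, while Wan has 5D weights such as the Conv3d patch embedding.

```python
# Sketch of working around GGUF's 4-dimension tensor limit: flatten any
# 5D weight (e.g. a Conv3d patch embedding [out, in, t, h, w]) to 4D and
# keep the original shapes in metadata so a loader could restore them.
import torch
from safetensors.torch import load_file, save_file

state = load_file("Wan2.1-VACE-14B.safetensors")  # path is an assumption
fixed, orig_shapes = {}, {}
for name, tensor in state.items():
    if tensor.ndim > 4:
        orig_shapes[name] = ",".join(map(str, tensor.shape))
        tensor = tensor.reshape(*tensor.shape[:3], -1)  # merge the last two dims
    fixed[name] = tensor

save_file(fixed, "Wan2.1-VACE-14B-4d.safetensors", metadata=orig_shapes)
```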
Alright, the Q8_0 is online. Test it out, and if it works, tell me so I'll publish the repo (;
Can you please make it public? I have been searching for the GGUF model of the new VACE 14B model
Just write me your Hugging Face name and I'll add you. I'm only publishing once I'm sure they work fine.
I can not find a workflow for Wan VACE; only Wan2.1 FLF2V & Wan2.1 Fun are mentioned in the blog.
my hf name is: rrrodri
thank you so much, I'll test it right away
Use the WanVaceToVideo node.
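If you'd rather script it than click it together in the UI, the wiring looks roughly like this in ComfyUI's API (JSON) graph format, which you can POST to the /prompt endpoint. Node and input names below are from memory and may differ in your ComfyUI version, so check them in the node search first; the loader, text-encode, sampler and decode nodes are omitted:

```python
# Rough fragment of a WanVaceToVideo graph in ComfyUI's API format.
# Input names are from memory and may differ by version; referenced
# nodes 2/3/4 (text encodes, VAE loader) are not shown here.
import json

graph = {
    "5": {"class_type": "WanVaceToVideo", "inputs": {
        "positive": ["2", 0],     # CLIPTextEncode (prompt) output
        "negative": ["3", 0],     # CLIPTextEncode (negative) output
        "vae": ["4", 0],          # Wan 2.1 VAE loader output
        "width": 480, "height": 480,
        "length": 81,             # frame count
        "batch_size": 1,
        "strength": 1.0,
    }},
}
print(json.dumps({"prompt": graph}, indent=2))  # POST this to /prompt on :8188
```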
Still can not access it, can you share it again or make it public? Here is my name copied from HF: sunnyyy
Made it public, but gated.
You should both have access now
It works on my machine. I've got a 4090 and I'm currently generating with this node configuration. But I got an OOM error when using 12 GB of virtual VRAM; this model is clearly heavier than the "regular" i2v one, because with the 720p i2v model I can use 12 GB of virtual VRAM with no issues.
Also, this model works with the new Wan21_CausVid_14B_T2V_lora_rank32 LoRA created by Kijai. @wsbagnsv1 Idk if you knew this model, but with it you can use 8 steps with Wan with very good quality, maybe with some loss of movement in the results. But that's a LoRA; if you have time, can you do a GGUF conversion of this model? https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid
Also, I think a Q6_K version of VACE-14B could run faster. Thank you so much, I was really looking for this conversion.
I've had some issues with DisTorch recently (not because of it, but because of ComfyUI). Also, you can increase the virtual VRAM.
Oh yes, setting virtual_vram_gb to 14 also worked, and it worked perfectly btw! Thank you so much for your conversion.
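For context, the virtual VRAM setting is essentially block offloading: most of the model sits in system RAM, and blocks are streamed to the GPU only while they execute. A toy illustration of the idea (nothing like the real DisTorch code, just the concept with stand-in layers):

```python
# Toy illustration of "virtual VRAM": keep transformer blocks in system
# RAM and move each one to the GPU only for the moment it executes.
# The real DisTorch/ComfyUI logic is far more sophisticated.
import torch
import torch.nn as nn

blocks = nn.ModuleList(nn.Linear(1024, 1024) for _ in range(8))  # stand-in blocks

def forward_offloaded(x: torch.Tensor) -> torch.Tensor:
    x = x.to("cuda")
    for block in blocks:
        block.to("cuda")   # stream the block's weights into VRAM
        x = block(x)
        block.to("cpu")    # evict it so the next block has room
    return x

if torch.cuda.is_available():
    out = forward_offloaded(torch.randn(1, 1024))
```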
Can you share some results with me on Discord? th3pun1sh3r. Also, I've already made a conversion of https://huggingface.co/lightx2v/Wan2.1-T2V-14B-CausVid, I just didn't want to upload more, since the LoRA exists (;
I have one quant online though; if you want, I can give you access to that too.
Yes, please; I want to test if I get different results compared to the LoRA. Also, the LoRA worked with both the i2v and t2v models.
I'll write you on Discord.
Also, I can highly recommend setting the LoRA strength to 0.25 and the steps to 6; it increases movement and improves quality a lot.
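In API-graph form, those settings look roughly like the fragment below. Again, node and input names are from memory, the node IDs are arbitrary, and the cfg of 1.0 is only the commonly suggested value for CausVid, so treat all of it as an assumption to verify locally:

```python
# Hedged graph fragment with the suggested CausVid settings: LoRA
# strength 0.25 and 6 sampling steps. Node/input names from memory.
causvid_fragment = {
    "10": {"class_type": "LoraLoaderModelOnly", "inputs": {
        "model": ["9", 0],   # output of the GGUF UNet loader (node 9, assumed)
        "lora_name": "Wan21_CausVid_14B_T2V_lora_rank32.safetensors",
        "strength_model": 0.25,
    }},
    "11": {"class_type": "KSampler", "inputs": {
        "model": ["10", 0],
        "seed": 0, "steps": 6, "cfg": 1.0,   # low cfg is the usual CausVid advice
        "sampler_name": "uni_pc", "scheduler": "simple",
        "positive": ["5", 0], "negative": ["5", 1],  # WanVaceToVideo outputs (assumed)
        "latent_image": ["5", 2],
        "denoise": 1.0,
    }},
}
```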
https://huggingface.co/QuantStack/Wan2.1-T2V-14B-CausVid-GGUF