WanVideo BlockSwap: this feature is great, looking forward to it supporting a wider range of models
The WanVideo BlockSwap node is very user-friendly and saves a great deal of video memory, letting ComfyUI run smoothly. Could this feature support other UNETs or diffusion models? I think this node is great, thank you!
For example, FLUX, SDXL, and so on.
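For context, here is a rough sketch of the block-swap idea in plain PyTorch (a toy illustration, not the wrapper's actual code; `BlockSwapRunner`, `blocks_on_gpu`, and the stand-in blocks are made-up names): only a few blocks stay resident in VRAM, and the rest are streamed from system RAM just before each one's forward pass. Because it only needs a stack of blocks, the idea is not tied to one model family in principle.

```python
import torch
import torch.nn as nn

class BlockSwapRunner(nn.Module):
    """Toy block-swap loop: keep most blocks in system RAM and stream each
    one to the GPU only for its own forward pass, then send it back."""

    def __init__(self, blocks: nn.ModuleList, blocks_on_gpu: int, device: str = "cuda"):
        super().__init__()
        self.blocks = blocks
        self.device = device if torch.cuda.is_available() else "cpu"
        self.blocks_on_gpu = blocks_on_gpu
        # The first `blocks_on_gpu` blocks stay resident on the GPU;
        # the remaining ones wait on the CPU until they are needed.
        for i, block in enumerate(self.blocks):
            block.to(self.device if i < blocks_on_gpu else "cpu")

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x.to(self.device)
        for i, block in enumerate(self.blocks):
            swapped = i >= self.blocks_on_gpu
            if swapped:
                block.to(self.device)   # stream weights into VRAM
            x = block(x)
            if swapped:
                block.to("cpu")         # free VRAM for the next block
        return x

# Any stack of residual/transformer blocks can be driven this way.
blocks = nn.ModuleList(nn.Sequential(nn.Linear(64, 64), nn.GELU()) for _ in range(8))
out = BlockSwapRunner(blocks, blocks_on_gpu=2)(torch.randn(1, 64))
```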
There is one: https://github.com/pollockjj/ComfyUI-MultiGPU
Thank you, I just tested it and it doesn't work well. It can only load on GPU or CPU, and if CPU is selected it is very slow. It's not like KJ's node, which is fast and easy to use.
I think there's something wrong with the way you're using it. Use the 'distorch' series of nodes, set 'device' to cuda, and then set a suitable value for 'virtual vram'.
It seems that option isn't available for the regular UNET model; only CPU or GPU can be selected. Only the GGUF loader has that option. Can you help me figure out how to use it with a standalone non-GGUF UNET model?
Hmm... I usually use GGUF Q8 for better performance. There is a UNET loader node in MultiGPU, but I'm not sure it works the same way as the GGUF one; you can give it a try. Sorry, I can't test it myself because I deleted the fp8 models to save space.
Can you tell me how to set this node up? I don't understand it. I have 16 GB and the 14B fp8 480p model. For me, all these "accelerators" only make everything run even slower than the plain WAN example.
The UNET loader itself already supports reading the model into system memory before it goes to video memory.
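In plain PyTorch that staging looks roughly like the sketch below (a minimal illustration with a toy module and a placeholder checkpoint path, not ComfyUI's actual loader code): the checkpoint is deserialized into system RAM with `map_location="cpu"`, and the weights only touch VRAM when the module is moved afterwards.

```python
import torch
import torch.nn as nn

# Toy stand-in for a diffusion UNet; a real model differs only in size.
unet = nn.Sequential(nn.Conv2d(4, 64, 3, padding=1), nn.SiLU(),
                     nn.Conv2d(64, 4, 3, padding=1))

# Create a dummy checkpoint so this sketch runs end to end.
torch.save(unet.state_dict(), "unet_checkpoint.pt")

# 1) Deserialize into system RAM only, never directly onto the GPU.
state_dict = torch.load("unet_checkpoint.pt", map_location="cpu")
unet.load_state_dict(state_dict)

# 2) Move the weights to VRAM only once loading is done (and a GPU exists).
if torch.cuda.is_available():
    unet = unet.to("cuda")
```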
@wuliang19869312 So you mean using GGUF makes BlockSwap redundant? Can you link to some docs to learn more about this?
This was a very old thread ;-) The WanVideo wrapper works fine with GGUF.