ComfyUI
Memory estimation for Wan models is broken since release 0.34: impossible to generate long (more than 65 frames) clean videos with LoRAs
Custom Node Testing
- [x] I have tried disabling custom nodes and the issue persists (see how to disable custom nodes if you need help)
Expected Behavior
You should be able to generate at least 81 frames for Wan T2V (and I2V) when using CausVid or any other LoRA at 480p/720p (with built-in nodes).
Actual Behavior
It always gives black or distorted results, even if there is plenty of free memory for more than 65 frames.
Steps to Reproduce
Load a Wan2.1 T2V or VACE (14B) 720p model (around 10-14 GB per file), use the CausVid 14B LoRA (any version), and connect any other LoRA. KSampler: 4 steps, default settings or the dpmpp_2m or ddim sampler... Use a length > 65 (81, 105, or 121 frames, etc.). You can use 592x832 or 1280x820 resolutions or similar...
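For context on why the 65-frame threshold matters: the VRAM footprint of the video latent grows linearly with frame count, so any error in the estimator is amplified at longer lengths. A rough back-of-the-envelope sketch (the 16 latent channels and 4x temporal / 8x spatial compression are my assumptions about the Wan 2.1 VAE, not values taken from the ComfyUI code):

```python
def wan_latent_bytes(frames, width, height,
                     channels=16,         # assumed Wan 2.1 latent channel count
                     t_down=4, s_down=8,  # assumed temporal/spatial compression
                     bytes_per_el=2):     # fp16
    """Rough size of one video latent tensor, ignoring weights/activations."""
    latent_frames = (frames - 1) // t_down + 1
    return channels * latent_frames * (width // s_down) * (height // s_down) * bytes_per_el

for frames in (65, 81, 121):
    mb = wan_latent_bytes(frames, 1280, 720) / 1024**2
    print(f"{frames:3d} frames -> ~{mb:.1f} MB latent")
```

The latent itself is small; the point is only the scaling: going from 65 to 121 frames nearly doubles it, and activation memory during sampling scales the same way, so an estimator that is off by a constant factor will start failing somewhere past a fixed frame count.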
Debug Logs
no need
Other
I've fixed it locally by rolling back several changes from the 0.34 release, so locally it works as intended: I can generate videos of 121+ frames without issues, even at 720p. But in 0.39+ it's not working; only lengths under 65 frames succeed.
Not sure if it's related, but since 0.34 I have had to enable CPU offloading to load SVDQuant models, which worked fine before. That points in the same direction of apparent differences in VRAM handling/allocation.
Same here (don't know if it's related), but after 15 days of not using it, my workflow using Wan (without any modification) is getting OOM. It seems that offloading is not working anymore:
Offloading model...
Requested to load WanVAE
0 models unloaded.
!!! Exception during processing !!! Allocation on device
I tried changing the CUDA version, the PyTorch version, and the 0.34 portable build; nothing works, always OOM (torch.OutOfMemoryError: Allocation on device).
Win10, portable version, RTX 3060. It was working fine 15 days ago.
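For anyone debugging the same OOM, it can help to log what the CUDA allocator actually sees right before sampling; comparing free VRAM against what ComfyUI decides to offload narrows down whether the estimator or the offloader is at fault. A minimal sketch (assumes PyTorch is installed; `report_vram` is a hypothetical helper, and it returns nothing useful without a CUDA device):

```python
import torch

def report_vram(device=0):
    """Return (free, total) VRAM in MiB, or None when no CUDA device exists."""
    if not torch.cuda.is_available():
        return None
    free, total = torch.cuda.mem_get_info(device)  # bytes, as seen by the driver
    return free // 1024**2, total // 1024**2

info = report_vram()
print("no CUDA device" if info is None else f"free: {info[0]} MiB / total: {info[1]} MiB")
```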
Post the actual full error please.
comfy-log.txt pjspeciale.json video-meta.txt
The log, the workflow, and the mp4 metadata of the last successful video generated with this workflow before the OOM.
I have also encountered almost the same problem. It worked well a month ago, but after updating, it doesn't work anymore. Using Nunchaku with 0.3.0, video memory usage sits at 99%, and I can't find a way to unload the model.
It happens when I use the new awg-int4-fux.1-t5xxl.safetensors; with the original fp8 it doesn't.