ComfyUI crashes when running big LoRAs on Flux
Expected Behavior
I have no issues loading and running LoRAs when they aren't that big.
Actual Behavior
When the LoRA is too big (around 1 GB), ComfyUI simply crashes. For example, I can't run this LoRA because of that: https://huggingface.co/ByteDance/Hyper-SD/blob/main/Hyper-FLUX.1-dev-16steps-lora.safetensors
Steps to Reproduce
Here's my workflow:
Debug Logs
That's the problem: there are no error messages when it crashes, I simply get this:
got prompt
6%|█████▏ | 1/16 [00:05<01:23, 5.60s/it]
D:\ComfyUI_windows_portable>pause
Press any key to continue . . .
Other
No response
Can you use other big LoRAs, like this one (1.28 GB)? https://civitai.com/models/641309/formcorrector-anatomic?modelVersionId=717317
I can use it with Flux FP8 and --reserve-vram 1.2
Or with GGUF models, without any flags.
https://github.com/city96/ComfyUI-GGUF
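For reference, `--reserve-vram` is passed on the command line when launching ComfyUI. A minimal sketch of the launch command, assuming the default Windows portable layout (adjust the path for your install):

```shell
# Launch ComfyUI while holding ~1.2 GB of VRAM back from the model loader,
# leaving headroom for large LoRAs. Path assumes the Windows portable build.
python_embeded\python.exe -s ComfyUI\main.py --reserve-vram 1.2
```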
Just tested:
- GTX 1070 ( 8GB )
- 32 GB RAM
- Windows 10
Hyper-FLUX.1-dev-8steps-lora.safetensors works for me with Flux FP8 and --reserve-vram 1.2
I don't know what is happening today, but lately Flux goes haywire on me all the time. It used to work fine, but now I can barely get 4 images out every two hours, with mountains of memory-related errors and issues that weren't there before.
Yeah, I think there is an issue on the GGUF side. https://github.com/city96/ComfyUI-GGUF/issues/75
EDIT: this issue with GGUF models seems to be fixed now.