ComfyUI
RAM usage is too high (>33 GB) when using LoRA with Flux dev. When LoRA is not used, RAM usage is only about 2 GB.
Hi, after some explicit debugging I traced the source to this line: https://github.com/comfyanonymous/ComfyUI/blob/b779349b55e79aff81a98b752f5cb486c71812db/comfy/model_patcher.py#L669
It behaves differently depending on whether LoRA is used. When I use LoRA, the `x[2].to(device_to)` call moves roughly 0 GB to CUDA (so the weights stay in system RAM), but when I am not using LoRA it moves a lot (around 22 GB) to the GPU.
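To illustrate what I mean, here is a toy sketch (not ComfyUI's actual code; the function and its arguments are hypothetical) of the suspected behavior: weights that carry a LoRA patch appear to be kept in system RAM rather than transferred to the device, which would explain the >33 GB RAM usage.

```python
# Toy model of the suspected branch: parameters with a LoRA patch are
# (hypothetically) materialized and kept in system RAM, while unpatched
# parameters are transferred to the target device.
def move_weights(weights, patches, device):
    """weights: dict of name -> size in bytes; patches: names with LoRA patches.
    Returns (bytes_moved_to_device, bytes_kept_in_ram)."""
    moved = 0
    kept = 0
    for name, nbytes in weights.items():
        if name in patches:
            # Patched weight stays in system RAM: nothing transferred here.
            kept += nbytes
        else:
            # Unpatched weight is moved to the device (e.g. via .to(device)).
            moved += nbytes
    return moved, kept

weights = {"layer1": 11 * 2**30, "layer2": 11 * 2**30}  # ~22 GB total

# Without LoRA: everything is moved to CUDA, so RAM stays low.
no_lora = move_weights(weights, patches=set(), device="cuda")

# With LoRA on every layer: nothing is moved, matching the high RAM usage.
with_lora = move_weights(weights, patches={"layer1", "layer2"}, device="cuda")
```

This is only a sketch of the symptom I observe, not a claim about the real control flow in `model_patcher.py`.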
Please let me know how to fix this.
Related issue: https://github.com/comfyanonymous/ComfyUI/issues/4343