add model cache after loading
Hello, I made a PR to cache models after they are loaded: the model instance is kept in memory. That does cost some memory, but don't worry, it checks the free memory on both CPU and GPU and frees cached models when necessary (see the sketch below). This is an updated version of https://github.com/comfyanonymous/ComfyUI/pull/3545.
Here is a GIF example to show it in action.
Fully tested on the OpenArt template workflows -> https://openart.ai/workflows/templates
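Roughly, the idea looks like this. This is a minimal sketch of the approach, not the PR's actual code; `MODEL_CACHE`, `MIN_FREE_BYTES`, and `get_cached_model` are illustrative names:

```python
import psutil
import torch

# Illustrative sketch of the caching idea, not the PR's actual code.
MODEL_CACHE = {}          # ckpt_path -> whatever the loader returned
MIN_FREE_BYTES = 2 << 30  # evict once less than ~2 GiB stays free

def memory_is_free():
    """True if both system RAM and (if present) VRAM have headroom."""
    if psutil.virtual_memory().available < MIN_FREE_BYTES:
        return False
    if torch.cuda.is_available():
        free_vram, _total = torch.cuda.mem_get_info()
        if free_vram < MIN_FREE_BYTES:
            return False
    return True

def get_cached_model(ckpt_path, loader):
    """Return a cached model; on a miss, load it, evicting old entries if needed."""
    if ckpt_path in MODEL_CACHE:
        return MODEL_CACHE[ckpt_path]
    while MODEL_CACHE and not memory_is_free():
        # Drop the oldest cached model when CPU or GPU memory runs low.
        MODEL_CACHE.pop(next(iter(MODEL_CACHE)))
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    MODEL_CACHE[ckpt_path] = loader(ckpt_path)
    return MODEL_CACHE[ckpt_path]
```

A loader node would then call something like `get_cached_model(ckpt_path, comfy.sd.load_checkpoint_guess_config)` instead of reading from disk every time, so rerunning a workflow with the same checkpoint skips the load entirely.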
Looks great! How do I use it?
You can try it with this repository: https://github.com/efwfe/ComfyUI.git. The more memory you have, the better the performance. Thank you for wanting to use it.
Hi, can you keep your fork up to date, please? I want to test this.
You're welcome to test it; it has been updated, and I'd love to hear your feedback. It is recommended to add the --lowvram flag (e.g. start ComfyUI with `python main.py --lowvram`) to reduce the memory usage of SDXL models.
Sorry for asking a silly question: I couldn't find the definition of the variable `ckpt_path` in the function `load_state_dict_guess_config` in the sd.py file, and PyCharm is reporting an "Unresolved reference 'ckpt_path'" error.
Sorry, that's my mistake: the latest version of ComfyUI removed the `ckpt_path` parameter from that function. I have updated the code, so you can try it now.
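For anyone hitting the same error, the gist of the fix is to key the cache on the path in a wrapper that still receives it, rather than inside `load_state_dict_guess_config`. A minimal sketch, assuming the current `comfy.sd` API; `_MODEL_CACHE` and `load_checkpoint_cached` are illustrative names, not the actual code in the fork:

```python
import comfy.sd

# Illustrative sketch: key the cache in the wrapper that still receives
# ckpt_path, since load_state_dict_guess_config in recent ComfyUI only
# receives the state dict, not the path.
_MODEL_CACHE = {}

def load_checkpoint_cached(ckpt_path, *args, **kwargs):
    if ckpt_path not in _MODEL_CACHE:
        _MODEL_CACHE[ckpt_path] = comfy.sd.load_checkpoint_guess_config(
            ckpt_path, *args, **kwargs)
    return _MODEL_CACHE[ckpt_path]
```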
Actually, I'm ready to close this PR because it isn't really helpful in most cases, so I've closed it. Thank you, everyone.
I think it is very useful; I hope you can continue to update it. Thank you, this is cool.
Hi! Nice project and solution!
I see that your workflow does nothing beyond caching checkpoints. So if I want to use it, I just merge your changes into my ComfyUI installation and start it, right? Is there anything else I need to do?
@efwfe Hi, I think it is very useful! Thank you for your great work! I'd like to know whether it is still available now and whether it also supports caching LoRA models.