Memory problem on Mac since ComfyUI update
Expected Behavior
Previously, I never had memory problems with ComfyUI, nor any problems with the disk.
Actual Behavior
Now, especially during VAE decoding, memory usage increases rapidly and fills the entire disk until there is no space left. I am just running a basic workflow with a batch of 3 images at 1024 * 1024. It has crashed the computer twice, which is very rare on a Mac. I now get memory problems from time to time, which was never the case in the past, even though there is more than 20 GB available on the disk.

Sometimes the problem occurs when the workflow starts and begins loading the models. It is no longer possible to load two models in the same workflow without problems: disk and memory usage suddenly skyrocket. It looks like ComfyUI does not release memory and disk space, or only does so partially, after each image render. I have 16 GB of RAM.
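A minimal sketch to check whether MPS memory is actually released after a tensor is freed (this is not my actual workflow; it assumes PyTorch 2.1+ with MPS support, and the tensor shape is only a stand-in for a decoded batch):

```python
import gc
import torch

assert torch.backends.mps.is_available(), "MPS backend not available"

def report(label: str) -> None:
    # current_allocated_memory(): memory held by live tensors;
    # driver_allocated_memory(): what the Metal driver has reserved from the OS.
    alloc = torch.mps.current_allocated_memory() / 2**20
    driver = torch.mps.driver_allocated_memory() / 2**20
    print(f"{label}: allocated={alloc:.1f} MiB, driver={driver:.1f} MiB")

report("before")

# Stand-in workload; the shape is illustrative, not the real SDXL VAE decode.
x = torch.randn(3, 4, 1024, 1024, device="mps")
y = (x * 2.0).relu()
report("after allocation")

del x, y
gc.collect()
torch.mps.empty_cache()  # ask PyTorch to return cached MPS blocks to the driver
report("after cleanup")
```

If "driver" stays high after cleanup, the memory is still reserved by the process even though the tensors are gone, which matches what I see between renders.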
Steps to Reproduce
I cannot provide steps to reproduce it, as it is a memory problem.
Debug Logs
got prompt
model_type EPS
Using split attention in VAE
Using split attention in VAE
Requested to load SDXLClipModel
Loading 1 new model
Requested to load CLIPVisionModelProjection
Loading 1 new model
Requested to load SDXL
Loading 1 new model
100%|█████████████████████████████████████████████| 4/4 [00:42<00:00, 10.60s/it]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 400.22 seconds
got prompt
Requested to load SDXL
Loading 1 new model
100%|█████████████████████████████████████████████| 4/4 [00:42<00:00, 10.71s/it]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 99.16 seconds
got prompt
Requested to load SDXL
Loading 1 new model
100%|█████████████████████████████████████████████| 4/4 [00:42<00:00, 10.72s/it]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 76.92 seconds
got prompt
Prompt executed in 0.02 seconds
got prompt
Requested to load SDXL
Loading 1 new model
100%|█████████████████████████████████████████████| 4/4 [00:41<00:00, 10.36s/it]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 81.67 seconds
got prompt
Requested to load SDXL
Loading 1 new model
100%|█████████████████████████████████████████████| 4/4 [00:42<00:00, 10.51s/it]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 71.60 seconds
got prompt
Using split attention in VAE
Using split attention in VAE
Requested to load AutoencoderKL
Loading 1 new model
Requested to load SDXL
Loading 1 new model
100%|█████████████████████████████████████████████| 4/4 [00:14<00:00, 3.58s/it]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 38.31 seconds
got prompt
Requested to load SDXL
Loading 1 new model
100%|█████████████████████████████████████████████| 4/4 [00:41<00:00, 10.50s/it]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 94.84 seconds
got prompt
Requested to load SDXL
Loading 1 new model
100%|█████████████████████████████████████████████| 4/4 [00:41<00:00, 10.47s/it]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 96.48 seconds
got prompt
Requested to load SDXL
Loading 1 new model
100%|█████████████████████████████████████████████| 4/4 [00:41<00:00, 10.49s/it]
Requested to load AutoencoderKL
Loading 1 new model
Other
No response
It seems that part of the problem comes from Sonoma and the way it manages memory and cache. Running ComfyUI with no other applications open or running in the background is much better. This problem was not present before Sonoma.
It seems that since Sonoma, many issues have been occurring, starting with PyTorch compatibility issues.
I should never have updated, and I am afraid to install the next update. It also seems better to use Firefox than Safari now.
Here is an example of the problem, but this time it did not crash the computer.
The workflow combines two models. Everything was fine until I added a third model: when it started to load, memory suddenly increased, disk usage hit its maximum, and the terminal crashed. I had to isolate the two combined models, save them as a single new model, and create a new workflow that loads this merged model together with the third one. That worked after I quit the browser and the terminal and reopened them.
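For reference, a minimal sketch that samples process memory, swap usage, and free disk space while a model loads, to quantify the spike described above (it assumes the psutil package is installed; the sleep is only a placeholder for the actual loading step):

```python
import threading
import time

import psutil  # assumption: installed separately (pip install psutil)

def sample(stop: threading.Event, interval: float = 1.0) -> None:
    proc = psutil.Process()
    while not stop.is_set():
        rss = proc.memory_info().rss / 2**30          # resident memory of this process
        swap = psutil.swap_memory().used / 2**30      # system-wide swap in use
        disk_free = psutil.disk_usage("/").free / 2**30
        print(f"rss={rss:.2f} GiB  swap={swap:.2f} GiB  disk free={disk_free:.1f} GiB")
        time.sleep(interval)

stop = threading.Event()
threading.Thread(target=sample, args=(stop,), daemon=True).start()

# Placeholder for the heavy step (e.g. loading the third checkpoint);
# replace the sleep with the real loading call when profiling.
time.sleep(10)

stop.set()
```

Watching swap and free disk space together should show whether the disk is being filled by macOS swap files while the model loads.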