ComfyUI
torch.cuda.OutOfMemoryError: Allocation on device 0 would exceed allowed memory. (out of memory)
Getting this error message repeatedly on my KSampler. My workflow is really simple, so I'm not trying to do anything crazy. I tried deleting and reinstalling ComfyUI, and I deleted all unnecessary custom nodes. I am using Shadowtech Pro, so I have a pretty good GPU and CPU.
torch.cuda.OutOfMemoryError: Allocation on device 0 would exceed allowed memory. (out of memory)
Currently allocated : 18.38 GiB
Requested : 6.09 GiB
Device limit : 16.00 GiB
Free (according to CUDA) : 0 bytes
PyTorch limit (set by user-supplied memory fraction) : 17179869184.00 GiB
Do you use SAMLoader? If so, change its device mode to 'CPU'.
It used to work perfectly until yesterday. I am not exactly sure what a SAMLoader is: is it an ImageLoader or a VideoLoader? I don't use a SAMLoader, and I tried using CPU, but it's really slow; I didn't even get to the KSampler because the VAE Encode took a long time. Is there a way to divide vid2vid into batches so the video is processed batch by batch?
If you're not using the SAMLoader node, never mind. There must be another cause in that case.
Do you think it's an issue with Shadowtech being slow or laggy? Is there a guide you can point me to for using ComfyUI on Colab instead? Thanks.
After the last update: SDXL model + any LoRA = same result.
Same issue since today. I used to be able to load several LoRAs, ControlNets and SDXL, and now I can't even generate a 768x768 image without hitting an OOM error. Something must have happened with VRAM usage.
EDIT: confirmed. I rolled back to 4acfc11a802fad4e90103f9fd3cf73cb0c9b5ae1 and VRAM usage isn't a problem anymore. Could someone bisect further? I picked an old commit at random.
That error is fixed now
@comfyanonymous, was there a regression? I just hit this on commit 733947,
aka backup_branch_2023-11-13_09_38_41
Same issue as @lstyls. Any idea how to fix this?
How do I fix it? Run ComfyUI with --disable-smart-memory, or something else? I've got a similar problem.
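For reference, a sketch of the workaround being asked about, assuming the standard ComfyUI launch script (`main.py`) in a source install; portable builds pass the same flag through their launcher script:

```shell
# Launch ComfyUI with smart memory management disabled, so models are
# offloaded from VRAM more aggressively instead of being kept resident.
python main.py --disable-smart-memory
```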
I think I fixed it by downscaling my input images and processing no more than 300 images in a batch.
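The batch-limiting idea above can be sketched in plain Python. This is a minimal illustrative helper, not a ComfyUI node or API; the names `chunk_frames`, `frames`, and `batch_size` are assumptions for the example:

```python
def chunk_frames(frames, batch_size=300):
    """Yield successive batches of at most `batch_size` frames,
    so each vid2vid pass stays under a fixed per-batch image count."""
    for start in range(0, len(frames), batch_size):
        yield frames[start:start + batch_size]
```

Each batch would be run through the workflow separately and the outputs concatenated afterwards, trading a bit of convenience for a bounded VRAM footprint per run.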
Getting the same issue as well since today (I updated my NVIDIA drivers today, don't know if it's linked).
What does "rolled back to 4acfc11a802fad4e90103f9fd3cf73cb0c9b5ae1" mean?
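"Rolling back" means checking out that older commit in your ComfyUI git clone so you run the code as it was before the regression. A sketch, assuming ComfyUI was installed via `git clone` (paths and branch name are the defaults, adjust to your setup):

```shell
cd ComfyUI                  # your ComfyUI checkout
git fetch                   # make sure the commit is available locally
git checkout 4acfc11a802fad4e90103f9fd3cf73cb0c9b5ae1   # detached HEAD at the older commit
# Later, to return to the latest code:
git checkout master
git pull
```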
I got the same issue. I used to be able to generate a 512x768 animation and upscale it by 1.3 using AnimateDiff, 2 ControlNets, IPAdapter, LCM and UNet. It doesn't report any error when I generate my first animation, but it reported a CUDA out of memory error when I tried to generate the second one. I tried many suggestions; finally, after restarting my PC, there was no error. But it is inconvenient to restart the PC after every animation generation.
Same here: out of memory errors since updating to the cu121 portable build, on CUDA 12.3, xformers 0.23.post1, ONNX Runtime. After --disable-smart-memory the normal SDXL workflow works again. vid2vid does not, and it runs very slowly, not even 15 frames without an error.