cog-stable-diffusion
CUDA out of memory - SD2.1
When asking for 4 outputs, with everything else left at the default, I sometimes get:
Output
CUDA out of memory. Tried to allocate 12.66 GiB (GPU 0; 39.59 GiB total capacity; 19.58 GiB already allocated; 5.69 GiB free; 32.15 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
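As a possible workaround, the error message itself suggests tuning the allocator via `PYTORCH_CUDA_ALLOC_CONF` when reserved memory far exceeds allocated memory. A minimal sketch (the value 128 is purely illustrative, not a tested recommendation):

```shell
# Cap the size of allocator blocks PyTorch will split, to reduce
# fragmentation of reserved-but-unallocated GPU memory.
# Set before the process that runs the model starts.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
```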
Anecdotally, this occurred after I triggered an NSFW exception via the API.
Perhaps throwing the exception prevents torch from reclaiming GPU memory, or perhaps it is unrelated.
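If the exception path is indeed leaking cached GPU memory, one common mitigation is to release PyTorch's cached blocks in a `finally` clause so cleanup runs even when the prediction raises. A minimal sketch, assuming a hypothetical `predict_fn` wrapper (not the actual cog predictor code):

```python
import torch


def run_prediction(predict_fn, *args, **kwargs):
    """Run a prediction, releasing cached GPU memory even if it raises.

    `predict_fn` is a placeholder for whatever callable does the actual
    inference (e.g. the NSFW check that threw here).
    """
    try:
        return predict_fn(*args, **kwargs)
    finally:
        # Runs on both success and exception; returns cached allocator
        # blocks to the driver so the next request starts from a clean slate.
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
```

This doesn't free tensors that are still referenced (e.g. held alive by an exception traceback), so it may only partially help; dropping references before `empty_cache()` matters too.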