Litevex
I'm having the same problem: `RuntimeError: CUDA out of memory. Tried to allocate 114.00 MiB (GPU 0; 8.00 GiB total capacity; 7.14 GiB already allocated; 0 bytes free; 7.24 GiB reserved...`
The same model (latent diffusion 1.6B) does run on 8 GB when using Jack000/glid-3-xl, so it is supposed to work.
It also runs out of VRAM on a 16 GB P100, so something is definitely wrong. This did not happen with the same model on the latent-diffusion repo: `RuntimeError: CUDA out...`
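Worth noting: in that traceback the allocator reports 0 bytes free while holding slightly more reserved (7.24 GiB) than allocated (7.14 GiB), which hints at fragmentation in the caching allocator rather than the model genuinely not fitting. A minimal sketch for inspecting the allocator state right before the failing call (standard `torch.cuda` calls, nothing repo-specific):

```python
import torch

# What PyTorch has actually handed out vs. what the caching
# allocator is holding; a gap here is memory that is reserved
# but too fragmented to satisfy the failing 114 MiB request.
print(f"allocated: {torch.cuda.memory_allocated() / 2**30:.2f} GiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 2**30:.2f} GiB")

# Detailed per-block breakdown from the caching allocator.
print(torch.cuda.memory_summary())
```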
It appears I did have the problem on the latent-diffusion repo as well, but I fixed my problem on both by adding `os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:4096"` near the start of text2img.py
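For reference, the allocator reads `PYTORCH_CUDA_ALLOC_CONF` when it initializes, so the assignment has to run before the first CUDA allocation; putting it above the torch import is the safe spot. A minimal sketch (the 4096 value is just what worked above, not a recommendation):

```python
import os

# Must be set before the CUDA caching allocator initializes,
# i.e. before importing torch or making any CUDA call.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:4096"

import torch  # noqa: E402
```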
> > It appears I did have the problem on the latent-diffusion repo as well, but I fixed my problem on both by adding `os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:4096"` near the start...
> entirely possible, it's a 6gb GTX 1060. not horrible but not new or anything

6 GB isn't enough for the large latent diffusion model (which isn't the actual stable...
> Hey, I have a GTX 1080 Ti. Will the 1024 model run on it, or am I getting my hopes up? I'm using a Discord SD bot currently but would love to try inpainting....
This could also be useful for https://huggingface.co/stabilityai/sd-vae-ft-mse
Why would you close issues after 17 days, even if they aren't fixed? Also see https://twitter.com/marcan42/status/1581244983528820741
You might have downloaded the wrong model. You need to download from `sd-vae-ft-mse-original` rather than `sd-vae-ft-mse` (the latter is for diffusers).
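To spell out the difference: `sd-vae-ft-mse-original` ships a single checkpoint meant to be dropped into the original CompVis-style codebase, while `sd-vae-ft-mse` is in diffusers format and is loaded roughly like this (a sketch following the model card; the base-model ID is just an example):

```python
from diffusers import AutoencoderKL, StableDiffusionPipeline

# diffusers-format VAE; the "-original" repo instead ships a .ckpt
# for the CompVis/stable-diffusion code and won't load this way.
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # example base model
    vae=vae,
)
```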