
How do I reduce the batch size while running on the GPU?

[Open] athu16 opened this issue 3 years ago • 2 comments

The LAION-400M model works fine on the CPU, but when I try to run it on the GPU (an RTX 2060 Mobile), I get the following error:

RuntimeError: CUDA out of memory. Tried to allocate 26.00 MiB (GPU 0; 6.00 GiB total capacity; 5.21 GiB already allocated; 0 bytes free; 5.31 GiB reserved in total by PyTorch)

The most common solution I found online was to reduce the batch size, but since I don't know much about PyTorch, I was hoping someone could help me with it.

athu16 avatar Jun 23 '22 05:06 athu16

You can try lowering the image size, or try running with --n_samples=1
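To see why both knobs help, here is a rough back-of-the-envelope sketch of how the latent batch tensor scales (this assumes the default f=8 autoencoder with 4 latent channels used by the LDM configs; it ignores UNet activations, so real memory use is higher, but the scaling behaviour is the same):

```python
def latent_floats(n_samples, H, W, channels=4, f=8):
    # Number of float elements in the latent batch tensor for an
    # f=8 autoencoder with 4 latent channels (assumed defaults).
    # Actual GPU usage also includes UNet activations and weights,
    # so treat this as a lower bound that shows the scaling only.
    return n_samples * channels * (H // f) * (W // f)

default = latent_floats(4, 256, 256)   # e.g. 4 samples at 256x256
fewer   = latent_floats(1, 256, 256)   # --n_samples=1
smaller = latent_floats(4, 128, 128)   # halved image size

assert fewer * 4 == default    # memory scales linearly with n_samples
assert smaller * 4 == default  # halving H and W quarters the tensor
```

So dropping from 4 samples to 1, or halving both image dimensions, each cuts this tensor to a quarter of its size.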

bartman081523 avatar Jun 28 '22 04:06 bartman081523

Replace

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

with

device = torch.device("cpu")
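Hard-coding the CPU sidesteps the CUDA OOM entirely, at the cost of much slower sampling. If you want to keep the original CUDA-if-available fallback but be able to force the CPU on demand, the selection logic can be sketched as a small helper (hypothetical, not part of the repo; in the script you would wrap the returned name in torch.device(...)):

```python
def pick_device(force_cpu=False, cuda_available=False):
    # Hypothetical helper, not from the latent-diffusion repo:
    # mirrors the suggested edit while keeping the original
    # "cuda if available, else cpu" fallback behaviour.
    if force_cpu or not cuda_available:
        return "cpu"
    return "cuda"

# Equivalent to replacing the line with torch.device("cpu"):
device_name = pick_device(force_cpu=True, cuda_available=True)  # "cpu"
```

This way you can flip a single flag instead of editing the script each time you switch between GPU and CPU runs.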

ulysses500 avatar Sep 12 '22 14:09 ulysses500