How do I reduce the batch size while running on the GPU?
The LAION-400M model works fine on the CPU, but when I try to run it on the GPU (RTX 2060 Mobile), I get the following error:
RuntimeError: CUDA out of memory. Tried to allocate 26.00 MiB (GPU 0; 6.00 GiB total capacity; 5.21 GiB already allocated; 0 bytes free; 5.31 GiB reserved in total by PyTorch)
The most common solution I found online was to reduce the batch size, but since I don't know much about PyTorch, I was hoping someone could help me with it.
You can try lowering the image size, or run the script with --n_samples=1 so that only one sample is generated per batch.
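If your script builds its own batches with a DataLoader, the batch size is the batch_size argument. Here is a minimal sketch, assuming a generic dataset and loop (the dataset and tensor shapes below are placeholders, not taken from your script):

import torch
from torch.utils.data import DataLoader, TensorDataset

# Stand-in dataset; substitute whatever your script actually loads.
dataset = TensorDataset(torch.randn(32, 3, 256, 256))

# A smaller batch_size means fewer images per forward pass, so less GPU memory.
loader = DataLoader(dataset, batch_size=1)  # start at 1, then raise it until you hit OOM

for (batch,) in loader:
    batch = batch.to("cuda")
    # ... run the model on `batch` here ...

In many text-to-image sampling scripts, --n_samples is effectively this batch size, which is why setting it to 1 helps.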
If the GPU still runs out of memory even at batch size 1, you can fall back to the CPU entirely. Replace

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

with

device = torch.device("cpu")