stable-diffusion
Textual Inversion
Receiving "Out of Memory" error when trying to train; please assist.
Traceback (most recent call last):
File "C:\Users\moham\stable-diffusion-webuia\modules\textual_inversion\textual_inversion.py", line 503, in train_embedding
scaler.scale(loss).backward()
File "C:\Users\moham\stable-diffusion-webuia\venv\lib\site-packages\torch\_tensor.py", line 488, in backward
torch.autograd.backward(
File "C:\Users\moham\stable-diffusion-webuia\venv\lib\site-packages\torch\autograd\__init__.py", line 197, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
File "C:\Users\moham\stable-diffusion-webuia\venv\lib\site-packages\torch\autograd\function.py", line 267, in apply
return user_fn(self, *args)
File "C:\Users\moham\stable-diffusion-webuia\venv\lib\site-packages\torch\utils\checkpoint.py", line 157, in backward
torch.autograd.backward(outputs_with_grad, args_with_grad)
File "C:\Users\moham\stable-diffusion-webuia\venv\lib\site-packages\torch\autograd\__init__.py", line 197, in backward
Variable._execution_engine.run_backward( # Calls into the C++ engine to run the backward pass
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 4.00 GiB total capacity; 2.99 GiB already allocated; 0 bytes free; 3.39 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
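The error message itself suggests one mitigation: since reserved memory (3.39 GiB) exceeds allocated memory (2.99 GiB), setting `max_split_size_mb` via the `PYTORCH_CUDA_ALLOC_CONF` environment variable can reduce allocator fragmentation. A minimal sketch (the value 128 is an illustrative starting point, not a tested recommendation); it must take effect before PyTorch first initializes CUDA, so in practice you would set it in the shell or in webui-user.bat before launching:

```python
import os

# Set the CUDA caching-allocator option the error message mentions.
# This must be in the environment before torch first touches the GPU,
# so setting it in the launcher script/shell is more reliable than mid-run.
# 128 MiB is an example value; tune it for your card.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```

Note that with only 4 GiB of VRAM this alone may not be enough; reducing the training batch size or image resolution is usually needed as well.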
It looks like you don't have enough GPU memory to be able to train this model.
Look at https://github.com/CompVis/stable-diffusion/issues/39
From the error message, I assume your question is about the Stable Diffusion web UI by AUTOMATIC1111. Please be aware that this is a third-party tool and we cannot provide any support for it. You may want to search the issues there, both open and closed, to find out whether other people have reported this problem before, or open a new issue if not.