taming-transformers
PyTorch still raises "out of memory" even with PYTORCH_CUDA_ALLOC_CONF = "max_split_size_mb:128"
Am I reading it correctly that my NVIDIA GPU has 451 MiB of free memory? If so, why does PyTorch still raise an "out of memory" exception?
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.39 GiB (GPU 0; 6.00 GiB total capacity; 4.04 GiB already allocated; 478.00 MiB free; 4.15 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

(venv) PS C:\projects\imageai\venv\stable-diffusion> nvidia-smi
Mon Dec 19 13:05:20 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 527.41       Driver Version: 527.41       CUDA Version: 12.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name            TCC/WDDM | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA GeForce ...  WDDM | 00000000:01:00.0  On |                  N/A |
| N/A   53C    P8     8W /  N/A |    451MiB /  6144MiB |      6%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
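Side note: PYTORCH_CUDA_ALLOC_CONF is only read when the CUDA caching allocator initializes, so the variable has to be in the environment before the first CUDA allocation. A minimal sketch (hypothetical script, not from the repo) of setting it from inside Python instead of the shell:

```python
# Hypothetical sketch: put the allocator config in place before torch touches CUDA.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch  # imported only after the variable is set

x = torch.empty(1, device="cuda")  # first CUDA allocation; the config applies from here
```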
The 451 MiB in the nvidia-smi output is memory in use, not free memory, and that snapshot was taken after your process had already exited. The number that matters is in the error itself: only 478.00 MiB was free when PyTorch tried to allocate 1.39 GiB. Note also that the max_split_size_mb hint only applies when reserved memory is much larger than allocated memory; here 4.15 GiB is reserved against 4.04 GiB allocated, so fragmentation isn't your problem. Either way, you have less than 500 MiB free and you're trying to allocate 1.39 GiB on the device. The GPU isn't large enough to handle it.
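You can confirm this from inside the same process. A small sketch using standard torch.cuda calls (the numbers it prints will come from your own run, not the log above):

```python
import torch

# (free, total) bytes on the current device, as reported by the CUDA driver.
free, total = torch.cuda.mem_get_info()
print(f"free: {free / 2**20:.0f} MiB / total: {total / 2**20:.0f} MiB")

# Returning cached-but-unused blocks to the driver can reclaim a little,
# but it cannot help when the model itself already fills the card.
torch.cuda.empty_cache()

# Detailed allocator statistics: allocated vs. reserved memory, fragmentation, etc.
print(torch.cuda.memory_summary())
```

If free memory stays well under the 1.39 GiB the failing call needs, the realistic fixes are a smaller workload: lower output resolution, a smaller batch size, half precision, or offloading parts of the model to the CPU.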