ChatGLM-Finetuning

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 44.99 GiB total capacity; 6.15 GiB already allocated; 37.32 GiB free; 6.15 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
(bab) root@DESKTOP-D1BLEUL:/mnt/c/Users/oo/chat/ChatGLM-Finetuning# nano finetuning_freeze.py

Open · dragononly opened this issue 1 year ago • 1 comment

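As the traceback itself suggests, one thing to try (a minimal sketch, not part of the original report) is setting PYTORCH_CUDA_ALLOC_CONF with max_split_size_mb before CUDA is initialized, for example at the very top of finetuning_freeze.py:

    import os

    # Assumption (not from the original issue): cap the allocator's split size
    # to reduce fragmentation, as the error message recommends. This must run
    # before torch initializes CUDA, i.e. before "import torch".
    os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

    import torch

The same setting can instead be exported in the shell before launching the script. If the OOM persists even though plenty of memory is reported free, reducing the batch size or maximum sequence length in the training configuration is the more direct remedy.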

dragononly · Apr 18 '23 09:04