
Tesla V100 32GB: running finetuning_lora.py fails with torch.cuda.OutOfMemoryError: CUDA out of memory

Open · Test202010 opened this issue 3 years ago · 3 comments

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 144.00 MiB (GPU 0; 31.75 GiB total capacity; 30.26 GiB already allocated; 68.69 MiB free; 30.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
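The traceback itself hints at one mitigation: since reserved memory (30.33 GiB) is well above the 144 MiB allocation that failed, fragmentation may be part of the problem, and capping the allocator's split size can help. A minimal sketch of that suggestion (the value 128 is an illustrative starting point, not a repo-tested setting):

```shell
# Per the error message: when reserved memory >> allocated memory,
# limit the CUDA caching allocator's split size to reduce fragmentation.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128

# Then launch training as usual, e.g.:
# python finetuning_lora.py ...
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

This only reduces fragmentation; it cannot recover memory that is genuinely in use by the model and activations.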

A Tesla V100 32GB can run finetuning_freeze.py and finetuning_pt.py, but finetuning_lora.py runs out of GPU memory. What hardware configuration does finetuning_lora.py require?

Test202010 avatar Apr 13 '23 08:04 Test202010

3090 *2: I also hit the CUDA out-of-memory problem on two 3090s, using LoRA. How can this be solved?

janglichao avatar Apr 13 '23 12:04 janglichao

You can reduce the maximum sequence length.

liucongg avatar Apr 16 '23 07:04 liucongg
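The suggestion above works because activation memory grows with sequence length, so truncating inputs to a shorter maximum frees a large share of GPU memory. A minimal sketch of the idea, assuming the training script exposes a max-length argument that controls truncation (the exact flag name in this repo may differ):

```python
# Hedged sketch: truncate tokenized examples to a smaller maximum length
# before batching; activation memory shrinks roughly in proportion.
def truncate_ids(input_ids, max_len):
    """Keep at most max_len token ids per example."""
    return [ids[:max_len] for ids in input_ids]

# Dummy token-id sequences standing in for real tokenizer output.
batch = [[1] * 2048, [2] * 512]
shorter = truncate_ids(batch, 1024)   # e.g. lower the max length from 2048 to 1024
print([len(s) for s in shorter])      # → [1024, 512]
```

If truncation alone is not enough, lowering the batch size has a similar effect on peak memory.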

3090 *2: I also hit the CUDA out-of-memory problem on two 3090s, using LoRA. How can this be solved?

Did you manage to get partial-parameter fine-tuning working on 3090*2?

199843 avatar May 08 '23 06:05 199843