Tesla V100 32GB running finetuning_lora.py fails with: torch.cuda.OutOfMemoryError: CUDA out of memory
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 144.00 MiB (GPU 0; 31.75 GiB total capacity; 30.26 GiB already allocated; 68.69 MiB free; 30.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
A Tesla V100 32GB can run finetuning_freeze.py and finetuning_pt.py, but finetuning_lora.py runs out of GPU memory. What hardware configuration does finetuning_lora.py require?
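The error message itself hints at one mitigation: since reserved memory (30.33 GiB) is close to allocated memory (30.26 GiB), the pool may be fragmented, and capping the allocator's split size can help. A minimal sketch (the value 128 is an assumption to tune, not a recommendation from this thread):

```shell
# Must be set before the first CUDA allocation, i.e. before the
# training script imports torch. Caps the size of blocks the caching
# allocator will split, which can reduce fragmentation-driven OOMs.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
# then launch as usual, e.g.: python finetuning_lora.py
echo "$PYTORCH_CUDA_ALLOC_CONF"
```

If the OOM persists with this setting, the model plus optimizer state genuinely exceeds 32 GiB and the batch size or sequence length has to come down.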
3090 * 2: I also hit CUDA out-of-memory on two 3090s, using LoRA. How can I resolve this?
You can try reducing the maximum sequence length.
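The suggestion above works because activation memory grows roughly linearly with sequence length (and attention memory quadratically), so halving the max length frees a large share of peak memory. A minimal sketch of the idea, assuming a hypothetical `max_len` training argument that truncates tokenized inputs:

```python
def truncate(input_ids, max_len=512):
    """Truncate a token-id sequence to max_len.

    Lowering max_len (e.g. 1024 -> 512) shrinks every activation
    tensor whose shape includes the sequence dimension, which is
    often enough to fit LoRA fine-tuning into a 24/32 GiB card.
    """
    return input_ids[:max_len]

# Example: a 1024-token sample clipped to the smaller budget.
ids = list(range(1024))
print(len(truncate(ids)))  # 512
```

The exact flag name differs per script (check the argparse options in finetuning_lora.py); the point is only that the length cap, not the LoRA rank, usually dominates peak memory.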
Did you manage to get partial-parameter fine-tuning working on 3090 * 2?