llama-lora-fine-tuning

RuntimeError: CUDA error: out of memory

Open Hzzhang-nlp opened this issue 2 years ago • 4 comments

I have two 3060 graphics cards with a total of 24 GB of memory. Why is this error still reported? (screenshot)

Hzzhang-nlp avatar Jun 27 '23 07:06 Hzzhang-nlp
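A back-of-envelope check may explain the error even with 24 GB total. Two 12 GB cards do not pool into a single 24 GB device: each GPU can only hold what is placed on it, so the per-card limit is what matters. The sketch below assumes a LLaMA-7B model (a guess based on the repo name, not stated in the thread) loaded in fp16:

```python
# Assumption: LLaMA-7B in fp16; each 3060 has 12 GB, and the cards do NOT
# pool memory, so each GPU is limited to its own 12 GB.
params = 7_000_000_000          # approximate LLaMA-7B parameter count
bytes_per_param_fp16 = 2        # fp16 stores 2 bytes per parameter
weights_gb = params * bytes_per_param_fp16 / 1024**3
print(round(weights_gb, 1))     # -> 13.0 (weights alone exceed 12 GB on one card)
```

If the loader tries to place the full model on a single card (no `device_map` sharding or quantization), the weights alone overflow 12 GB before training even starts.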

Run `watch -n 1 "nvidia-smi"` to check whether another process is already using the GPUs.

little51 avatar Jun 27 '23 07:06 little51

This happens while loading the model. (screenshot)

Hzzhang-nlp avatar Jun 27 '23 07:06 Hzzhang-nlp

Could the WSL Ubuntu environment we are using be related to this?

Hzzhang-nlp avatar Jun 27 '23 07:06 Hzzhang-nlp

Try reducing model_max_length to 256, and keep per_device_train_batch_size at 1 and per_device_eval_batch_size at 1.

little51 avatar Jun 27 '23 09:06 little51
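The suggestion above can be sketched as a set of training arguments. This is a minimal illustration, assuming the repo's training script accepts Hugging Face `TrainingArguments`-style parameters; the `gradient_accumulation_steps` value is an added assumption, shown only as the usual way to recover a larger effective batch size without extra per-step memory:

```python
# Hypothetical memory-saving settings following the suggestion in the thread.
# Parameter names match Hugging Face transformers' TrainingArguments, but the
# exact script interface is an assumption, not this repo's documented code.
train_config = {
    "model_max_length": 256,            # shorter sequences -> smaller activation memory
    "per_device_train_batch_size": 1,   # one example per GPU per step
    "per_device_eval_batch_size": 1,
    "gradient_accumulation_steps": 16,  # assumed value: trades steps for memory
}

# Effective examples per optimizer step per GPU stays reasonable even
# though each forward/backward pass only holds one example in memory.
effective_batch = (train_config["per_device_train_batch_size"]
                   * train_config["gradient_accumulation_steps"])
print(effective_batch)  # -> 16
```

Activation memory grows roughly linearly with sequence length, so halving model_max_length is often the single largest saving when the OOM occurs during training rather than at load time.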