llama-lora-fine-tuning
RuntimeError: CUDA error: out of memory
I have two RTX 3060 graphics cards with 24 GB of memory in total. Why is this error still reported?
Use watch -n 1 "nvidia-smi" to check whether the other GPU is being used.
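If it helps, here is a minimal sketch of the same check done from inside Python, using PyTorch's torch.cuda.mem_get_info (available in recent PyTorch releases):

```python
import torch

# Print free/total memory for every visible GPU; this mirrors what
# nvidia-smi reports, but from within the Python process itself.
for i in range(torch.cuda.device_count()):
    free, total = torch.cuda.mem_get_info(i)
    print(f"GPU {i}: {free / 1e9:.2f} GB free of {total / 1e9:.2f} GB")
```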
This happens when loading the model (see the sketch below).
Could the WSL Ubuntu environment we are using be related to this?
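One thing to note: if this is the 7B checkpoint, the fp16 weights alone are roughly 13-14 GB, which is more than either 12 GB card holds on its own, so loading can OOM even though the two cards total 24 GB. Below is a minimal sketch of loading with the weights sharded across both GPUs, assuming the transformers and accelerate libraries and a hypothetical local checkpoint path; this is a general workaround, not necessarily how this repo's scripts load the model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint path; substitute the model you are actually loading.
model_path = "./llama-7b-hf"

tokenizer = AutoTokenizer.from_pretrained(model_path)

# device_map="auto" (requires the accelerate package) shards layers across
# both 3060s instead of placing all weights on GPU 0, and fp16 halves the
# footprint relative to fp32. Adding load_in_8bit=True (with bitsandbytes
# installed) would cut the load-time memory further.
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
)
```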
Try reducing model_max_length to 256, and keep per_device_train_batch_size and per_device_eval_batch_size at 1.
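For reference, a hedged sketch of what those settings could look like with Hugging Face TrainingArguments; the output directory and gradient-accumulation value are illustrative, and model_max_length is typically passed to the tokenizer rather than the trainer:

```python
from transformers import AutoTokenizer, TrainingArguments

# model_max_length caps the sequence length at tokenization time; shorter
# sequences shrink activation memory during training.
tokenizer = AutoTokenizer.from_pretrained(
    "./llama-7b-hf",  # hypothetical path, as above
    model_max_length=256,
)

training_args = TrainingArguments(
    output_dir="./lora-out",         # illustrative
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=16,  # illustrative: preserves the effective batch size
    fp16=True,
)
```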