
Running into CUDA out of memory on Colab

Open smilinrobin opened this issue 1 year ago • 8 comments

Hello @mshumer. I am trying to run the code on Colab and am running into a CUDA out of memory error, as below:

OutOfMemoryError                          Traceback (most recent call last)
in <cell line: 14>()
     12
     13 # Reload model in FP16 and merge it with LoRA weights
---> 14 base_model = AutoModelForCausalLM.from_pretrained(
     15     model_name,
     16     low_cpu_mem_usage=True,

4 frames
/usr/local/lib/python3.10/dist-packages/accelerate/utils/modeling.py in set_module_tensor_to_device(module, tensor_name, device, value, dtype, fp16_statistics)
    296             module._parameters[tensor_name] = param_cls(new_value, requires_grad=old_value.requires_grad)
    297         elif isinstance(value, torch.Tensor):
--> 298             new_value = value.to(device)
    299         else:
    300             new_value = torch.tensor(value, device=device)

OutOfMemoryError: CUDA out of memory. Tried to allocate 250.00 MiB (GPU 0; 14.75 GiB total capacity; 13.52 GiB already allocated; 48.81 MiB free; 13.63 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
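
The `max_split_size_mb` hint in that message refers to the `PYTORCH_CUDA_ALLOC_CONF` environment variable, which only takes effect if it is set before torch makes its first CUDA allocation. A minimal sketch (not part of the original notebook; 128 MiB is an arbitrary example value, not a tuned recommendation):

```python
import os

# Must run before the first CUDA allocation, e.g. in the first notebook cell.
# 128 MiB is an example split size only; see the PyTorch memory-management docs.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"

import torch
```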

It's happening at the "Merge the model and store in Google Drive" step.
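
For what it's worth, one common workaround for OOM at the merge step is to reload the base model on the CPU and merge the LoRA weights there, so the merge never touches GPU memory. This is only a sketch, assuming the step uses Hugging Face transformers + peft as the traceback suggests; `model_name` and `adapter_dir` are placeholders for the notebook's actual values:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Reload the base model entirely on CPU in FP16 so the merge does not
# compete with the earlier training run for the T4's ~15 GiB of memory.
base_model = AutoModelForCausalLM.from_pretrained(
    model_name,                    # same model id used for fine-tuning
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16,
    device_map={"": "cpu"},
)

# adapter_dir is a placeholder for wherever the LoRA adapter was saved.
model = PeftModel.from_pretrained(base_model, adapter_dir)
model = model.merge_and_unload()   # fold the LoRA weights into the base model

model.save_pretrained("merged_model")  # then copy the folder to Google Drive
```

Restarting the runtime first (or `del`-ing the training objects and calling `torch.cuda.empty_cache()`) should also release the ~13.5 GiB the trainer is still holding, per the error message above.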

smilinrobin · Aug 15 '23 19:08