litgpt
Falcon 7B fails on 16GB memory with OOM
CUDA OOM on a GPU with 16GB of memory while following https://lightning.ai/pages/blog/falcon-a-guide-to-finetune-and-inference/. I tried reducing the batch size and micro_batch_size as suggested in https://discord.com/channels/1077906959069626439/1116820391885799556/1116820630243909803, but I still see the issue.
Tried with the smallest possible config:

```python
micro_batch_size = 1  # lowered to 1; the original comment says 2 fits into 12GB of VRAM
gradient_accumulation_iters = batch_size // micro_batch_size
```
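For context, here is a minimal sketch (not the litgpt training loop; an `nn.Linear` stands in for the real model) of how `micro_batch_size` and `gradient_accumulation_iters` interact: only `micro_batch_size` samples are held in memory per forward/backward pass, while the effective batch size stays at `batch_size`.

```python
import torch
import torch.nn as nn

batch_size = 8
micro_batch_size = 1
gradient_accumulation_iters = batch_size // micro_batch_size

model = nn.Linear(16, 1)                      # stand-in for the real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
data = torch.randn(batch_size, 16)
targets = torch.randn(batch_size, 1)

optimizer.zero_grad()
for i in range(gradient_accumulation_iters):
    # only a micro-batch is materialized for each forward/backward pass
    xb = data[i * micro_batch_size:(i + 1) * micro_batch_size]
    yb = targets[i * micro_batch_size:(i + 1) * micro_batch_size]
    loss = nn.functional.mse_loss(model(xb), yb)
    (loss / gradient_accumulation_iters).backward()  # accumulate scaled gradients
optimizer.step()                                      # one optimizer step per effective batch
```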
Same here
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 144.00 MiB. GPU 0 has a total capacty of 22.19 GiB of which 98.50 MiB is free. Including non-PyTorch memory, this process has 22.09 GiB memory in use. Of the allocated memory 21.15 GiB is allocated by PyTorch, and 649.42 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
Device 0 [NVIDIA A10G] PCIe GEN 1@ 8x RX: 0.000 kB/s TX: 0.000 kB/s
GPU 0MHz MEM 405MHz TEMP 29°C FAN 0% POW 16 / 300 W
GPU[ 0%] MEM[ 0.3G/24.1G]
```
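As a side note, the traceback above mentions `max_split_size_mb`. A minimal sketch of setting it through `PYTORCH_CUDA_ALLOC_CONF` (128 MB is only an illustrative value; this does not fix a genuine shortage of VRAM, it only helps with fragmentation):

```python
import os

# The CUDA caching allocator reads this at its first allocation, so set it early
# (or export it in the shell before launching the finetuning script).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```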
I also tried finetuning falcon-7b with micro_batch_size = 1 and observed the same: the VRAM consumption is high. One thing we can do is keep max_seq_length small, maybe around 512, because by default it uses model.config.block_size, which is 2048. So while preparing the data in scripts/prepare_alpaca.py (or in your custom dataset preparation), change max_seq_length and try again; see the sketch below.
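A minimal sketch of that idea, assuming a toy tokenizer rather than the exact code in scripts/prepare_alpaca.py: truncate each tokenized sample to max_seq_length instead of the model's block_size (2048), which shrinks activation memory during finetuning.

```python
import torch

max_seq_length = 512  # instead of model.config.block_size == 2048

def encode(text: str) -> list[int]:
    # toy stand-in for the real tokenizer
    return [ord(c) for c in text]

def tokenize(text: str) -> torch.Tensor:
    ids = encode(text)[:max_seq_length]  # truncate long samples
    return torch.tensor(ids, dtype=torch.long)

print(tokenize("a" * 5000).shape)  # torch.Size([512])
```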
```
NVIDIA-SMI 515.43.04    Driver Version: 515.43.04    CUDA Version: 11.7
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA A100 80G...   On  | 00000000:65:00.0 Off |                    0 |
| N/A   58C    P0   176W / 300W |  53165MiB / 81920MiB |     60%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA A100 80G...   On  | 00000000:CA:00.0 Off |                    0 |
| N/A   57C    P0   202W / 300W |  45089MiB / 81920MiB |     75%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                   |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|    0   N/A  N/A      2301      G   /usr/lib/xorg/Xorg                  4MiB |
|    0   N/A  N/A   3993892      C   python                          53158MiB |
|    1   N/A  N/A      2301      G   /usr/lib/xorg/Xorg                  4MiB |
|    1   N/A  N/A   3993958      C   ...nvs/lit-parrot/bin/python    45082MiB |
+-----------------------------------------------------------------------------+
```
Also, the finetuning results are not great compared to finetuning with LoRA / QLoRA. Has anyone observed the same?
Hi! Here's the memory usage using current master (commit b29ca09) with falcon-7b, always passing --precision 16-true:

- finetune/adapter.py: 32.69 GB (micro_batch_size=4), 17.37 GB (micro_batch_size=1)
- finetune/adapter_v2.py: 41.75 GB (micro_batch_size=4), 18.53 GB (micro_batch_size=1)
- finetune/lora.py: 33.74 GB (micro_batch_size=4), 17.5 GB (micro_batch_size=1)
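For reference, a minimal sketch of one way to check peak GPU memory around a training step (not necessarily how the numbers above were measured):

```python
import torch

if torch.cuda.is_available():
    torch.cuda.reset_peak_memory_stats()
    # ... run one or more finetuning iterations here ...
    peak_gb = torch.cuda.max_memory_allocated() / 2**30
    print(f"peak allocated: {peak_gb:.2f} GB")
```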
As @canamika27 suggested, if you force a smaller max_seq_length, that will take less memory. Also note that even though 16-true precision takes less memory, its training is unstable (#140).
I just merged some improvements to reduce the peak memory usage. Please pull the latest changes.
I'll also be adding a guide for dealing with OOMs with #182. Hope this helps!