
Model loading OOM when using FSDP + QLoRA

Open · Neo9061 opened this issue 1 year ago · 3 comments

System Info

Baseline: on a single p4de.24xlarge instance (640 GB GPU, 1,000 GB CPU), I am able to use Q(4-bit)LoRA to train a large model with size close to 300B. device_map is set to "auto" with the code below.

import torch
from transformers import AutoModelForCausalLM

# bnb_config is a 4-bit BitsAndBytesConfig (its definition is not shown in this snippet)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16,
    quantization_config=bnb_config,
)
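
The bnb_config above isn't shown in the snippet; a typical 4-bit QLoRA config looks roughly like the sketch below (the exact values are assumptions, the real config is in the attached reproduction script):

import torch
from transformers import BitsAndBytesConfig

# assumed 4-bit NF4 settings; the actual config is in the attached script
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)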

However, when I use FSDP + QLoRA across two p4de.24xlarge instances, model loading goes OOM on CPU.
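
For context, the documented FSDP + QLoRA recipe differs from the single-node snippet above in two ways: device_map="auto" is dropped (FSDP handles placement), and the 4-bit weights are stored in a shardable dtype. A rough sketch, assuming the same model_id as above (these flags are assumptions about the intended setup, not taken from the attached script):

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# FSDP-compatible 4-bit config: quantized weights are *stored* as bf16 blocks
# so FSDP can flatten and shard them across ranks
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_quant_storage=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    attn_implementation="flash_attention_2",
    torch_dtype=torch.bfloat16,
    quantization_config=bnb_config,
    # no device_map="auto" here: under FSDP, placement happens after loading
)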

Can anyone please share some insights? I have been looking at the from_pretrained method's code here and here. Can I get clarification on the following questions? Many thanks.

  1. For FSDP + QLoRA, during model loading, please comment on whether my understanding is correct:
  • If the model is quantized, it is loaded onto GPU and then cast to CPU, because of is_quantized in this line and this comment.
  • If the model is not quantized, it is loaded directly onto CPU.

  2. The OOM happens on CPU, since I didn't see any "not enough CUDA memory" error. So for a quantized model, when you cast the model to CPU, is only rank 0 doing that, or is every rank casting to CPU, causing CPU memory to explode? Same question for a non-quantized model during loading.

  3. For a quantized model, when you first load it onto GPU, do all GPUs load the model, or only rank 0? (See the sketch after this list for the rank-0-only pattern I have in mind.)
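
What I would expect for questions 2 and 3 is a rank-0-only pattern like the conceptual sketch below; this is a simplification of my understanding, not transformers' actual internals. If I read the accelerate docs correctly, fsdp_cpu_ram_efficient_loading: true in the FSDP config is meant to enable this kind of behavior.

import os
import torch
import torch.distributed as dist
from transformers import AutoConfig, AutoModelForCausalLM

dist.init_process_group("nccl")
local_rank = int(os.environ.get("LOCAL_RANK", "0"))
model_id = "your-model-id"  # placeholder

if local_rank == 0:
    # one process per node reads the checkpoint into CPU RAM
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
else:
    # remaining ranks allocate no real storage; weights would arrive later,
    # e.g. via FSDP's sync_module_states=True broadcast from rank 0
    config = AutoConfig.from_pretrained(model_id)
    with torch.device("meta"):
        model = AutoModelForCausalLM.from_config(config)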

Who can help?

@SunMarc @ArthurZucker

Information

  • [X] The official example scripts
  • [ ] My own modified scripts

Tasks

  • [X] An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • [ ] My own task or dataset (give details below)

Reproduction

Here is my code to reproduce the issue: Distributed-finetuning.zip

Expected behavior

Error-free model loading.

Neo9061 · Jul 01 '24

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

github-actions[bot] · Jul 31 '24

This should be fixed with #32276. Related: #31577

matthewdouglas · Jul 31 '24

Not stale but the PR was reverted!

ArthurZucker · Aug 27 '24