
Unable to do inference on Falcon-40b fine-tuned using LoRA

Open guptashrey opened this issue 2 years ago • 6 comments

Hi everyone,

I was able to finetune a Falcon-40b model using the finetune/lora.py script. Now I am trying to generate responses using the generate/lora.py script, but it gets stuck loading the model and inference doesn't work. Can anyone help me with this?

guptashrey avatar Jun 23 '23 19:06 guptashrey

You are using FSDP for inference, right? It won't fit in a single 80GB card. How many devices are you using?

carmocca avatar Jun 24 '23 17:06 carmocca

No, I just leave the strategy as "auto", which essentially means I am not using FSDP. I also tried using both 1 device and multiple devices, but the model weights just keep loading and the script gets stuck.

guptashrey avatar Jun 24 '23 20:06 guptashrey

The model won't fit on any single 80GB card unless you use quantization. So either you do that, or the model needs to be sharded using FSDP.

I don't know why it would get stuck as you describe, but you won't be able to load the model anyway without one of the above techniques enabled.

carmocca avatar Jun 24 '23 21:06 carmocca
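
As a rough back-of-the-envelope check on the sizes mentioned above (an illustrative sketch, not code from the repo):

```python
# Rough memory estimate for Falcon-40B weights (illustrative only).
n_params = 40e9

bf16_gb = n_params * 2 / 1e9        # ~80 GB: bf16/fp16 uses 2 bytes per parameter
nf4_gb = n_params * 0.5 / 1e9       # ~20 GB: 4-bit quantization uses ~0.5 bytes per parameter
per_gpu_fsdp_gb = bf16_gb / 8       # ~10 GB per GPU when sharded across 8 devices

print(f"bf16 weights: ~{bf16_gb:.0f} GB")
print(f"nf4-quantized weights: ~{nf4_gb:.0f} GB")
print(f"bf16 sharded over 8 GPUs: ~{per_gpu_fsdp_gb:.0f} GB per device (weights only)")
```

So the bf16 weights alone already saturate a single 80 GB card before the KV cache and activations are counted, which is why quantization or FSDP sharding is needed.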

I tried using FSDP with 8 80GB A100 GPUs, but it still gets stuck when running the generate/lora.py script.

The commands I am using are:

  1. python generate/lora.py --lora_path s3/out/adapter/mixed_40b/lit_model_lora_finetuned.pth --checkpoint_dir s3/checkpoints/tiiuae/falcon-40b --strategy "fsdp" --devices 8
  2. python generate/lora.py --lora_path s3/out/adapter/mixed_40b/lit_model_lora_finetuned.pth --checkpoint_dir s3/checkpoints/tiiuae/falcon-40b --devices 8

guptashrey avatar Jun 26 '23 23:06 guptashrey
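
For reference, here is a minimal sketch of what sharded inference with Lightning Fabric's FSDP strategy looks like. This is illustrative only, not the actual generate/lora.py code; the GPT class, config, and checkpoint handling are placeholders.

```python
# A minimal, illustrative sketch of FSDP-sharded inference with Lightning Fabric.
# GPT and config are placeholders, not the exact objects used by generate/lora.py.
import lightning as L
from lightning.fabric.strategies import FSDPStrategy

fabric = L.Fabric(devices=8, strategy=FSDPStrategy(), precision="bf16-true")
fabric.launch()

# Build the model without allocating the full weights on every rank.
with fabric.init_module(empty_init=True):
    model = GPT(config)  # placeholder for the Falcon-40B LoRA model class

model = fabric.setup_module(model)  # wraps and shards the model across the 8 devices

# Checkpoint loading is elided here; generate/lora.py loads the base weights and
# the LoRA adapter weights on top of them before running generation.
model.eval()
```

With FSDP each of the 8 ranks holds roughly one eighth of the bf16 weights (about 10 GB), which is why sharding is the alternative to quantization for a model this size.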

@guptashrey Can you please let us know the configurations you used for finetuning? I ran into OOM with 8 80GB A100 GPUs

gpravi avatar Jun 28 '23 22:06 gpravi

#207

weilong-web avatar Jun 29 '23 07:06 weilong-web