Unable to run inference on Falcon-40b fine-tuned using LoRA
Hi everyone,
I was able to finetune a Falcon-40b model using the finetune/lora.py script. Now I am trying to generate responses using the generate/lora.py script, but it gets stuck loading the model and inference never runs. Can anyone help me with this?
You are using FSDP for inference, right? It won't fit in a single 80GB card. How many devices are you using?
No, I just leave the strategy set to "auto", which essentially means I am not using FSDP. I also tried both 1 device and multiple devices, but the model weights just keep loading and the script gets stuck.
The model won't fit into any single 80GB card unless you use quantization. So either you do that, or the model needs to be sharded with FSDP.
I don't know why it would get stuck as you describe, but you won't be able to load it anyway without one of the above techniques enabled.
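For a rough sense of why: 40B parameters at 2 bytes each (bf16/fp16) is already ~80 GB for the weights alone, before the LoRA adapter, activations, and KV cache. A minimal back-of-the-envelope sketch (the 40e9 parameter count and byte sizes are assumptions for illustration, not exact figures for Falcon-40b):

```python
# Rough memory estimate for a ~40B parameter model at different precisions.
# NUM_PARAMS and the bytes-per-parameter values are illustrative assumptions.
NUM_PARAMS = 40e9

for dtype, bytes_per_param in [("bf16/fp16", 2), ("int8", 1), ("nf4 (4-bit)", 0.5)]:
    weight_gb = NUM_PARAMS * bytes_per_param / 1e9
    print(f"{dtype:>12}: ~{weight_gb:.0f} GB for weights alone")

# bf16/fp16: ~80 GB -> does not fit on one 80 GB card once activations
#                      and the KV cache are added
# int8:      ~40 GB -> fits on a single 80 GB card
# nf4:       ~20 GB -> fits comfortably
```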
I tried using FSDP with 8x 80GB A100 GPUs, but the generate/lora.py script still gets stuck.
The commands I am using are:
- python generate/lora.py --lora_path s3/out/adapter/mixed_40b/lit_model_lora_finetuned.pth --checkpoint_dir s3/checkpoints/tiiuae/falcon-40b --strategy "fsdp" --devices 8
- python generate/lora.py --lora_path s3/out/adapter/mixed_40b/lit_model_lora_finetuned.pth --checkpoint_dir s3/checkpoints/tiiuae/falcon-40b --devices 8
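Since the checkpoints sit under an s3/ mount, one thing I can also do is time how long the raw checkpoint reads take, to rule out slow storage before blaming the generation script. A minimal sketch, assuming the base checkpoint inside the checkpoint dir is named lit_model.pth (adjust to your layout):

```python
import time
from pathlib import Path

import torch

# Paths taken from the commands above; the base checkpoint name is an assumption.
CHECKPOINTS = [
    Path("s3/out/adapter/mixed_40b/lit_model_lora_finetuned.pth"),
    Path("s3/checkpoints/tiiuae/falcon-40b/lit_model.pth"),
]

for ckpt in CHECKPOINTS:
    start = time.perf_counter()
    # Load to CPU only, so this measures disk/network throughput, not GPU setup.
    state_dict = torch.load(ckpt, map_location="cpu")
    elapsed = time.perf_counter() - start
    size_gb = ckpt.stat().st_size / 1e9
    print(f"{ckpt}: {size_gb:.1f} GB loaded in {elapsed:.0f}s ({len(state_dict)} tensors)")
```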
@guptashrey Can you please share the configuration you used for finetuning? I ran into OOM errors with 8x 80GB A100 GPUs.
#207