InternLM-XComposer

Server resources required for fine-tuning a LoRA model

Open • brooks0519 opened this issue 1 year ago • 2 comments

Thanks for your great work. A question about LoRA fine-tuning: what are the minimum server resources (GPU memory and system memory) required to fine-tune a LoRA model?

brooks0519 • Feb 29 '24 01:02

  1. Same question here.
  2. How can a fine-tuned model be converted to an INT4 version manually? Would appreciate it very much if anyone can reply. @yuhangzang (see the sketch after this comment for one possible route)

iFe1er • Mar 04 '24 12:03
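
Editor's note on point 2: the thread contains no official answer, but a commonly used route for InternLM-family models is LMDeploy's AWQ quantizer. The sketch below is an assumption, not a confirmed recipe for this repo: it presumes the LoRA weights have already been merged back into the base model (e.g., with PEFT's `merge_and_unload`) and saved in Hugging Face format under `./merged-model`, a placeholder path.

```bash
# Hedged sketch: 4-bit AWQ quantization of a merged fine-tuned checkpoint
# using LMDeploy. ./merged-model and ./merged-model-4bit are placeholder paths.
pip install lmdeploy

lmdeploy lite auto_awq ./merged-model \
    --w-bits 4 \
    --w-group-size 128 \
    --work-dir ./merged-model-4bit
```

The resulting directory can then be served or loaded through LMDeploy; whether the official INT4 releases of this repo were produced exactly this way is not stated in the thread.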

~24 GB of VRAM is enough: with --batch_size 1 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --gradient_accumulation_steps 8 --max_length 512, LoRA fine-tuning works on an NVIDIA RTX 3090.

thonglv21 • Apr 03 '24 06:04
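
Editor's note: for readers assembling those flags into a full command, the following is a sketch only. The entry point (`finetune.py` launched via torchrun), the model name, and the data path are assumptions, not taken from this thread; only the memory-related flags and their values come from the comment above.

```bash
# Hedged sketch of a single-GPU LoRA fine-tuning run on a 24 GB card (e.g., RTX 3090).
# finetune.py, the model name, and data.json are assumed placeholders; the
# batch-size, accumulation, and length flags are the values reported above.
torchrun --nproc_per_node=1 finetune.py \
    --model_name_or_path internlm/internlm-xcomposer2-vl-7b \
    --data_path data.json \
    --use_lora True \
    --batch_size 1 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --max_length 512 \
    --output_dir output/lora
```

The key memory levers here are the per-device batch size of 1 combined with gradient accumulation (effective batch size 8) and the capped sequence length of 512; raising either is the quickest way to exceed 24 GB.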