InternLM-XComposer
Server resources required for fine-tuning a LoRA model
Thanks for your great work. I have a question about LoRA fine-tuning: what are the minimum server resources (GPU memory and system memory) required to fine-tune a LoRA model?
- Same question here.
- How do I manually convert a fine-tuned model to an INT4 version? It would be much appreciated if anyone could reply. @yuhangzang
~24 GB of VRAM is enough. The following flags work on an NVIDIA RTX 3090: `--batch_size 1 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --gradient_accumulation_steps 8 --max_length 512`.
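For context, a minimal launch command with those flags might look like the sketch below. The script name (`finetune.py`), model path, and `--data_path example_data.txt` are assumptions for illustration; substitute the actual fine-tuning entry point and dataset from your checkout of the repo.

```bash
# Sketch of a low-memory LoRA fine-tuning run that fits in ~24 GB VRAM.
# Flags below are copied from the comment above; script/model/data names
# are placeholders and may differ in your version of the repo.
torchrun --nproc_per_node=1 finetune.py \
    --model_name_or_path internlm/internlm-xcomposer2-7b \
    --data_path example_data.txt \
    --use_lora True \
    --batch_size 1 \
    --per_device_train_batch_size 1 \
    --per_device_eval_batch_size 1 \
    --gradient_accumulation_steps 8 \
    --max_length 512
```

With per-device batch size 1 and 8 gradient-accumulation steps, the effective batch size is 8 while only one sample's activations are held in GPU memory at a time; shortening `--max_length` to 512 further reduces the activation footprint, which is what makes a single 24 GB card sufficient.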