LaWGPT
I'd like to ask: during instruction fine-tuning, do you continue training on top of the original LoRA weights, or train new weights from scratch?
Has the author compared the difference between the two?
I just found that in finetune.sh you can set --resume_from_checkpoint "./outputs/train_clm/".
After that, training continues from the previously trained LoRA weights.
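For reference, here is a minimal sketch of how that flag could be wired into finetune.sh. Only the --resume_from_checkpoint "./outputs/train_clm/" flag comes from this thread; the script name and the other arguments are illustrative assumptions, so check them against your copy of the repo.

```sh
# Illustrative sketch, not the exact LaWGPT script: only the
# --resume_from_checkpoint "./outputs/train_clm/" flag is confirmed by this
# thread; the script name and other arguments are assumptions.
# --resume_from_checkpoint makes training resume from the LoRA weights
# saved by the earlier run instead of initializing a new adapter.
python finetune.py \
    --base_model "./models/base_model" \
    --data_path "./data/finetune_data.json" \
    --output_dir "./outputs/train_sft/" \
    --resume_from_checkpoint "./outputs/train_clm/"
```

In alpaca-lora-style training scripts, resume_from_checkpoint typically loads the saved adapter state from that directory and keeps optimizing it, so instruction tuning builds on the earlier LoRA run rather than starting from a freshly initialized adapter.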