What's the difference between these two finetuning methods for enhanced datasets?
AFAIK, there are two methods to continue finetuning from a previously finetuned LoRA model:
- (base model + LoRA model + dataset1) -> (base model + finetuned LoRA model + dataset2)
- (base model + LoRA model + dataset1) -> (full finetuned model + new LoRA model + dataset2)
here "full finetuned model" in the second method means the training result of using the script "run_finetune_with_lora_save_aggregated_weights.sh"
If dataset2 has better quality, which method should I use? Thanks in advance.
Hi! Currently we are also curious about the performance of these two methods; it may take more comprehensive experiments to explore. If you have time, feel free to investigate this with experiments, and you are welcome to share your results with us.
Thank you!