
How to perform validation during the fine-tune training process on llava_llama3_8b_instruct_full_clip_vit_large_p14_336_lora_e1_gpu8_finetune?

Open J0eky opened this issue 1 year ago • 2 comments

I have been fine-tuning the llava-llama3-8b-v1_1 model on my own dataset using the llava_llama3_8b_instruct_full_clip_vit_large_p14_336_lora_e1_gpu8_finetune_copy.py script. While the training phase went well, I noticed there is no validation step at any point during training, which raises concerns about overfitting. I would be grateful if someone could offer some advice.

J0eky avatar Jun 06 '24 03:06 J0eky

@J0eky The current main branch code does not support validation during training, but you can use the EvaluateChatHook to inspect sample dialogue outputs at fixed intervals (see the sketch below). We already have validation-during-training functionality on other branches, and we will consider merging it into main in the future.
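
For reference, a minimal sketch of what the EvaluateChatHook section of an xtuner LLaVA fine-tune config typically looks like. The variables referenced here (tokenizer, image_processor, prompt_template, SYSTEM, evaluation_freq, evaluation_inputs, evaluation_images) are assumed to be defined earlier in the config script, as in the stock llava_llama3 configs; adjust them to your own setup.

```python
# Sketch of the custom_hooks section of an xtuner LLaVA fine-tune config.
# Assumes tokenizer, image_processor, prompt_template, SYSTEM, evaluation_freq,
# evaluation_inputs and evaluation_images are defined elsewhere in the config.
from xtuner.engine.hooks import DatasetInfoHook, EvaluateChatHook

custom_hooks = [
    dict(type=DatasetInfoHook, tokenizer=tokenizer),
    dict(
        type=EvaluateChatHook,
        tokenizer=tokenizer,
        image_processor=image_processor,
        every_n_iters=evaluation_freq,        # how often to print sample generations
        evaluation_inputs=evaluation_inputs,  # prompts to run through the model
        evaluation_images=evaluation_images,  # images paired with the prompts
        system=SYSTEM,
        prompt_template=prompt_template),
]
```

This only logs qualitative generations on a few fixed prompts rather than computing a validation loss, but it is a quick way to monitor whether the model is drifting or degrading during fine-tuning.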

hhaAndroid avatar Jun 07 '24 10:06 hhaAndroid

Same problem here; validation support during fine-tuning would be very useful.

DwanZhang-AI avatar Jun 12 '24 21:06 DwanZhang-AI