Smaller test/val loss but lower evaluation accuracy
I finetuned llama-7b on gsm-8k with different finetuning methods and compared their test loss and evaluation accuracy. I found that one of the methods has a smaller test/val loss but lower evaluation accuracy. Is this reasonable?
Hello! It may be related to the size of your dataset partition. If the test/val set is too small, the loss will be unstable. On the other hand, the evaluation accuracy depends on a single exact value parsed from the generated text, whereas the val/test loss is averaged over all the tokens the model generates. We also find that the validation loss may not be a reliable indicator of generalization performance. For more details, please refer to our paper. Best regards,
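To make the distinction concrete, here is a minimal sketch (with made-up per-token log-probabilities, not real model outputs) of why the two metrics can disagree: the loss averages over every generated token, while GSM8K-style accuracy only checks the one number parsed after the conventional `#### ` marker.

```python
import math
import re

def token_nll(token_logprobs):
    # val/test loss: average negative log-likelihood over ALL generated tokens
    return -sum(token_logprobs) / len(token_logprobs)

def parse_gsm8k_answer(text):
    # evaluation accuracy: parse ONE exact value from the generated text
    # (GSM8K answers conventionally follow a "#### " marker)
    m = re.search(r"####\s*(-?[\d,\.]+)", text)
    return m.group(1).replace(",", "") if m else None

# Two hypothetical generations for the same question (gold answer: 18).
# Generation A is confident on every token but ends with the wrong number;
# generation B is less confident overall yet ends with the right number.
gen_a = {"logprobs": [-0.05, -0.04, -0.06, -0.05], "text": "... #### 17"}
gen_b = {"logprobs": [-0.40, -0.35, -0.30, -0.45], "text": "... #### 18"}

loss_a = token_nll(gen_a["logprobs"])   # 0.05  (lower loss)
loss_b = token_nll(gen_b["logprobs"])   # 0.375 (higher loss)
correct_a = parse_gsm8k_answer(gen_a["text"]) == "18"  # False
correct_b = parse_gsm8k_answer(gen_b["text"]) == "18"  # True

print(loss_a < loss_b, correct_a, correct_b)  # → True False True
```

So a model can assign high probability to almost every token of a fluent but wrong solution and still score worse on accuracy, which is exactly the mismatch described above.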
I wonder whether the phenomenon discussed in your paper occurs only in the low-fidelity scenario, or in general FL as well?
What we observe in the paper is in a low-fidelity scenario. For finetuning LLMs in general FL, it may be interesting to investigate the relationship between val/test loss and the final evaluation accuracy; I'm not sure there has been a study on this.