Finetuned LLaVA Model output
Hi, I finetuned the LLaVA-v1.5-13B model and managed to benchmark it on a custom dataset.
However, when benchmarking the finetuned version on that dataset, I didn't get the QA pairs for each image, as I did during the standard model's evaluation on the same dataset.
Is there a way to activate those?
Thank you.
Hi, to evaluate a finetuned model, you first need to configure it in config.py and then use the corresponding model name to run the evaluation; see #914 for more details.
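For illustration, here is a minimal sketch of what such an entry could look like, assuming a VLMEvalKit-style config.py that maps model names to constructors via functools.partial. The model name `llava_v1.5_13b_ft` and the checkpoint path are placeholders for your own setup:

```python
# Hypothetical entry added to the LLaVA section of vlmeval/config.py.
# Both the key and model_path below are placeholders for your finetuned model.
from functools import partial
from vlmeval.vlm import LLaVA

llava_series = {
    'llava_v1.5_13b_ft': partial(
        LLaVA,
        model_path='/path/to/your/finetuned-llava-v1.5-13b',
    ),
}
```

You would then pass that same name (here `llava_v1.5_13b_ft`) as the model argument when launching the evaluation, so the toolkit instantiates your finetuned checkpoint instead of the stock one.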
Hi! I've already managed to do that. The only difference is that during the evaluation, the QA pairs don't show up on the console.
Did you check the generate_inner function in the model definition? It would help to provide a working example so we can locate the potential bug.
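As a quick way to check where the output goes missing, here is a hedged sketch of wrapping a model's generate_inner so each question/answer pair is echoed to the console. It assumes the generate_inner(message, dataset=None) convention mentioned above; adapt the names if your wrapper differs:

```python
# Sketch: wrap generate_inner to print the QA pair during evaluation.
# 'generate_inner' and its (message, dataset=None) signature are assumed here.
import functools

def echo_qa(model):
    """Return the model with generate_inner wrapped to print its input/output."""
    original = model.generate_inner

    @functools.wraps(original)
    def wrapped(message, dataset=None):
        answer = original(message, dataset=dataset)
        # Echo the question and answer so they show up on the console.
        print(f'[Q] {message}\n[A] {answer}')
        return answer

    model.generate_inner = wrapped
    return model
```

If the wrapped call prints the question but an empty answer, the issue is likely in the finetuned model's generation; if nothing prints at all, the evaluation path may not be reaching generate_inner.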
Yes. This happens only with LLaVA. I'll give an example below: on the left is part of the evaluation of the standard model, and on the right is part of the evaluation of the finetuned version. In the latter, I don't get the answer the way I do with the standard model.