TinyLLaVA_Factory
Results are not reproducible on Qwen1.5-1.8B
Hi, I tried training with Qwen1.5-1.8B and found that the results vary greatly between runs. For example, I trained three times and the corresponding evaluation scores were 46, 61, and 55 on GQA, and 37, 53, and 43 on TextVQA. I followed your default training settings (global batch size, learning rate, and conv_version), so I'd like to know what could cause such a large difference.
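In case it helps narrow things down, below is the kind of seed pinning I would expect to be needed for run-to-run reproducibility. This is only a minimal sketch: `set_seed_everywhere` is my own hypothetical helper, not part of TinyLLaVA_Factory, and I'm not sure whether the training scripts already do something equivalent, or whether DeepSpeed and data-loader workers introduce extra nondeterminism on top of this.

```python
import os
import random

import numpy as np
import torch


def set_seed_everywhere(seed: int = 42) -> None:
    """Pin the common sources of randomness (hypothetical helper, not from the repo)."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Force deterministic cuDNN kernels; this can slow training down.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
    # Required by some deterministic CUDA ops (e.g. cuBLAS) on recent PyTorch.
    os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"


if __name__ == "__main__":
    set_seed_everywhere(42)
```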
Additionally, if I want to add a new LLM, how can I find the conversation template for it? For example, what would the template be for Qwen2?
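As a starting point for the Qwen2 case, I assume the chat template shipped with the tokenizer could be inspected directly via HuggingFace transformers and then compared against the existing conv_version templates. The sketch below just prints it; the model id `Qwen/Qwen2-1.5B-Instruct` is only an example checkpoint, not a recommendation.

```python
from transformers import AutoTokenizer

# Example Qwen2 checkpoint; other Qwen2-Instruct variants should carry the same chat template.
MODEL_ID = "Qwen/Qwen2-1.5B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# The raw Jinja chat template stored in tokenizer_config.json.
print(tokenizer.chat_template)

# Render a small conversation to see the exact prompt format
# (role markers, separators, and where the assistant turn starts).
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Describe the image."},
]
print(tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True))
```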
Thanks in advance for your answer.