Qwen2.5-VL
Poor visual grounding performance after fine-tuning
I am using LLaMA-Factory to run LoRA fine-tuning of Qwen2-VL-7B-Instruct on the RefCOCO/+/g datasets. However, the visual grounding performance drops dramatically after fine-tuning, from about 90% to almost zero. I already checked this link to fix the M-RoPE bug, but it still doesn't work. Can someone help me? Thank you so much!
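For context, my training setup looks roughly like the sketch below (the dataset name, paths, and hyperparameters here are illustrative placeholders, not my exact values):

```yaml
# Sketch of a LLaMA-Factory LoRA config for Qwen2-VL grounding SFT
# (dataset name, output_dir, and hyperparameters are illustrative)
model_name_or_path: Qwen/Qwen2-VL-7B-Instruct
stage: sft
do_train: true
finetuning_type: lora
lora_target: all
dataset: refcoco_grounding   # custom dataset registered in dataset_info.json
template: qwen2_vl
cutoff_len: 2048
per_device_train_batch_size: 4
gradient_accumulation_steps: 8
learning_rate: 1.0e-4
num_train_epochs: 3.0
bf16: true
output_dir: saves/qwen2_vl-7b/lora/refcoco
```

If anything in this config stands out as a likely cause of the accuracy collapse (e.g. the template or the grounding coordinate format expected by Qwen2-VL), please let me know.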