After the mm_vision_tower and mm_mlp_adapter of llava-onevision-qwen2-7b-ov were fine-tuned, the model parameter shape did not match after the fine-tuning
This is my training setting; after fine-tuning, when I load the fine-tuned model there is an error:
I tried both ZeRO-2 and ZeRO-3, but the fine-tuned model fails with errors during loading in both cases. Is this caused by an improper parameter setting?
Additional information about this error:
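For anyone hitting a similar loading error, it can help to first pinpoint exactly which parameters mismatch between the checkpoint and the model. Below is a minimal diagnostic sketch (the key names and shapes are hypothetical, not taken from this issue), assuming standard PyTorch `state_dict` objects:

```python
import torch

def find_shape_mismatches(expected, loaded):
    """Return {key: (expected_shape, loaded_shape)} for keys whose shapes differ."""
    mismatches = {}
    for key, tensor in expected.items():
        if key in loaded and tuple(loaded[key].shape) != tuple(tensor.shape):
            mismatches[key] = (tuple(tensor.shape), tuple(loaded[key].shape))
    return mismatches

# Toy demonstration with fabricated tensors standing in for real weights;
# in practice, compare model.state_dict() against torch.load(checkpoint_path).
expected = {"mm_projector.weight": torch.zeros(4096, 1152)}
loaded = {"mm_projector.weight": torch.zeros(4096, 1024)}
print(find_shape_mismatches(expected, loaded))
```

Knowing which keys mismatch (e.g. vision tower vs. projector weights) makes it much easier to tell whether the problem is a config mismatch or an incomplete save.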
I had the same problem. I resolved this by following this: https://github.com/LLaVA-VL/LLaVA-NeXT/issues/329
Thanks a lot, it works.
@ZhangYuanhan-AI Hi, can you confirm that this change is correct? It determines whether the fine-tuning has to be redone, so it would be great if you could confirm. (*❦ω❦)
Yes. It works.