Prince Verma
https://github.com/huggingface/transformers/blob/87b30c35892568f9b83d4e8d1233956b8e0cd96c/src/transformers/models/qwen2_vl/modeling_qwen2_vl.py#L1708 I believe we're not calculating the RoPE indices in MLX-VLM, which is causing the problem. Once I comment out this section in transformers, I get the same issue there as well.
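For context, a minimal sketch of the idea behind the linked `get_rope_index` section: Qwen2-VL uses multimodal RoPE (M-RoPE) with three position-id axes (temporal, height, width). For text-only tokens all three axes collapse to the same `arange`, while vision tokens get positions derived from the patch grid. The helper names and the `start` offset below are hypothetical, purely for illustration; this is not the actual transformers or MLX-VLM implementation.

```python
import numpy as np

def text_rope_index(seq_len: int) -> np.ndarray:
    # For text-only spans, M-RoPE degenerates to standard RoPE:
    # temporal, height, and width ids are all the same arange.
    return np.tile(np.arange(seq_len), (3, 1))  # shape (3, seq_len)

def image_rope_index(t: int, h: int, w: int, start: int) -> np.ndarray:
    # For an image patch grid of (t, h, w): the temporal id repeats per
    # frame, the height id per row, and the width id per column,
    # all offset by the position where the image starts in the sequence.
    tt = np.repeat(np.arange(t), h * w)
    hh = np.tile(np.repeat(np.arange(h), w), t)
    ww = np.tile(np.arange(w), t * h)
    return np.stack([tt, hh, ww]) + start  # shape (3, t*h*w)

# Example: 4 text tokens followed by a 1x2x2 image grid
text_ids = text_rope_index(4)
img_ids = image_rope_index(1, 2, 2, start=4)
```

If these indices are never computed, every token effectively falls back to trivial positions, which matches the symptom described above when the transformers code path is commented out.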
@Blaizzy can you re-open this issue? There seems to be a problem as stated above. I'm working on a patch in the meantime.
@Blaizzy also, can you confirm that the issue as described is correct?
@Blaizzy https://github.com/Blaizzy/mlx-vlm/pull/319 can you review this PR? It handles the issue now.
Thank you @ddupont808, I'll be raising the fix by tomorrow; I was away for a couple of days.
@ddupont808 I've raised the fix, please pull and update.
@Blaizzy any chance you were able to do it?