
llava_injection.py fails to run

Open sjowoj opened this issue 2 years ago • 5 comments

Hello, I find that the script llava_injection.py adds new tokens to the tokenizer, but when I run the script the addition doesn't take effect, which leads to a runtime error. For example, DEFAULT_IMAGE_PATCH_TOKEN is assigned id 32000, but the model's vocabulary size is still 32000 (so the maximum valid id is 31999), which results in an indexSelectLargeIndex error. How do you solve this problem?
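To make the failure concrete, here is a minimal, dependency-free sketch of the mismatch described above. The `Embedding` class is a stand-in for the model's real embedding table, and the `<im_patch>` token string is illustrative:

```python
class Embedding:
    """Stand-in for the model's token-embedding table."""

    def __init__(self, vocab_size):
        self.weight = [[0.0] for _ in range(vocab_size)]

    def lookup(self, token_id):
        # Analog of the CUDA indexSelectLargeIndex assertion failure.
        if token_id >= len(self.weight):
            raise IndexError(f"id {token_id} >= table size {len(self.weight)}")
        return self.weight[token_id]

    def resize(self, new_size):
        # Grow the table so newly added token ids become valid rows.
        while len(self.weight) < new_size:
            self.weight.append([0.0])


vocab = {f"tok{i}": i for i in range(32000)}  # base vocab, ids 0..31999
emb = Embedding(len(vocab))

# Adding a token to the vocab without resizing the embeddings reproduces the bug:
vocab["<im_patch>"] = len(vocab)  # new id 32000, but the table only has 32000 rows
try:
    emb.lookup(vocab["<im_patch>"])
except IndexError:
    pass  # id 32000 is out of range, mirroring the reported runtime error

# Fix: resize the embedding table to match the new vocab size first.
emb.resize(len(vocab))
assert emb.lookup(vocab["<im_patch>"]) == [0.0]
```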

sjowoj avatar Sep 23 '23 07:09 sjowoj

Hi, did you manage to solve this issue?

botbw avatar Dec 14 '23 08:12 botbw

> hi did you manage to solve this issue?

I manually added the tokens before the running step, but the final result may be slightly different.

sjowoj avatar Dec 14 '23 14:12 sjowoj
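The manual workaround described above can be sketched as follows, assuming the standard Hugging Face transformers API (`tokenizer.add_tokens` and `model.resize_token_embeddings`). The token strings follow LLaVA's conventions, and the `model`/`tokenizer` objects are whatever llava_injection.py already loads:

```python
# LLaVA's special-token strings (assumed; check the repo's constants).
DEFAULT_IMAGE_PATCH_TOKEN = "<im_patch>"
DEFAULT_IM_START_TOKEN = "<im_start>"
DEFAULT_IM_END_TOKEN = "<im_end>"


def add_llava_tokens(model, tokenizer, use_im_start_end=False):
    """Register LLaVA's special tokens and grow the embedding matrix to match.

    Without the resize_token_embeddings call, any id >= the old vocab size
    triggers the indexSelectLargeIndex error reported in this issue.
    """
    tokens = [DEFAULT_IMAGE_PATCH_TOKEN]
    if use_im_start_end:
        tokens += [DEFAULT_IM_START_TOKEN, DEFAULT_IM_END_TOKEN]
    num_added = tokenizer.add_tokens(tokens, special_tokens=True)
    # Make the embedding table as large as the (now extended) vocabulary.
    model.resize_token_embeddings(len(tokenizer))
    return num_added
```

Calling this once after loading the model and tokenizer, and before building any prompt that uses the image tokens, should make id 32000 a valid row in the embedding matrix.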

> hi did you manage to solve this issue?

> I manually add the tokens before the running step, but maybe the final result would be a little different.

Could you share a quick and simple solution/script? Thanks for your help!

botbw avatar Dec 15 '23 09:12 botbw

> hi did you manage to solve this issue?

> I manually add the tokens before the running step, but maybe the final result would be a little different.

I also met this problem. Could you share a solution/script? Thank you very much!

deepliao avatar Dec 15 '23 14:12 deepliao

> hi did you manage to solve this issue?

> I manually add the tokens before the running step, but maybe the final result would be a little different.

Would it be possible to share a solution? Really appreciate it.

rookiehb avatar Sep 06 '24 20:09 rookiehb