LLaVA-NeXT

0.5b model works fine, 7b model result is `['']`

Open · rookiez7 opened this issue 5 months ago · 1 comment

I re-downloaded this repo and tried transformers versions 4.40.0.dev, 4.40.0, and 4.41.2; the result is still `['']`. What I did: all the weights I use are local. My changes are below.

  1. Meta-Llama-3-8B-Instruct: in `llava/conversation.py`, line 387, changed the tokenizer to `AutoTokenizer.from_pretrained("local_path/LLaVA-NeXT/Meta-Llama-3-8B-Instruct")` (sketched after this list).

  2. siglip-so400m-patch14-384: in `llava-onevision-qwen2-7b-si/config.json`, line 176, set `"vision_tower": "local_path/siglip-so400m-patch14-384"`. That then produced a mismatch error, which I fixed following https://github.com/LLaVA-VL/LLaVA-NeXT/issues/148#issuecomment-2298549964.
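For reference, a minimal sketch of the two local-path edits above; the `local_path/...` locations are placeholders for wherever the weights were downloaded, and the exact line numbers may differ between repo versions:

```python
# Sketch of edit (1): llava/conversation.py, around line 387.
# The path is a placeholder pointing at the locally downloaded Meta-Llama-3-8B-Instruct weights.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "local_path/LLaVA-NeXT/Meta-Llama-3-8B-Instruct"
)

# Sketch of edit (2): llava-onevision-qwen2-7b-si/config.json, around line 176,
# pointing the vision tower at the local SigLIP weights:
#   "vision_tower": "local_path/siglip-so400m-patch14-384",
```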

With these changes the 0.5b model works fine, but the 7b model's result is always `['']`. Below is the output for the 7b model:

(llava) root@sugon:~/work/project/LLaVA-NeXT# python demo_single_image.py 
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loaded LLaVA model: /root/work/project/LLaVA-NeXT_bak/llava-onevision-qwen2-7b-si
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
You are using a model of type llava to instantiate a model of type llava_qwen. This is not supported for all configurations of models and can yield errors.
Loading vision tower: /root/work/project/LLaVA-NeXT/siglip-so400m-patch14-384
Loading checkpoint shards: 100%
Model Class: LlavaQwenForCausalLM
['']
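For context, the `['']` printed at the end is the decoded generation from the demo script. A condensed sketch of what `demo_single_image.py` roughly does is below; it is based on the repo's single-image example, so the paths, the `llava_qwen` / `qwen_1_5` names, and the exact helper signatures are assumptions that may differ between versions:

```python
# Condensed sketch of the single-image inference path (not the author's exact script).
import copy
import torch
from PIL import Image

from llava.model.builder import load_pretrained_model
from llava.mm_utils import process_images, tokenizer_image_token
from llava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN
from llava.conversation import conv_templates

# Local 7b weights, as in the log above.
pretrained = "/root/work/project/LLaVA-NeXT_bak/llava-onevision-qwen2-7b-si"
tokenizer, model, image_processor, max_length = load_pretrained_model(
    pretrained, None, "llava_qwen", device_map="auto"
)
model.eval()

image = Image.open("example.jpg")  # any test image
image_tensor = process_images([image], image_processor, model.config)
image_tensor = [t.to(dtype=torch.float16, device="cuda") for t in image_tensor]

# Build the prompt with the image token, using the Qwen conversation template.
conv = copy.deepcopy(conv_templates["qwen_1_5"])
conv.append_message(conv.roles[0], DEFAULT_IMAGE_TOKEN + "\nWhat is shown in this image?")
conv.append_message(conv.roles[1], None)
input_ids = tokenizer_image_token(
    conv.get_prompt(), tokenizer, IMAGE_TOKEN_INDEX, return_tensors="pt"
).unsqueeze(0).to("cuda")

output_ids = model.generate(
    input_ids,
    images=image_tensor,
    image_sizes=[image.size],
    do_sample=False,
    max_new_tokens=256,
)
# This decoded list is what gets printed; in the 7b run above it comes back as [''].
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```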

How can I fix it? Please give me some advice.

rookiez7 · Aug 27 '24 08:08