What is your torch version? Also, have you tried reinstalling the env?
Hi, I am facing the same issue with the latest code and the 13B v0 model. The CLI script can load the 13B v1-1 model without problems, but when I try to...
Hi Liu. Yes, I am using the latest code, but I just realized that I haven't updated the 13B v0 model. Let me try downloading the latest v0 model...
I was using the old version of the CLI code; maybe that is the issue. Give me some time to try with the new code.
UPDATE: after I downloaded the latest 13B delta v0 weights and regenerated the LLaVA model, it works without problems. Thanks for the help!
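For reference, in case others hit the same thing: regenerating the weights is just the delta-apply step from the README. Roughly what I ran (paths are placeholders; double-check the exact delta repo name on Hugging Face):

```bash
# Apply the v0 delta to the base LLaMA weights to recover the LLaVA model.
python3 -m llava.model.apply_delta \
    --base /path/to/llama-13b \
    --target /path/to/output/LLaVA-13B-v0 \
    --delta liuhaotian/LLaVA-13b-delta-v0
```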
Do you mean it gives an error when the image batch size is larger than 1? Also, an error log would be helpful.
Hi, great work! Since it is the same question, I won't post another issue. After installing the correct version of transformers, I still get the error when using multiple GPUs for...
Sure, here is the log:

```
total gpu resources allocated: 1,2,3,4
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPVisionModel: ['text_model.encoder.layers.5.layer_norm2.bias', 'text_model.encoder.layers.4.self_attn.v_proj.bias', 'text_model.encoder.layers.9.self_attn.v_proj.weight', 'text_model.encoder.layers.8.mlp.fc2.weight', 'text_model.encoder.layers.7.self_attn.k_proj.bias', ...
```
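Side note for anyone reading along: the "Some weights of the model checkpoint ... were not used" warning at the top is expected, not part of the failure. LLaVA loads only the vision tower out of the full CLIP checkpoint, so every `text_model.*` weight is skipped. A minimal reproduction of the warning:

```python
from transformers import CLIPVisionModel

# Loads only the vision half of the full CLIP checkpoint. The text_model.*
# weights in the checkpoint go unused, which is exactly what triggers the
# "Some weights ... were not used when initializing CLIPVisionModel" warning.
vision_tower = CLIPVisionModel.from_pretrained("openai/clip-vit-large-patch14")
```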
Sorry, I cannot start the website on the server; I will try running it on CPU. Yes, I use the same code for the multi-GPU implementation. Thanks for the help.
> Thanks for your response @haotian-liu. I tried replacing these lines in your eval script `llava.eval.run_llava.py`:
>
> ```python
> qs = args.query
> if mm_use_im_start_end:
>     qs = qs...
> ```
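For context on what those lines do: the eval script expands the text query with image placeholder tokens before tokenization. A minimal sketch of the pattern, with stand-ins for the parsed args; the token strings and the 256-token length (a 224px ViT-L/14 tower yields 16x16 patches) are my assumptions about the v0 code, so treat them as illustrative:

```python
# Stand-ins for values the real script gets from argparse / the model config.
class Args:
    query = "What is in this image?"

args = Args()
mm_use_im_start_end = True    # assumption: model trained with start/end tokens
image_token_len = 256         # assumption: 224px ViT-L/14 -> 16*16 patches

DEFAULT_IMAGE_PATCH_TOKEN = "<im_patch>"   # assumed v0 token names
DEFAULT_IM_START_TOKEN = "<im_start>"
DEFAULT_IM_END_TOKEN = "<im_end>"

qs = args.query
if mm_use_im_start_end:
    # Wrap the visual placeholder tokens in explicit start/end markers.
    qs = qs + '\n' + DEFAULT_IM_START_TOKEN \
        + DEFAULT_IMAGE_PATCH_TOKEN * image_token_len + DEFAULT_IM_END_TOKEN
else:
    qs = qs + '\n' + DEFAULT_IMAGE_PATCH_TOKEN * image_token_len
```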