LAVIS

The output is garbled text. What happened?

winnerahao opened this issue · 5 comments

Did 仮�� lots Sep beside „香�陈langle lots curios Profilelangle lots Sep beside „香�陈langle lots curios Profilelangle lots Sep beside „香�陈langle lots curios Profilelangle lots Sep beside „获ensuremath

winnerahao avatar May 15 '23 10:05 winnerahao

Please provide more information, thanks.

LiJunnan1992 avatar May 16 '23 05:05 LiJunnan1992

I have the same error. I just followed the instructions for InstructBLIP, which use the Vicuna 13B weights.

WayneTomas avatar May 18 '23 08:05 WayneTomas

Same issue here.

I have solved it: the error was caused by the wrong version of Vicuna.

LibertFan avatar May 18 '23 11:05 LibertFan

  1. Have you prepared the Vicuna weights?
  2. What is the input image and instruction?

LiJunnan1992 avatar May 18 '23 23:05 LiJunnan1992

Thanks for your reply. I found the reason: it was caused by the wrong version of the Vicuna weights. Currently, we should use Vicuna v1.1, not v0. We changed the version and the problem was solved.

WayneTomas avatar May 19 '23 02:05 WayneTomas

I downloaded the pretrained Vicuna model from here: https://huggingface.co/lmsys/vicuna-13b-delta-v1.1/tree/main

And I'm running this code:

import torch
from PIL import Image
# setup device to use
device = torch.device("cuda") if torch.cuda.is_available() else "cpu"
# load sample image
raw_image = Image.open("../../docs/_static/Confusing-Pictures.jpg").convert("RGB")


from lavis.models import load_model_and_preprocess
# loads InstructBLIP model
print('Loading model...')
model, vis_processors, _ = load_model_and_preprocess(name="blip2_vicuna_instruct", model_type="vicuna13b", is_eval=True, device=device)
print('Loading model done!')
# prepare the image
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)

output = model.generate({"image": image, "prompt": "What is unusual about this image?"})
print(output)

I got the same issue. Do you have any ideas about this? @LiJunnan1992

rentainhe avatar Jun 15 '23 12:06 rentainhe

> I downloaded the pretrained Vicuna model from here: https://huggingface.co/lmsys/vicuna-13b-delta-v1.1/tree/main
> […]

I should have applied the delta to the original LLaMA pretrained model. Sorry for this.

rentainhe avatar Jun 15 '23 15:06 rentainhe
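For anyone landing here: the files at `lmsys/vicuna-13b-delta-v1.1` are delta weights, not a usable model on their own. A minimal sketch of merging them into the original LLaMA-13B weights with FastChat (local paths below are placeholders, and flag names have varied across FastChat releases, so check `python -m fastchat.model.apply_delta --help` for your installed version):

```shell
# Install FastChat, which ships the delta-merging tool
pip install fschat

# Merge the v1.1 delta into your local LLaMA-13B (HF format) weights;
# --target is where the usable Vicuna v1.1 model will be written
python -m fastchat.model.apply_delta \
    --base /path/to/llama-13b-hf \
    --target /path/to/vicuna-13b-v1.1 \
    --delta lmsys/vicuna-13b-delta-v1.1
```

After merging, LAVIS needs to be pointed at the resulting directory (in recent LAVIS versions this is the Vicuna weight path in the InstructBLIP model's yaml config; check your checkout for the exact key).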