LAVIS
The output is garbage. What happened? This is what the model returns:
Did 仮�� lots Sep beside „香�陈langle lots curios Profilelangle lots Sep beside „香�陈langle lots curios Profilelangle lots Sep beside „香�陈langle lots curios Profilelangle lots Sep beside „获ensuremath
Please provide more information, thanks.
I have the same error. I just followed the InstructBLIP instructions, which use the Vicuna 13B weights.
Same issue.
I have solved it. The error is caused by using the wrong version of Vicuna.
- Have you prepared the Vicuna weights?
- What is the input image and instruction?
Thanks for your reply. I found the reason: it was caused by the wrong version of the Vicuna weights. Currently, we should use Vicuna v1.1, not v0. After changing the version, the problem was solved.
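For reference, LAVIS reads the Vicuna path from the model config rather than downloading it, so after switching versions make sure the config points at the v1.1 weights. A minimal check, assuming the config lives at lavis/configs/models/blip2/blip2_instruct_vicuna13b.yaml and uses the llm_model key (both taken from my LAVIS checkout; verify against yours):

from omegaconf import OmegaConf

cfg = OmegaConf.load("lavis/configs/models/blip2/blip2_instruct_vicuna13b.yaml")
# should print the directory of your merged Vicuna v1.1 weights
print(cfg.model.llm_model)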
I downloaded the pretrained Vicuna model from here: https://huggingface.co/lmsys/vicuna-13b-delta-v1.1/tree/main
And I'm running this code:
import torch
from PIL import Image
from lavis.models import load_model_and_preprocess

# set up the device to use
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# load the sample image
raw_image = Image.open("../../docs/_static/Confusing-Pictures.jpg").convert("RGB")

# load the InstructBLIP model and its preprocessors
print('Loading model...')
model, vis_processors, _ = load_model_and_preprocess(name="blip2_vicuna_instruct", model_type="vicuna13b", is_eval=True, device=device)
print('Loading model done!')

# preprocess the image and generate
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)
output = model.generate({"image": image, "prompt": "What is unusual about this image?"})
print(output)
I got the same issue. Do you have any ideas about this? @LiJunnan1992
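One way to narrow this down (just a suggestion): load the merged Vicuna weights on their own with transformers and generate a few tokens. If the text already comes out as mojibake here, the weights themselves are wrong (wrong version, or the delta was never applied) and LAVIS is not the problem. The path below is a placeholder for your local weights:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

path = "/path/to/vicuna-13b-v1.1"  # placeholder: your merged Vicuna weights
tok = AutoTokenizer.from_pretrained(path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(path, torch_dtype=torch.float16, device_map="auto")

# generate a short continuation as a sanity check
ids = tok("What is unusual about this image?", return_tensors="pt").input_ids.to(model.device)
out = model.generate(ids, max_new_tokens=30)
print(tok.decode(out[0], skip_special_tokens=True))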
I should have applied the delta to the original LLaMA pretrained model first. Sorry for this.
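For anyone who hits this later: the files at lmsys/vicuna-13b-delta-v1.1 are delta weights, not a usable model, and they must be added to the original LLaMA-13B weights first. FastChat ships a script for this (python -m fastchat.model.apply_delta); the sketch below only shows the idea, with all local paths as placeholders:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_path = "/path/to/llama-13b-hf"         # placeholder: original LLaMA-13B in HF format
delta_path = "lmsys/vicuna-13b-delta-v1.1"  # the delta repo linked above
target_path = "/path/to/vicuna-13b-v1.1"    # placeholder: output directory

base = AutoModelForCausalLM.from_pretrained(base_path, torch_dtype=torch.float16, low_cpu_mem_usage=True)
delta = AutoModelForCausalLM.from_pretrained(delta_path, torch_dtype=torch.float16, low_cpu_mem_usage=True)

# the v1.1 delta is additive: target = base + delta, parameter by parameter
delta_state = delta.state_dict()
for name, param in base.state_dict().items():
    param.data += delta_state[name]

base.save_pretrained(target_path)
AutoTokenizer.from_pretrained(delta_path, use_fast=False).save_pretrained(target_path)

Point the LAVIS config's llm_model entry at target_path afterwards.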