
Expected Memory to load the model

zifuwan opened this issue 2 years ago · 0 comments

Hi author,

Thanks for the great work. I'm trying to run the demo on an RTX 6000 GPU (24 GB):

import torch
from PIL import Image

from lavis.models import load_model_and_preprocess

# set up the device to use (GPU index 7 on my machine)
device = torch.device("cuda:7") if torch.cuda.is_available() else "cpu"

# load and downscale the sample image
raw_image = Image.open("knife1.png").convert("RGB")
raw_image = raw_image.resize((64, 64))

# load the BLIP-2 pre-trained model with the FlanT5-XXL language model
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_t5", model_type="pretrain_flant5xxl", is_eval=True, device=device
)

# preprocess the image into a batch of size 1 on the target device
image = vis_processors["eval"](raw_image).unsqueeze(0).to(device)

# run visual question answering with a text prompt
model.generate({"image": image, "prompt": "Question: What's inside the blue box? Answer:"})
# (the README demo shows 'singapore' here, but that is the output for its own sample image)

However, I get a CUDA out-of-memory error every time. Could you tell me whether this is expected, and whether there are any solutions?
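
If I understand correctly, the FlanT5-XXL language model alone has roughly 11B parameters (about 44 GB in fp32), so it presumably cannot fit on a 24 GB card even before the ViT and Q-Former are loaded. Would switching to the smaller FlanT5-XL checkpoint be the recommended workaround? A minimal sketch of what I have in mind (assuming pretrain_flant5xl is a valid model_type here and that it actually fits in 24 GB; I have not verified either):

from lavis.models import load_model_and_preprocess

# same pipeline as above, but with the ~3B-parameter FlanT5-XL language model
# (assumption: this variant is small enough for a 24 GB GPU)
model, vis_processors, _ = load_model_and_preprocess(
    name="blip2_t5", model_type="pretrain_flant5xl", is_eval=True, device=device
)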

Thanks.

zifuwan · Oct 03 '23 13:10