Initializing Chat
usage: demo.py [-h] --cfg-path CFG_PATH [--options OPTIONS [OPTIONS ...]]
demo.py: error: unrecognized arguments: --gpu-id 0
I don't know why this error is being reported.
srtj (Apr 20 '23 09:04)
Hello! Are you using the latest code? We updated the code yesterday to include the argument --gpu-id, which specifies the GPU device.
Can you share the whole error message?
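For context, the "unrecognized arguments: --gpu-id 0" error means the local copy of demo.py predates the new flag, so argparse rejects it. As a minimal sketch (the exact parser in the repo may differ in names and defaults), the updated demo.py presumably registers the argument roughly like this:

```python
import argparse

# Sketch of the argument parser the updated demo.py presumably defines;
# --gpu-id is the flag the older code was missing. Names and defaults
# here are assumptions, not copied from the repo.
parser = argparse.ArgumentParser(description="MiniGPT-4 demo")
parser.add_argument("--cfg-path", required=True, help="path to the config file")
parser.add_argument("--gpu-id", type=int, default=0, help="GPU id to run the model on")
parser.add_argument("--options", nargs="+", help="override config options (key=value)")

# With the flag registered, the command line from the error now parses:
args = parser.parse_args(["--cfg-path", "eval_configs/minigpt4_eval.yaml", "--gpu-id", "0"])
print(args.gpu_id)  # 0
```

If pulling the latest code is not an option, dropping `--gpu-id 0` from the command line also avoids the error on the older version.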
E:\github\MiniGPT-4\demo.py:60 in │
│ │
│ 57 model_config = cfg.model_cfg │
│ 58 model_config.device_8bit = args.gpu_id │
│ 59 model_cls = registry.get_model_class(model_config.arch) │
│ ❱ 60 model = model_cls.from_config(model_config).to('cuda:{}'.format(args.gpu_id)) │
│ 61 │
│ 62 vis_processor_cfg = cfg.datasets_cfg.cc_sbu_align.vis_processor.train │
│ 63 vis_processor = registry.get_processor_class(vis_processor_cfg.name).from_config(vis_pro │
│ │
│ E:\github\MiniGPT-4\minigpt4\models\mini_gpt4.py:243 in from_config │
│ │
│ 240 │ │ max_txt_len = cfg.get("max_txt_len", 32) │
│ 241 │ │ end_sym = cfg.get("end_sym", '\n') │
│ 242 │ │ │
│ ❱ 243 │ │ model = cls( │
│ 244 │ │ │ vit_model=vit_model, │
│ 245 │ │ │ q_former_model=q_former_model, │
│ 246 │ │ │ img_size=img_size, │
│ │
│ E:\github\MiniGPT-4\minigpt4\models\mini_gpt4.py:74 in __init__ │
│ │
│ 71 │ │ for layer in self.Qformer.bert.encoder.layer: │
│ 72 │ │ │ layer.output = None │
│ 73 │ │ │ layer.intermediate = None │
│ ❱ 74 │ │ self.load_from_pretrained(url_or_filename=q_former_model) │
│ 75 │ │ │
│ 76 │ │ if freeze_qformer: │
│ 77 │ │ │ for name, param in self.Qformer.named_parameters(): │
│ │
│ E:\github\MiniGPT-4\minigpt4\models\blip2.py:77 in load_from_pretrained │
│ │
│ 74 │ │ │ cached_file = download_cached_file( │
│ 75 │ │ │ │ url_or_filename, check_hash=False, progress=True │
│ 76 │ │ │ ) │
│ ❱ 77 │ │ │ checkpoint = torch.load(cached_file, map_location="cpu") │
│ 78 │ │ elif os.path.isfile(url_or_filename): │
│ 79 │ │ │ checkpoint = torch.load(url_or_filename, map_location="cpu") │
│ 80 │ │ else: │
│ │
│ C:\Users\zj136\anaconda3\envs\minigpt4\lib\site-packages\torch\serialization.py:705 in load │
│ │
│ 702 │ │ │ # If we want to actually tail call to torch.jit.load, we need to │
│ 703 │ │ │ # reset back to the original position. │
│ 704 │ │ │ orig_position = opened_file.tell() │
│ ❱ 705 │ │ │ with _open_zipfile_reader(opened_file) as opened_zipfile: │
│ 706 │ │ │ │ if _is_torchscript_zip(opened_zipfile): │
│ 707 │ │ │ │ │ warnings.warn("'torch.load' received a zip file that looks like a To │
│ 708 │ │ │ │ │ │ │ │ " dispatching to 'torch.jit.load' (call 'torch.jit.loa │
│ │
│ C:\Users\zj136\anaconda3\envs\minigpt4\lib\site-packages\torch\serialization.py:242 in __init__ │
│ │
│ 239 │
│ 240 class _open_zipfile_reader(_opener): │
│ 241 │ def __init__(self, name_or_buffer) -> None: │
│ ❱ 242 │ │ super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_bu │
│ 243 │
│ 244 │
│ 245 class _open_zipfile_writer_file(_opener):
srtj (Apr 20 '23 09:04)
It looks like there is some issue with the cached weights of the Q-Former. Can you find the cached weights, delete them, and try again?
Does the Q-Former need to be downloaded?
srtj (Apr 24 '23 02:04)
Just delete ~/.cache/torch/hub/checkpoints/eva_vit_g.pth and it will work.
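The failure inside _open_zipfile_reader usually means the cached download is truncated or corrupted, since checkpoints written by modern torch.save are zip archives. A small sketch to automate the cleanup (`remove_if_corrupted` is a hypothetical helper, not part of MiniGPT-4; note that legacy pickle-format checkpoints are not zip files and would be wrongly flagged, so only apply this to weights known to be in the zip format):

```python
import os
import zipfile

def remove_if_corrupted(path):
    # Hypothetical helper, not part of MiniGPT-4. A checkpoint saved by
    # modern torch.save is a zip archive, so a cached file that fails the
    # zip check is likely a truncated download -- the kind of file that
    # makes torch.load's zipfile reader raise. Deleting it lets the next
    # run re-download a fresh copy.
    # Caveat: legacy (pre-zip) pickle checkpoints would also fail this
    # check; do not run it over files saved in that older format.
    if os.path.isfile(path) and not zipfile.is_zipfile(path):
        os.remove(path)
        return True
    return False

# Example: scan the default directory torch uses for downloaded weights.
cache_dir = os.path.expanduser("~/.cache/torch/hub/checkpoints")
if os.path.isdir(cache_dir):
    for name in os.listdir(cache_dir):
        if remove_if_corrupted(os.path.join(cache_dir, name)):
            print("removed corrupted cache file:", name)
```

On the next launch, download_cached_file will fetch the missing weights again.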