
[🌟Tutorials🌟] Use MiniGPT-4 in Google Colab or on your computer | Use MiniGPT-4 locally or on Colab

Open WangRongsheng opened this issue 2 years ago • 17 comments

Use MiniGPT-4 in Colab

If you want to use MiniGPT-4 in Google Colab, you must use a GPU runtime and be a Google Colab Pro user; otherwise you will not be able to run it on Colab!

  1. I provided MiniGPT-4 weights prepared following PrepareVicuna.md.
  2. I provided the code in .

Use MiniGPT-4 on your computer

  1. Clone the repo:
git clone https://github.com/Vision-CAIR/MiniGPT-4.git
  2. Install the packages:
pip install -r requirements.txt

The requirements.txt is stored in WangRongsheng/Use-LLMs-in-Colab.

  3. Set the config (see the sketch after this list):
Set llama_model: "wangrongsheng/MiniGPT-4-LLaMA" in minigpt4/configs/models/minigpt4.yaml
Set ckpt: 'pretrained_minigpt4.pth' in eval_configs/minigpt4_eval.yaml
  4. Run MiniGPT-4:
python demo.py --cfg-path eval_configs/minigpt4_eval.yaml
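
If you prefer not to edit the YAML files by hand (e.g. in a Colab cell), here is a minimal, hypothetical Python helper for step 3. It does naive line-based patching rather than full YAML parsing, so comments in the config files are preserved; the paths and values are the ones from this tutorial:

from pathlib import Path

def set_yaml_value(path: str, key: str, value: str) -> None:
    # Naively replace the first "key: ..." line, keeping its indentation.
    p = Path(path)
    lines = p.read_text().splitlines()
    for i, line in enumerate(lines):
        stripped = line.lstrip()
        if stripped.startswith(key + ":"):
            indent = line[: len(line) - len(stripped)]
            lines[i] = f"{indent}{key}: {value}"
            break
    p.write_text("\n".join(lines) + "\n")

set_yaml_value("minigpt4/configs/models/minigpt4.yaml", "llama_model", '"wangrongsheng/MiniGPT-4-LLaMA"')
set_yaml_value("eval_configs/minigpt4_eval.yaml", "ckpt", "'pretrained_minigpt4.pth'")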

Have fun!

WangRongsheng avatar Apr 20 '23 18:04 WangRongsheng

wow, you are the best!

TsuTikgiau avatar Apr 20 '23 19:04 TsuTikgiau

[image]

It is good!

WangRongsheng avatar Apr 20 '23 21:04 WangRongsheng

่ฟ่กŒ!python demo.py --cfg-path eval_configs/minigpt4_eval.yamlๅ‡บ้”™

Initializing Chat
Downloading (…)solve/main/vocab.txt: 100% 232k/232k [00:00<00:00, 8.88MB/s]
Downloading (…)okenizer_config.json: 100% 28.0/28.0 [00:00<00:00, 4.18kB/s]
Downloading (…)lve/main/config.json: 100% 570/570 [00:00<00:00, 225kB/s]
Loading VIT
100% 1.89G/1.89G [00:11<00:00, 182MB/s]
Loading VIT Done
Loading Q-Former
100% 413M/413M [00:02<00:00, 187MB/s]
Loading Q-Former Done
Loading LLAMA
Traceback (most recent call last):
  /content/MiniGPT-4/demo.py:60 in <module>
    model = model_cls.from_config(model_config).to('cuda:{}'.format(args.g...
  /content/MiniGPT-4/minigpt4/models/mini_gpt4.py:243 in from_config
    model = cls(vit_model=vit_model, q_former_model=q_former_model, img_size=img_size, ...
  /content/MiniGPT-4/minigpt4/models/mini_gpt4.py:86 in __init__
    self.llama_tokenizer = LlamaTokenizer.from_pretrained(llama_mo...
  /usr/local/lib/python3.9/dist-packages/transformers/tokenization_utils_base.py:1771 in from_pretrained
    resolved_vocab_files[file_id] = cached_file(pretrained_model_name_or_path, file_path, cache_dir=cache_dir, ...
  /usr/local/lib/python3.9/dist-packages/transformers/utils/hub.py:409 in cached_file
    resolved_file = hf_hub_download(path_or_repo_id, filename, subfolder=None if len(subfolder) == 0 else subfolder, ...
  /usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_validators.py:112 in _inner_fn
    validate_repo_id(arg_value)
  /usr/local/lib/python3.9/dist-packages/huggingface_hub/utils/_validators.py:160 in validate_repo_id
    raise HFValidationError(
HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/path/to/vicuna/weights/'. Use `repo_type` argument if needed.

XuNing2 avatar Apr 21 '23 01:04 XuNing2

@XuNing2 Set llama_model: "wangrongsheng/MiniGPT-4-LLaMA" in minigpt4/configs/models/minigpt4.yaml

WangRongsheng avatar Apr 21 '23 02:04 WangRongsheng
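
For context on this failure: when llama_model is left as the placeholder path '/path/to/vicuna/weights/', which does not exist in Colab, transformers falls through to treating it as a Hugging Face Hub repo id, and the extra slashes then fail validation. A small sketch using the standard huggingface_hub calls from the traceback reproduces the check:

from huggingface_hub.utils import HFValidationError, validate_repo_id

validate_repo_id("wangrongsheng/MiniGPT-4-LLaMA")  # ok: 'namespace/repo_name'
try:
    validate_repo_id("/path/to/vicuna/weights/")   # the placeholder from the config
except HFValidationError as err:
    print(err)  # Repo id must be in the form 'repo_name' or 'namespace/repo_name' ...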

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues

Loading checkpoint shards: 0% 0/3 [00:00<?, ?it/s] It gets stuck here and then just stops...

sanjikk avatar Apr 21 '23 03:04 sanjikk

@sanjikk If you want to use MiniGPT-4 in Google Colab, you must use a GPU runtime and be a Google Colab Pro user; otherwise you will not be able to run it on Colab!

WangRongsheng avatar Apr 21 '23 05:04 WangRongsheng

@WangRongsheng In fact, I am a Pro user and I do use a GPU. In the end I found that I needed to choose the high-end GPU class. Thanks!

sanjikk avatar Apr 21 '23 09:04 sanjikk
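
For anyone hitting the same silent exit while loading checkpoint shards (usually the runtime running out of memory), a quick sanity check of the GPU you were actually assigned, using plain PyTorch, can save time:

import torch

assert torch.cuda.is_available(), "No GPU runtime assigned"
props = torch.cuda.get_device_properties(0)
print(torch.cuda.get_device_name(0))               # e.g. "A100-SXM4-40GB"
print(f"{props.total_memory / 1e9:.1f} GB VRAM")   # the 13B weights need a high-end GPU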

I am a Pro user and I have used the A100, but I get an "UnpicklingError: invalid load key, '<'."

Prompt Example
###Human: <Img><ImageHere></Img> Could you describe the contents of this image for me? ###Assistant:
Load BLIP2-LLM Checkpoint: pretrained_minigpt4.pth
Traceback (most recent call last):
  /content/MiniGPT-4/demo.py:60 in <module>
    model = model_cls.from_config(model_config).to('cuda:{}'.format(args.g...
  /content/MiniGPT-4/minigpt4/models/mini_gpt4.py:265 in from_config
    ckpt = torch.load(ckpt_path, map_location="cpu")
  /usr/local/lib/python3.9/dist-packages/torch/serialization.py:815 in load
    return _legacy_load(opened_file, map_location, pickle_module, ...
  /usr/local/lib/python3.9/dist-packages/torch/serialization.py:1033 in _legacy_load
    magic_number = pickle_module.load(f, **pickle_load_args)
UnpicklingError: invalid load key, '<'.

ChristianAchenbach4815 avatar Apr 21 '23 14:04 ChristianAchenbach4815

@ChristianAchenbach4815 Please check:

  1. Set llama_model: "wangrongsheng/MiniGPT-4-LLaMA" in minigpt4/configs/models/minigpt4.yaml
  2. Set ckpt: 'pretrained_minigpt4.pth' in eval_configs/minigpt4_eval.yaml

WangRongsheng avatar Apr 21 '23 15:04 WangRongsheng

@TsuTikgiau Hi, I have updated the Google Colab notebook to MiniGPT-4 7B; you can enjoy it!

WangRongsheng avatar Apr 23 '23 01:04 WangRongsheng

Hi, after setting llama_model: "wangrongsheng/MiniGPT-4-LLaMA" in minigpt4/configs/models/minigpt4.yaml, which LLaMA model will it load: 13B or 7B?

created-Bi avatar Apr 23 '23 09:04 created-Bi

@created-Bi This will help you: https://colab.research.google.com/drive/1OK4kYsZphwt5DXchKkzMBjYF6jnkqh4R?usp=sharing

WangRongsheng avatar Apr 23 '23 10:04 WangRongsheng

@ChristianAchenbach4815 - The 13B model download URL is incorrect. The right URL is !wget https://huggingface.co/wangrongsheng/MiniGPT4/resolve/main/pretrained_minigpt4.pth (note "resolve/main" instead of "blob/main")

The "blob/main" URL is an HTML page, hence the error

!python demo.py --cfg-path eval_configs/minigpt4_eval.yaml --gpu-id 0

...
UnpicklingError: invalid load key, '<'

After this tiny change, I see no issue on Colab (running on an A100). Thanks @WangRongsheng 🥇

klocatelli avatar Apr 24 '23 01:04 klocatelli
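
The '<' load key is literally the first byte of the HTML page a blob/main URL returns ('<!DOCTYPE html>' or '<html>'). A hedged sketch of downloading via the resolve/main URL and checking you got a real checkpoint rather than HTML:

import urllib.request

url = "https://huggingface.co/wangrongsheng/MiniGPT4/resolve/main/pretrained_minigpt4.pth"
urllib.request.urlretrieve(url, "pretrained_minigpt4.pth")

with open("pretrained_minigpt4.pth", "rb") as f:
    head = f.read(16)
# HTML starts with '<'; a torch checkpoint is a zip/pickle file and does not.
assert not head.lstrip().startswith(b"<"), "Downloaded an HTML page; check the URL"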

Hello, when running with wangrongsheng/MiniGPT-4-LLaMA-7B, I got an error that the shapes of the weight and bias of the llama_proj module mismatch those in the original MiniGPT-4 (4096 vs. 5120). I'm wondering: did you change the shape of the weight and bias in the llama_proj module?

created-Bi avatar Apr 24 '23 03:04 created-Bi

@created-Bi Please give me more of the error output; I can't reproduce this error.

WangRongsheng avatar Apr 24 '23 04:04 WangRongsheng

omg, this colab is GARBAGE, sorry, but it is so hard to use; don't ship half-finished products

I know it's harsh, but why on earth, to use this colab, do we need to:

  1. clone the repo
  2. edit the repo to change those files
  3. edit the colab to point at our repo
  4. and then hit an error similar to @klocatelli's (after this I just gave up and came here to shitpost)

ArtemBernatskyy avatar Apr 24 '23 22:04 ArtemBernatskyy

@ArtemBernatskyy Here are some points to clarify:

  1. There is no official release of MiniGPT-4 yet; it is still being improved, which is why you clone this repo in the Colab.
  2. You only need to edit two key parameters and do some necessary environment setup. If you don't want to do this, you can use the official MiniGPT-4 demo.
  3. I don't understand what you are talking about.
  4. Many people have run it this way, on both Colab and local machines, and they have all reported good results. You should debug these errors; a perfect solution is not available. Perhaps you can do better; I look forward to your pull requests.

WangRongsheng avatar Apr 25 '23 01:04 WangRongsheng


@created-Bi Please give me more of the error output; I can't reproduce this error.

Hi, I got the same error; here is the error information.

/usr/local/lib/python3.10/dist-packages/requests/__init__.py:102: RequestsDependencyWarning: urllib3 (1.26.15) or chardet (5.1.0)/charset_normalizer (2.0.12) doesn't match a supported version!
2023-05-01 08:46:15.855567: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
Initializing Chat
Loading VIT
Loading VIT Done
Loading Q-Former
Loading Q-Former Done
Loading LLAMA

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
For effortless bug reporting copy-paste your error into this form: https://docs.google.com/forms/d/e/1FAIpQLScPB8emS3Thkp66nvqwmjTEgxp8Y9ufuWTzFyr9kJ5AoI47dQ/viewform?usp=sf_link

/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/paths.py:27: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/lib/python3.10/dist-packages/cv2/../../lib64')}
/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/paths.py:105: UserWarning: /usr/local/lib/python3.10/dist-packages/cv2/../../lib64:/usr/lib64-nvidia did not contain libcudart.so as expected! Searching further paths...
... (several more bitsandbytes non-existent-path warnings)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.0
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118_nocublaslt.so...
Loading checkpoint shards: 100% 3/3 [02:17<00:00, 45.86s/it]
Downloading (…)neration_config.json: 100% 137/137 [00:00<00:00, 96.5kB/s]
Loading LLAMA Done
Load 4 training prompts
Prompt Example
###Human: <Img><ImageHere></Img> Describe this image in detail.
###Assistant:
Load BLIP2-LLM Checkpoint: /content/MiniGPT-4/prerained_minigpt4_7b.pth
Traceback (most recent call last):
  /content/MiniGPT-4/demo.py:60 in <module>
    model = model_cls.from_config(model_config).to('cuda:{}'.format(args.g...
  /content/MiniGPT-4/minigpt4/models/mini_gpt4.py:266 in from_config
    msg = model.load_state_dict(ckpt['model'], strict=False)
  /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:2041 in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}...
RuntimeError: Error(s) in loading state_dict for MiniGPT4:
    size mismatch for llama_proj.weight: copying a param with shape torch.Size([4096, 768]) from checkpoint, the shape in current model is torch.Size([5120, 768]).
    size mismatch for llama_proj.bias: copying a param with shape torch.Size([4096]) from checkpoint, the shape in current model is torch.Size([5120]).

kuoyenlo avatar May 01 '23 09:05 kuoyenlo
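
The 4096-vs-5120 mismatch means the checkpoint and the configured llama_model disagree on model size: LLaMA-7B has hidden size 4096 and LLaMA-13B has 5120, so the 7B checkpoint must be paired with the 7B llama_model (not the 13B "wangrongsheng/MiniGPT-4-LLaMA"). A small diagnostic sketch, assuming the checkpoint file from this thread:

import torch

ckpt = torch.load("prerained_minigpt4_7b.pth", map_location="cpu")
# llama_proj maps the Q-Former output (768) to the LLM hidden size:
# torch.Size([4096, 768]) -> 7B checkpoint; torch.Size([5120, 768]) -> 13B.
print(ckpt["model"]["llama_proj.weight"].shape)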

@WangRongsheng

Have you tried running the stage-2 finetune on Colab, with a command like !torchrun --nproc-per-node 1 train.py --cfg-path train_configs/minigpt4_stage2_finetune.yaml?

If I turn on "low_resource: True" in the minigpt4_stage2_finetune.yaml config, the following GPU/CPU device issue happens:

/usr/local/lib/python3.10/dist-packages/requests/__init__.py:102: RequestsDependencyWarning: urllib3 (1.26.15) or chardet (5.1.0)/charset_normalizer (2.0.12) doesn't match a supported version!
2023-05-01 12:30:28.549952: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
| distributed init (rank 0, world 1): env://
2023-05-01 12:30:30,840 [INFO] ===== Running Parameters =====
2023-05-01 12:30:30,841 [INFO] { "amp": true, "batch_size_eval": 6, "batch_size_train": 6, "device": "cuda", "dist_backend": "nccl", "dist_url": "env://", "distributed": true, "evaluate": false, "gpu": 0, "init_lr": 3e-05, "iters_per_epoch": 200, "lr_sched": "linear_warmup_cosine_lr", "max_epoch": 5, "min_lr": 1e-05, "num_workers": 2, "output_dir": "output/minigpt4_stage2_finetune", "rank": 0, "resume_ckpt_path": null, "seed": 42, "task": "image_text_pretrain", "train_splits": ["train"], "warmup_lr": 1e-06, "warmup_steps": 20, "weight_decay": 0.05, "world_size": 1 }
2023-05-01 12:30:30,841 [INFO] ====== Dataset Attributes ======
2023-05-01 12:30:30,841 [INFO] ======== cc_sbu_align =======
2023-05-01 12:30:30,841 [INFO] { "build_info": { "storage": "/content/cc_sbu_align/cc_sbu_align/" }, "data_type": "images", "text_processor": { "train": { "name": "blip_caption" } }, "vis_processor": { "train": { "image_size": 224, "name": "blip2_image_train" } } }
2023-05-01 12:30:30,842 [INFO] ====== Model Attributes ======
2023-05-01 12:30:30,842 [INFO] { "arch": "mini_gpt4", "ckpt": "/content/MiniGPT-4/prerained_minigpt4_7b.pth", "drop_path_rate": 0, "end_sym": "###", "freeze_qformer": true, "freeze_vit": true, "image_size": 224, "llama_model": "wangrongsheng/MiniGPT-4-LLaMA-7B", "low_resource": true, "max_txt_len": 160, "model_type": "pretrain_vicuna", "num_query_token": 32, "prompt": "", "prompt_path": "prompts/alignment.txt", "prompt_template": "###Human: {} ###Assistant: ", "use_grad_checkpoint": false, "vit_precision": "fp16" }
2023-05-01 12:30:30,842 [INFO] Building datasets...
Loading VIT
2023-05-01 12:30:57,246 [INFO] freeze vision encoder
Loading VIT Done
Loading Q-Former
2023-05-01 12:31:02,726 [INFO] load checkpoint from https://storage.googleapis.com/sfr-vision-language-research/LAVIS/models/BLIP2/blip2_pretrained_flant5xxl.pth
2023-05-01 12:31:02,733 [INFO] freeze Qformer
Loading Q-Former Done
Loading LLAMA

===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
For effortless bug reporting copy-paste your error into this form: https://docs.google.com/forms/d/e/1FAIpQLScPB8emS3Thkp66nvqwmjTEgxp8Y9ufuWTzFyr9kJ5AoI47dQ/viewform?usp=sf_link

/usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/paths.py:27: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/usr/local/lib/python3.10/dist-packages/cv2/../../lib64')}
... (several more bitsandbytes non-existent-path warnings)
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching /usr/local/cuda/lib64...
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 7.0
CUDA SETUP: Detected CUDA version 118
CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118_nocublaslt.so...
Loading checkpoint shards: 100% 2/2 [01:11<00:00, 35.64s/it]
Loading LLAMA Done
Load 4 training prompts
Prompt Example
###Human: <Img><ImageHere></Img> Describe this image in detail. ###Assistant:
Load BLIP2-LLM Checkpoint: /content/MiniGPT-4/prerained_minigpt4_7b.pth
2023-05-01 12:32:15,737 [INFO] Start training
2023-05-01 12:32:16,586 [INFO] dataset_ratios not specified, datasets will be concatenated (map-style datasets) or chained (webdataset.DataPipeline).
2023-05-01 12:32:16,586 [INFO] Loaded 3439 records for train split from the dataset.
module.llama_proj.weight
module.llama_proj.bias
2023-05-01 12:32:16,609 [INFO] number of trainable parameters: 3149824
2023-05-01 12:32:16,610 [INFO] Start training epoch 0, 200 iters per inner epoch.
Traceback (most recent call last):
  /content/MiniGPT-4/train.py:103 in <module>
    main()
  /content/MiniGPT-4/train.py:99 in main
    runner.train()
  /content/MiniGPT-4/minigpt4/runners/runner_base.py:378 in train
    train_stats = self.train_epoch(cur_epoch)
  /content/MiniGPT-4/minigpt4/runners/runner_base.py:438 in train_epoch
    return self.task.train_epoch(epoch=epoch, model=self.model, data_loader=self.train_loader, ...
  /content/MiniGPT-4/minigpt4/tasks/base_task.py:114 in train_epoch
    return self._train_inner_loop(epoch=epoch, iters_per_epoch=lr_scheduler.iters_per_epoch, model=model, ...
  /content/MiniGPT-4/minigpt4/tasks/base_task.py:219 in _train_inner_loop
    loss = self.train_step(model=model, samples=samples)
  /content/MiniGPT-4/minigpt4/tasks/base_task.py:68 in train_step
    loss = model(samples)["loss"]
  /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl
    return forward_call(*args, **kwargs)
  /usr/local/lib/python3.10/dist-packages/torch/nn/parallel/distributed.py:1156 in forward
    output = self._run_ddp_forward(*inputs, **kwargs)
  /usr/local/lib/python3.10/dist-packages/torch/nn/parallel/distributed.py:1110 in _run_ddp_forward
    return module_to_run(*inputs[0], **kwargs[0])
  /usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py:1501 in _call_impl
    return forward_call(*args, **kwargs)
  /content/MiniGPT-4/minigpt4/models/mini_gpt4.py:209 in forward
    attention_mask = torch.cat([atts_bos, atts_img, to_regress_tok...
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument tensors in method wrapper_CUDA_cat)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 75154) of binary: /usr/bin/python3
Traceback (most recent call last):
  File "/usr/local/bin/torchrun", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 346, in wrapper
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 794, in main
    run(args)
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/run.py", line 785, in run
    elastic_launch(
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 134, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/usr/local/lib/python3.10/dist-packages/torch/distributed/launcher/api.py", line 250, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

train.py FAILED

Failures: <NO_OTHER_FAILURES>

Root Cause (first observed failure):
[0]:
  time       : 2023-05-01_12:32:56
  host       : f4e4433e4c5e
  rank       : 0 (local_rank: 0)
  exitcode   : 1 (pid: 75154)
  error_file : <N/A>
  traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html

kuoyenlo avatar May 01 '23 12:05 kuoyenlo
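
The underlying failure mode here is generic: torch.cat refuses tensors on different devices, and with low_resource: True parts of the model stay on the CPU while others sit on the GPU. A minimal reproduction of the error message:

import torch

a = torch.ones(2, device="cuda:0")  # requires a GPU runtime
b = torch.ones(2)                   # stays on the CPU
try:
    torch.cat([a, b])
except RuntimeError as err:
    print(err)  # Expected all tensors to be on the same device ...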

Why does llama_model: "wangrongsheng/MiniGPT-4-LLaMA" (the # Vicuna setting) need to connect to the Internet when I run locally?

ZhuJD-China avatar May 04 '23 01:05 ZhuJD-China

Why does llama_model: "wangrongsheng/MiniGPT-4-LLaMA" (the # Vicuna setting) need to connect to the Internet when I run locally?

Model weights are downloaded automatically from the Hugging Face Hub, so you must be online the first time you run.

WangRongsheng avatar May 04 '23 04:05 WangRongsheng
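
One way to avoid repeated downloads, sketched with the standard huggingface_hub API: fetch the weights once, then point llama_model at the local snapshot path for fully local runs.

from huggingface_hub import snapshot_download

local_dir = snapshot_download("wangrongsheng/MiniGPT-4-LLaMA")
print(local_dir)  # set llama_model in minigpt4.yaml to this path for offline runs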

Thank you for the awesome work! One question: how can I change the download location for transformers models?

zhongpeixiang avatar May 11 '23 12:05 zhongpeixiang
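
For the cache-location question, the standard transformers mechanisms are the HF_HOME / TRANSFORMERS_CACHE environment variables or the per-call cache_dir argument; a minimal sketch with a hypothetical target directory:

import os
os.environ["HF_HOME"] = "/data/hf_cache"  # hypothetical path; set before importing transformers

from transformers import LlamaTokenizer
tok = LlamaTokenizer.from_pretrained("wangrongsheng/MiniGPT-4-LLaMA", cache_dir="/data/hf_cache")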

Thanks for the excellent share; I've got it running. Also, with this, can I also deploy a standalone Vicuna?

ddholiday avatar May 15 '23 03:05 ddholiday

Can you share code showing how to use MiniGPT-4 on Colab without the Gradio interface? Thank you!

bakachan19 avatar May 22 '23 12:05 bakachan19

Can you share code showing how to use MiniGPT-4 on Colab without the Gradio interface? Thank you!

There is a demo you can try: https://colab.research.google.com/drive/1VUzWoaGQoEx6OxgcRD742EbMpNlhAPHM?usp=sharing

youyuanrsq avatar May 26 '23 03:05 youyuanrsq

Dear @youyuanrsq, Thank you!

After setting the llama_model and ckpt parameters, it works!

  • Set llama_model: "wangrongsheng/MiniGPT-4-LLaMA-7B" in minigpt4/configs/models/minigpt4.yaml
  • Set ckpt: 'prerained_minigpt4_7b.pth' in eval_configs/minigpt4_eval.yaml

bakachan19 avatar May 26 '23 08:05 bakachan19

[image] @WangRongsheng When pulling the weights with git lfs pull, I get this error.

autosquid avatar May 31 '23 07:05 autosquid

Can you share code showing how to use MiniGPT4-V2 on Colab without the Gradio interface? Thank you!

Yuancheng-Xu avatar Dec 03 '23 03:12 Yuancheng-Xu