
Two reference locations for the model

atxcowboy opened this issue • 5 comments

Hello, just a heads-up: I think it's looking for the model in two different locations:

  • model_zoo/llama\7B\
  • model_zoo\llama_7B_hf

If I copy the model to both locations, the demo server comes up.
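Judging from the two tracebacks below, the first directory is expected to hold the original LLaMA weights (the error asks for `params.json`) and the second the Hugging-Face-converted checkpoint (the error asks for `config.json`), so both need to be populated. A minimal pre-flight check sketch; the directory and marker-file names are taken straight from the error messages, but the helper itself is hypothetical:

```python
import os

# The two directories the tracebacks complain about, each with the marker
# file its error message says is missing. (Helper name is hypothetical.)
REQUIRED = {
    os.path.join("model_zoo", "llama", "7B"): "params.json",   # original LLaMA weights
    os.path.join("model_zoo", "llama_7B_hf"): "config.json",   # HF-converted checkpoint
}

def missing_model_files(base="."):
    """Return the (directory, marker file) pairs that are not present under base."""
    return [(d, f) for d, f in REQUIRED.items()
            if not os.path.isfile(os.path.join(base, d, f))]
```

Running this before `app.py` would make it obvious which of the two locations is the one actually missing.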
(ichat) E:\ai\InternGPT>python -u app.py --load "HuskyVQA_cuda:0,SegmentAnything_cuda:0,ImageOCRRecognition_cuda:0" --port 3456
[05/17 20:42:38] bark.generation WARNING: torch version does not support flash attention. You will get faster inference speed by upgrade torch to newest nightly version.
Initializing InternGPT, load_dict={'HuskyVQA': 'cuda:0', 'SegmentAnything': 'cuda:0', 'ImageOCRRecognition': 'cuda:0'}
[Translated from German:] No distributions are installed for the Windows Subsystem for Linux.
Distributions to install can be found in the Microsoft Store:
https://aka.ms/wslstore
Traceback (most recent call last):
  File "app.py", line 221, in <module>
    bot = ConversationBot(load_dict=load_dict)
  File "E:\ai\InternGPT\iGPT\controllers\ConversationBot.py", line 141, in __init__
    self.models[class_name] = globals()[class_name](device=device)
  File "E:\ai\InternGPT\iGPT\models\husky.py", line 368, in __init__
    download_if_not_exists(base_path="model_zoo/llama",
  File "E:\ai\InternGPT\iGPT\models\husky.py", line 351, in download_if_not_exists
    write_model(
  File "E:\ai\InternGPT\iGPT\models\husky_src\convert_llama_weights_to_hf.py", line 93, in write_model
    params = read_json(os.path.join(input_base_path, "params.json"))
  File "E:\ai\InternGPT\iGPT\models\husky_src\convert_llama_weights_to_hf.py", line 79, in read_json
    with open(path, "r") as f:
FileNotFoundError: [Errno 2] No such file or directory: 'model_zoo/llama\\7B\\params.json'
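The mixed separators in that path (`model_zoo/llama\\7B\\params.json`) are just what `os.path.join` produces on Windows when the base path is written with forward slashes; Windows accepts both, so the mixed path itself is harmless and the real problem is that the directory lacks `params.json`. This can be reproduced on any OS with `ntpath`, the Windows flavor of the standard path module:

```python
import ntpath

# ntpath implements Windows path semantics on any host OS, so we can
# reproduce the exact path from the traceback without running on Windows.
path = ntpath.join("model_zoo/llama", "7B", "params.json")
print(path)  # model_zoo/llama\7B\params.json -- mixed separators, still valid on Windows
```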

(ichat) E:\ai\InternGPT>python -u app.py --load "HuskyVQA_cuda:0,SegmentAnything_cuda:0,ImageOCRRecognition_cuda:0" --port 3456
[05/17 20:44:16] bark.generation WARNING: torch version does not support flash attention. You will get faster inference speed by upgrade torch to newest nightly version.
Initializing InternGPT, load_dict={'HuskyVQA': 'cuda:0', 'SegmentAnything': 'cuda:0', 'ImageOCRRecognition': 'cuda:0'}
Loading base model
Traceback (most recent call last):
  File "app.py", line 221, in <module>
    bot = ConversationBot(load_dict=load_dict)
  File "E:\ai\InternGPT\iGPT\controllers\ConversationBot.py", line 141, in __init__
    self.models[class_name] = globals()[class_name](device=device)
  File "E:\ai\InternGPT\iGPT\models\husky.py", line 368, in __init__
    download_if_not_exists(base_path="model_zoo/llama",
  File "E:\ai\InternGPT\iGPT\models\husky.py", line 359, in download_if_not_exists
    apply_delta(output_dir, new_path, delta_path)
  File "E:\ai\InternGPT\iGPT\models\husky_src\load_ckpt.py", line 11, in apply_delta
    base = AutoModelForCausalLM.from_pretrained(base_model_path, torch_dtype=torch.float16, low_cpu_mem_usage=True)
  File "C:\Users\Sasch\.conda\envs\ichat\lib\site-packages\transformers\models\auto\auto_factory.py", line 441, in from_pretrained
    config, kwargs = AutoConfig.from_pretrained(
  File "C:\Users\Sasch\.conda\envs\ichat\lib\site-packages\transformers\models\auto\configuration_auto.py", line 916, in from_pretrained
    config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "C:\Users\Sasch\.conda\envs\ichat\lib\site-packages\transformers\configuration_utils.py", line 573, in get_config_dict
    config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "C:\Users\Sasch\.conda\envs\ichat\lib\site-packages\transformers\configuration_utils.py", line 628, in _get_config_dict
    resolved_config_file = cached_file(
  File "C:\Users\Sasch\.conda\envs\ichat\lib\site-packages\transformers\utils\hub.py", line 380, in cached_file
    raise EnvironmentError(
OSError: model_zoo\llama_7B_hf does not appear to have a file named config.json. Checkout 'https://huggingface.co/model_zoo\llama_7B_hf/None' for available files.
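The odd `https://huggingface.co/model_zoo\llama_7B_hf/None` suggestion in that error comes from `transformers` falling back to interpreting the string as a Hub repo id once the local directory fails to yield a `config.json`. A small guard sketch that would fail earlier with a clearer message; the helper name is mine, not part of InternGPT:

```python
import os

def is_local_hf_checkpoint(path):
    # A usable local HF checkpoint directory must at least contain config.json;
    # without it, AutoModelForCausalLM.from_pretrained() ends up raising the
    # confusing hub-flavored OSError seen above. (Hypothetical helper.)
    return os.path.isdir(path) and os.path.isfile(os.path.join(path, "config.json"))
```

Checking `is_local_hf_checkpoint("model_zoo/llama_7B_hf")` before calling `apply_delta` would distinguish "conversion never ran" from a genuinely broken checkpoint.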

atxcowboy, May 17 '23 18:05