Visual-Chinese-LLaMA-Alpaca
Several problems when deploying with text_generation_webui
1. FileNotFoundError: [Errno 2] No such file or directory: './models/visualcla_merged-7b/pytorch_model.bin'
For the merged-weights case I ran: cp visualcla/pytorch_model.bin models/visualcla_merged-7b/
Not sure whether this is correct.
2. OSError: Can't load the configuration of './models/visualcla_merged-7b/vision_encoder'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure './models/visualcla_merged-7b/vision_encoder' is the correct path to a directory containing a config.json file
For the merged-weights case I ran: cp -r ./visualcla/vision_encoder/ ./models/visualcla_merged-7b/
Not sure whether this is correct.
3. OSError: ./models/visualcla_merged-7b does not appear to have a file named preprocessor_config.json. Checkout 'https://huggingface.co/./models/visualcla_merged-7b/main' for available files.
For the merged-weights case I ran: cp ./visualcla/preprocessor_config.json models/visualcla_merged-7b/
Not sure whether this is correct.
4. KeyError: 'visual_resampler_config'
After the steps above, I re-ran server.py:
$ python server.py --model=visualcla_merged-7b --multimodal-pipeline=visualcla-7b --chat --settings=settings-visualcla.yaml --share --load-in-8bit
2023-07-27 09:31:45 WARNING:The gradio "share link" feature uses a proprietary executable to create a reverse tunnel. Use it with care.
2023-07-27 09:31:47 INFO:Loading settings from settings-visualcla.yaml...
2023-07-27 09:31:47 INFO:Loading visualcla_merged-7b...
2023-07-27 09:38:36 WARNING:models/visualcla_merged-7b/special_tokens_map.json is different from the original LlamaTokenizer file. It is either customized or outdated.
2023-07-27 09:38:36 INFO:Loaded the model in 408.25 seconds.
2023-07-27 09:38:36 INFO:Loading the extension "multimodal"...
2023-07-27 09:38:36 INFO:VisualCLA - Loading CLIP from ./models/visualcla_merged-7b/vision_encoder as torch.float32 on cuda:0...
2023-07-27 09:38:38 INFO:VisualCLA - Loading visual resampler from ./models/visualcla_merged-7b/ as torch.float32 on cuda:0...
Traceback (most recent call last):
File "/home/yibo/text-generation-webui-Visual-Chinese-LLaMA-Alpaca/server.py", line 1179, in
The config.json file is as follows:
more models/visualcla_merged-7b/config.json
{
  "_name_or_path": "chinese-alpaca-plus-7b/",
  "architectures": [
    "LlamaForCausalLM"
  ],
  "bos_token_id": 1,
  "eos_token_id": 2,
  "hidden_act": "silu",
  "hidden_size": 4096,
  "initializer_range": 0.02,
  "intermediate_size": 11008,
  "max_position_embeddings": 2048,
  "model_type": "llama",
  "num_attention_heads": 32,
  "num_hidden_layers": 32,
  "pad_token_id": 0,
  "rms_norm_eps": 1e-06,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.30.2",
  "use_cache": true,
  "vocab_size": 49954
}
Could you please take a look? Thanks.
Regarding 1, 2 and 3: for the merged weights, you need to copy the text-side weights directory text_encoder (i.e. your visualcla/text_encoder directory) into the models directory of text-generation-webui and name it visualcla_merged-7b, then copy the four tokenizer-related files into models/visualcla_merged-7b, and finally set visualcla_merged_model in text-generation-webui's settings-visualcla.yaml to the directory of the merged weights (i.e. your visualcla directory).
The models/visualcla_merged-7b directory should contain only the text-side files; there is no need to put the vision_encoder, resampler, etc. into it. Those parts are loaded as long as their directory is set correctly in settings-visualcla.yaml, and judging from the error above, the directory configured there is probably wrong.
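For reference, a minimal shell sketch of the layout described above. The directory names and the exact tokenizer file names are assumptions based on this thread, not taken from the repository's documentation; adjust them to your own merge output.

```sh
# Assumed layout: ./visualcla holds the merged VisualCLA weights (with a
# text_encoder/ subdirectory), ./text-generation-webui is the webui checkout.

# 1. Expose the text-side weights to the webui as models/visualcla_merged-7b
cp -r ./visualcla/text_encoder ./text-generation-webui/models/visualcla_merged-7b

# 2. Copy the four tokenizer-related files next to them (the exact file names
#    depend on your merge output, e.g. tokenizer.model, tokenizer_config.json,
#    special_tokens_map.json, ...)
cp ./visualcla/tokenizer* ./visualcla/special_tokens_map.json \
   ./text-generation-webui/models/visualcla_merged-7b/

# 3. The vision encoder and resampler stay in ./visualcla; they are located via
#    the visualcla_merged_model entry in settings-visualcla.yaml.
```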
Yes, it was my misconfiguration. For visualcla_merged_model : [Path/For/Merged_Model], I had previously set this path to models/visualcla_merged-7b, but it should be the path to the merged model. Thanks.
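In other words, the fix is roughly the following change in settings-visualcla.yaml. The path shown is only an illustrative placeholder for wherever the merged weights actually live.

```sh
# Wrong:   visualcla_merged_model: models/visualcla_merged-7b   (text-only copy inside the webui)
# Right:   visualcla_merged_model: /home/yibo/visualcla         (full merged VisualCLA directory; example path)
sed -i 's|^visualcla_merged_model.*|visualcla_merged_model: /home/yibo/visualcla|' settings-visualcla.yaml
```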