DeepSeek-VL
ValueError: The checkpoint you are trying to load has model type `multi_modality` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
Hello! I downloaded this model to use with the LLaMA-Factory framework, and during API deployment I get the following error:

[2024-10-01 00:15:35,483] [INFO] [real_accelerator.py:203:get_accelerator] Setting ds_accelerator to cuda (auto detect)
[INFO|configuration_utils.py:670] 2024-10-01 00:15:38,538 >> loading configuration file /mnt/ssd2/models/deepseek-vl-7b-chat/config.json
Traceback (most recent call last):
  File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 1023, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "/home/ubuntu/miniconda3/envs/panc_math_vscode/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 725, in __getitem__
    raise KeyError(key)
KeyError: 'multi_modality'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ubuntu/miniconda3/envs/panc_math_vscode/bin/llamafactory-cli", line 8, in multi_modality but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
How should I fix this within LLaMA-Factory? My transformers version is 4.45.0; my full environment is shown in the attached screenshot.
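For comparison, loading the checkpoint directly with the recipe from this repo's README (outside LLaMA-Factory) should help isolate whether the checkpoint itself is fine; a minimal sketch, with my local path substituted for the Hugging Face model id:

```python
# Loading recipe from the DeepSeek-VL README, pointed at the local checkpoint
# from the log above. trust_remote_code lets transformers use the custom
# modeling code shipped with the checkpoint.
import torch
from transformers import AutoModelForCausalLM
from deepseek_vl.models import VLChatProcessor, MultiModalityCausalLM

model_path = "/mnt/ssd2/models/deepseek-vl-7b-chat"
vl_chat_processor: VLChatProcessor = VLChatProcessor.from_pretrained(model_path)
tokenizer = vl_chat_processor.tokenizer

vl_gpt: MultiModalityCausalLM = AutoModelForCausalLM.from_pretrained(
    model_path, trust_remote_code=True
)
vl_gpt = vl_gpt.to(torch.bfloat16).cuda().eval()
```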