Langchain-Chatchat
chatyuan does not work
Problem Description: chatyuan does not work.
Steps to Reproduce: edit configs/model_config.py and set LLM_MODEL = "chatyuan".
Expected Result: the application starts normally.
Actual Result:
INFO 2023-05-26 16:51:44,155-1d:
loading model config
llm device: cuda
embedding device: cuda
dir: /home/chinajuanbob/work/langchain-ChatGLM
flagging username: 7a85c8402bd94e54b056d15671e67806
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run
python -m bitsandbytes
and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
================================================================================
bin /home/chinajuanbob/work/langchain-ChatGLM/venv/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda120.so
CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
CUDA SETUP: Highest compute capability among GPUs detected: 8.6
CUDA SETUP: Detected CUDA version 120
CUDA SETUP: Loading binary /home/chinajuanbob/work/langchain-ChatGLM/venv/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda120.so...
Downloading (…)lve/main/config.json: 100%|████████████████████████████████████████████████| 848/848 [00:00<00:00, 3.62MB/s]
Loading ClueAI/ChatYuan-large-v2...
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /home/chinajuanbob/work/langchain-ChatGLM/webui.py:253 in <module> │
│ │
│ 250 │
│ 251 args_dict = vars(args) │
│ 252 shared.loaderCheckPoint = LoaderCheckPoint(args_dict) │
│ ❱ 253 llm_model_ins = shared.loaderLLM() │
│ 254 llm_model_ins.set_history_len(LLM_HISTORY_LEN) │
│ 255 │
│ 256 model_status = init_model(llm_model=llm_model_ins) │
│ │
│ /home/chinajuanbob/work/langchain-ChatGLM/models/shared.py:39 in loaderLLM │
│ │
│ 36 │ │
│ 37 │ loaderCheckPoint.model_path = llm_model_info["local_model_path"] │
│ 38 │ │
│ ❱ 39 │ loaderCheckPoint.reload_model() │
│ 40 │ │
│ 41 │ provides_class = getattr(sys.modules['models'], llm_model_info['provides']) │
│ 42 │ modelInsLLM = provides_class(checkPoint=loaderCheckPoint) │
│ │
│ /home/chinajuanbob/work/langchain-ChatGLM/models/loader/loader.py:365 in reload_model │
│ │
│ 362 │ │ │ except Exception as e: │
│ 363 │ │ │ │ print("加载PrefixEncoder config.json失败") │
│ 364 │ │ │
│ ❱ 365 │ │ self.model, self.tokenizer = self._load_model(self.model_name) │
│ 366 │ │ │
│ 367 │ │ if self.lora: │
│ 368 │ │ │ self._add_lora_to_model([self.lora]) │
│ │
│ /home/chinajuanbob/work/langchain-ChatGLM/models/loader/loader.py:115 in _load_model │
│ │
│ 112 │ │ │ │ num_gpus = torch.cuda.device_count() │
│ 113 │ │ │ │ if num_gpus < 2 and self.device_map is None: │
│ 114 │ │ │ │ │ model = ( │
│ ❱ 115 │ │ │ │ │ │ LoaderClass.from_pretrained(checkpoint, │
│ 116 │ │ │ │ │ │ │ │ │ │ │ │ │ config=self.model_config, │
│ 117 │ │ │ │ │ │ │ │ │ │ │ │ │ torch_dtype=torch.bfloat16 if self.b │
│ 118 │ │ │ │ │ │ │ │ │ │ │ │ │ trust_remote_code=True) │
│ │
│ /home/chinajuanbob/work/langchain-ChatGLM/venv/lib/python3.10/site-packages/transformers/models/ │
│ auto/auto_factory.py:470 in from_pretrained │
│ │
│ 467 │ │ │ return model_class.from_pretrained( │
│ 468 │ │ │ │ pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, │
│ 469 │ │ │ ) │
│ ❱ 470 │ │ raise ValueError( │
│ 471 │ │ │ f"Unrecognized configuration class {config.__class__} for this kind of AutoM │
│ 472 │ │ │ f"Model type should be one of {', '.join(c.__name__ for c in cls._model_mapp │
│ 473 │ │ ) │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
ValueError: Unrecognized configuration class <class 'transformers.models.t5.configuration_t5.T5Config'> for this kind of
AutoModel: AutoModelForCausalLM.
Model type should be one of BartConfig, BertConfig, BertGenerationConfig, BigBirdConfig, BigBirdPegasusConfig,
BioGptConfig, BlenderbotConfig, BlenderbotSmallConfig, BloomConfig, CamembertConfig, CodeGenConfig, CpmAntConfig,
CTRLConfig, Data2VecTextConfig, ElectraConfig, ErnieConfig, GitConfig, GPT2Config, GPT2Config, GPTBigCodeConfig,
GPTNeoConfig, GPTNeoXConfig, GPTNeoXJapaneseConfig, GPTJConfig, LlamaConfig, MarianConfig, MBartConfig, MegaConfig,
MegatronBertConfig, MvpConfig, OpenLlamaConfig, OpenAIGPTConfig, OPTConfig, PegasusConfig, PLBartConfig, ProphetNetConfig,
QDQBertConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig,
RwkvConfig, Speech2Text2Config, TransfoXLConfig, TrOCRConfig, XGLMConfig, XLMConfig, XLMProphetNetConfig, XLMRobertaConfig,
XLMRobertaXLConfig, XLNetConfig, XmodConfig.
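The ValueError comes from the Auto-class dispatch in transformers: each AutoModel* class keeps a mapping from config classes to model classes, and `from_pretrained` fails when the checkpoint's config class is not in that mapping. ChatYuan-large-v2 ships a `T5Config` (an encoder-decoder model), which `AutoModelForCausalLM` does not accept; `AutoModelForSeq2SeqLM` is the Auto class whose mapping covers T5-style configs. A minimal stdlib sketch of that dispatch (not the real transformers internals; all names here are stand-ins):

```python
# Stand-ins for config classes; the real ones live in transformers.
class T5Config:      # encoder-decoder config, as shipped by ClueAI/ChatYuan-large-v2
    pass

class GPT2Config:    # a decoder-only config that AutoModelForCausalLM accepts
    pass

# AutoModelForCausalLM's mapping covers decoder-only configs only.
CAUSAL_LM_MAPPING = {GPT2Config: "GPT2LMHeadModel"}
# AutoModelForSeq2SeqLM's mapping is the one that covers T5-style configs.
SEQ2SEQ_LM_MAPPING = {T5Config: "T5ForConditionalGeneration"}

def auto_dispatch(mapping, config):
    """Return the model class name for this config, mimicking Auto* dispatch."""
    try:
        return mapping[type(config)]
    except KeyError:
        raise ValueError(
            f"Unrecognized configuration class {type(config).__name__} "
            "for this kind of AutoModel"
        ) from None

print(auto_dispatch(SEQ2SEQ_LM_MAPPING, T5Config()))  # T5ForConditionalGeneration
try:
    auto_dispatch(CAUSAL_LM_MAPPING, T5Config())      # reproduces the failure above
except ValueError as e:
    print(e)
```

This mirrors why the hard-coded `AutoModelForCausalLM.from_pretrained` call in `models/loader/loader.py:115` cannot load this checkpoint.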
Environment Information
- langchain-ChatGLM version / commit: f095576
- Deployed with Docker (yes/no): no
- Model used (ChatGLM-6B / ClueAI/ChatYuan-large-v2, etc.): ChatYuan-large-v2
- Embedding model used (GanymedeNil/text2vec-large-chinese, etc.): GanymedeNil/text2vec-large-chinese
- Operating system and version: Ubuntu 22.04
- Python version: 3.10
- Other relevant environment information:
Additional Information: none
I ran into the same error as well.
Same here.
Same here.