
No sentence-transformers model found with name GanymedeNil/text2vec-large-chinese. Creating a new one with MEAN pooling

Open · alfgo opened this issue 1 year ago · 1 comment

ls GanymedeNil/text2vec-large-chinese
config.json  eval_results.txt  pytorch_model.bin  special_tokens_map.json  tokenizer.json  tokenizer_config.json  vocab.txt

The files are clearly in the directory; strange.

alfgo avatar Apr 21 '23 09:04 alfgo

Same question

onesecurity avatar Apr 21 '23 13:04 onesecurity

Because this is a plain Hugging Face model that doesn't follow the sentence-transformers layout, a warning is emitted, but the results are still usable.

612twilight avatar Apr 22 '23 01:04 612twilight
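As the comment above notes, the warning fires because sentence-transformers cannot find its own metadata in the model directory. The sketch below illustrates that check; the file name `modules.json` mirrors the sentence-transformers on-disk layout, but the helper function itself is hypothetical, not the library's code:

```python
import os
import tempfile

def is_sentence_transformers_layout(model_dir: str) -> bool:
    # sentence-transformers treats a directory as one of its own models only
    # when modules.json is present; a plain Hugging Face checkpoint lacks it,
    # which is what triggers "Creating a new one with MEAN pooling"
    return os.path.exists(os.path.join(model_dir, "modules.json"))

# demo with a plain Hugging Face style directory (empty placeholder files)
with tempfile.TemporaryDirectory() as d:
    for name in ("config.json", "pytorch_model.bin", "vocab.txt"):
        open(os.path.join(d, name), "w").close()
    print(is_sentence_transformers_layout(d))  # False -> warning path
```

Since only the metadata is missing, the library falls back to wrapping the checkpoint with a default MEAN pooling layer, which is why the embeddings still work.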

It's a warning, not an error; it doesn't affect normal use.

imClumsyPanda avatar Apr 22 '23 02:04 imClumsyPanda

Because this is a plain Hugging Face model that doesn't follow the sentence-transformers layout, a warning is emitted, but the results are still usable.

If you just copy a pooling config file over, the warning goes away, but it doesn't affect usage either way.

HIT-Owen avatar Apr 25 '23 09:04 HIT-Owen
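For anyone who wants to silence the warning, a sketch of the two extra files sentence-transformers looks for: `modules.json` at the model root and `1_Pooling/config.json` describing MEAN pooling. The field names follow the sentence-transformers on-disk layout; the 1024 dimension assumes text2vec-large-chinese's hidden size, and the script only writes into a throwaway temp directory:

```python
import json
import os
import tempfile

# module list sentence-transformers reads from modules.json
modules = [
    {"idx": 0, "name": "0", "path": "", "type": "sentence_transformers.models.Transformer"},
    {"idx": 1, "name": "1", "path": "1_Pooling", "type": "sentence_transformers.models.Pooling"},
]
# MEAN pooling config, written to 1_Pooling/config.json
pooling_cfg = {
    "word_embedding_dimension": 1024,  # text2vec-large-chinese hidden size
    "pooling_mode_mean_tokens": True,
    "pooling_mode_cls_token": False,
    "pooling_mode_max_tokens": False,
    "pooling_mode_mean_sqrt_len_tokens": False,
}

with tempfile.TemporaryDirectory() as model_dir:
    os.makedirs(os.path.join(model_dir, "1_Pooling"))
    with open(os.path.join(model_dir, "modules.json"), "w") as f:
        json.dump(modules, f, indent=2)
    with open(os.path.join(model_dir, "1_Pooling", "config.json"), "w") as f:
        json.dump(pooling_cfg, f, indent=2)
    print(sorted(os.listdir(model_dir)))  # ['1_Pooling', 'modules.json']
```

With both files present the loader uses the declared MEAN pooling instead of creating one, so the warning disappears; as the thread says, the embeddings are identical either way.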

Because this is a plain Hugging Face model that doesn't follow the sentence-transformers layout, a warning is emitted, but the results are still usable.

If you just copy a pooling config file over, the warning goes away, but it doesn't affect usage either way.

Copying the pooling config and module.json over raises an error asking you for sentence_xlnet_config.json; just ignore it.

Sowhat007 avatar May 24 '23 06:05 Sowhat007

(base) root@autodl-container-2fe511b7ae-ac3733a5:~/autodl-tmp/langchain-ChatGLM# python webui.py
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8/8 [00:11<00:00,  1.47s/it]
No sentence-transformers model found with name /root/autodl-tmp/text2vec-large-chinese. Creating a new one with MEAN pooling.
模型已成功加载,可以开始对话,或从右侧选择模式后开始对话
╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ /root/autodl-tmp/langchain-ChatGLM/webui.py:266 in <module>                                      │
│                                                                                                  │
│   263 │   │   │   │   │   │     label="向量匹配 top k",                                          │
│   264 │   │   │   │   │   │     interactive=True)                                                │
│   265 │   │   load_model_button = gr.Button("重新加载模型")                                      │
│ ❱ 266 │   load_model_button.click(reinit_model,                                                  │
│   267 │   │   │   │   │   │   │   show_progress=True,                                            │
│   268 │   │   │   │   │   │   │   inputs=[llm_model, embedding_model, llm_history_len, use_ptu   │
│   269 │   │   │   │   │   │   │   outputs=chatbot                                                │
│                                                                                                  │
│ /root/miniconda3/lib/python3.8/site-packages/gradio/blocks.py:1200 in __exit__                   │
│                                                                                                  │
│   1197 │   │   │   Context.root_block = None                                                     │
│   1198 │   │   else:                                                                             │
│   1199 │   │   │   self.parent.children.extend(self.children)                                    │
│ ❱ 1200 │   │   self.config = self.get_config_file()                                              │
│   1201 │   │   self.app = routes.App.create_app(self)                                            │
│   1202 │   │   self.progress_tracking = any(block_fn.tracks_progress for block_fn in self.fns)   │
│   1203 │   │   self.exited = True                                                                │
│                                                                                                  │
│ /root/miniconda3/lib/python3.8/site-packages/gradio/blocks.py:1176 in get_config_file            │
│                                                                                                  │
│   1173 │   │   │   │   assert isinstance(block, serializing.Serializable)                        │
│   1174 │   │   │   │   block_config["serializer"] = serializer                                   │
│   1175 │   │   │   │   block_config["info"] = {                                                  │
│ ❱ 1176 │   │   │   │   │   "input": list(block.input_api_info()),  # type: ignore                │
│   1177 │   │   │   │   │   "output": list(block.output_api_info()),  # type: ignore              │
│   1178 │   │   │   │   }                                                                         │
│   1179 │   │   │   config["components"].append(block_config)                                     │
│                                                                                                  │
│ /root/miniconda3/lib/python3.8/site-packages/gradio_client/serializing.py:41 in input_api_info   │
│                                                                                                  │
│    38 │   # For backwards compatibility                                                          │
│    39 │   def input_api_info(self) -> tuple[str, str]:                                           │
│    40 │   │   api_info = self.api_info()                                                         │
│ ❱  41 │   │   return (api_info["serialized_input"][0], api_info["serialized_input"][1])          │
│    42 │                                                                                          │
│    43 │   # For backwards compatibility                                                          │
│    44 │   def output_api_info(self) -> tuple[str, str]:                                          │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
KeyError: 'serialized_input'
(base) root@autodl-container-2fe511b7ae-ac3733a5:~/autodl-tmp/langchain-ChatGLM# 

sk142857 avatar Jun 02 '23 05:06 sk142857

Update gradio to the version pinned in requirements.txt.

imClumsyPanda avatar Jun 02 '23 06:06 imClumsyPanda
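The `KeyError: 'serialized_input'` above typically points to a gradio / gradio_client version mismatch rather than a model problem, which is why updating gradio to the pinned version helps. A small sketch for checking whether an installed version is older than a pin; the version strings here are hypothetical examples, not the project's actual pin:

```python
def version_tuple(v: str) -> tuple:
    # crude "major.minor.patch" parse; enough for a newer/older comparison
    return tuple(int(part) for part in v.split(".")[:3] if part.isdigit())

# hypothetical pin and installed version; compare to decide whether to upgrade
pinned = version_tuple("3.28.3")
installed = version_tuple("3.20.0")
print(installed < pinned)  # True -> run: pip install -r requirements.txt
```

Tuple comparison orders versions component by component, so `(3, 20, 0) < (3, 28, 3)` correctly flags the older install.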

It's a warning, not an error; it doesn't affect normal use.

Loading ChatGLM and text2vec-large-chinese separately, I see this warning, and the text2vec-large-chinese model is not loaded into GPU memory.

wgq1134711420 avatar Jul 13 '23 12:07 wgq1134711420
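On the GPU observation above: sentence-transformers keeps a model on CPU unless a device is passed down explicitly when it is constructed. The helper below is a hypothetical sketch of building that device argument for an embeddings loader; the `"device"` key matches the SentenceTransformer constructor's argument name, but the function itself is not the project's code:

```python
def embedding_model_kwargs(use_gpu: bool) -> dict:
    # kwargs an embeddings loader could pass through so the embedding model
    # is placed on the GPU instead of defaulting to CPU
    return {"device": "cuda" if use_gpu else "cpu"}

print(embedding_model_kwargs(True))   # {'device': 'cuda'}
print(embedding_model_kwargs(False))  # {'device': 'cpu'}
```

If the embedding model stays on CPU, embedding is slower but still correct, which matches the "warning only" behavior described earlier in the thread.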

Because this is a plain Hugging Face model that doesn't follow the sentence-transformers layout, a warning is emitted, but the results are still usable.

Thank you very much.

141forever avatar Jul 10 '24 13:07 141forever