
Stuck at the text-loading step

Open bookwoods opened this issue 2 years ago • 5 comments

Stuck at the text-loading step:

(chatglm) D:\langchain-ChatGLM-master>python cli_demo.py
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 8/8 [00:05<00:00, 1.49it/s]
No sentence-transformers model found with name C:\Users\MSN/.cache\torch\sentence_transformers\GanymedeNil_text2vec-large-chinese. Creating a new one with MEAN pooling.
No sentence-transformers model found with name C:\Users\MSN/.cache\torch\sentence_transformers\GanymedeNil_text2vec-large-chinese. Creating a new one with MEAN pooling.
Input your local knowledge file path 请输入本地知识文件路径:D:\langchain-ChatGLM-master\content\state_of_the_search.txt

It has been stuck at this last step for over an hour. Is that normal? Python 3.8, CUDA 11.7, environment set up as the project specifies. How can I fix this?
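One way to tell a genuine hang from slow progress is to run the blocking step in a worker thread with a timeout. A minimal sketch using only the standard library; `load_knowledge_file` is a hypothetical stand-in for whatever call is blocking (most likely the embedding of the local file), not a function from this project:

```python
import concurrent.futures
import time

def load_knowledge_file(path):
    # Hypothetical stand-in for the blocking call (e.g. embedding the file).
    time.sleep(1)
    return f"loaded {path}"

def run_with_timeout(fn, arg, timeout_s):
    # Run fn(arg) in a worker thread; report a timeout instead of waiting forever.
    with concurrent.futures.ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(fn, arg)
        try:
            return future.result(timeout=timeout_s)
        except concurrent.futures.TimeoutError:
            return None

result = run_with_timeout(load_knowledge_file, "state_of_the_search.txt", timeout_s=5)
print("hung" if result is None else result)
```

If the wrapped call times out while GPU utilization sits near zero, the process is stuck rather than slowly making progress.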

bookwoods avatar Apr 15 '23 14:04 bookwoods

That doesn't seem normal. What does your VRAM usage look like?

imClumsyPanda avatar Apr 15 '23 14:04 imClumsyPanda

It's a 24GB RTX 4090, with 12GB of VRAM in use.

bookwoods avatar Apr 16 '23 01:04 bookwoods

Has this been resolved yet?

imClumsyPanda avatar Apr 17 '23 04:04 imClumsyPanda

Not yet. I've given up on this for now and will try other approaches.

bookwoods avatar Apr 17 '23 11:04 bookwoods

If you use the default chatglm-6b + text2vec models, a GPU with at least 15GB of VRAM is recommended; see the hardware requirements section of README.md for details.
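The 15GB figure can be sanity-checked with back-of-the-envelope arithmetic: fp16 weights take roughly 2 bytes per parameter, before any activations, KV cache, or the text2vec embedding model are counted. A rough sketch, with an approximate parameter count for chatglm-6b:

```python
def weight_gib(n_params, bytes_per_param):
    # Approximate weight memory in GiB: parameters times bytes per parameter.
    return n_params * bytes_per_param / 2**30

chatglm_params = 6.2e9  # approximate parameter count of chatglm-6b

print(f"fp16 weights: {weight_gib(chatglm_params, 2):.1f} GiB")    # ~11.5 GiB
print(f"int4 weights: {weight_gib(chatglm_params, 0.5):.1f} GiB")  # ~2.9 GiB
```

The fp16 weights alone nearly fill a 12GB card, which matches the 12GB usage reported above and explains why the default setup wants around 15GB of headroom.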

With 12GB of VRAM, you could consider the chatglm-6b-int4 model + text2vec.
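Assuming the version in use exposes its model selection in configs/model_config.py (key names may differ between project versions), switching to the int4 model would be a one-line change; a hedged sketch of the config fragment:

```python
# configs/model_config.py (sketch; exact key names may differ by version)
LLM_MODEL = "chatglm-6b-int4"   # was "chatglm-6b"
EMBEDDING_MODEL = "text2vec"    # unchanged
```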

imClumsyPanda avatar Apr 17 '23 11:04 imClumsyPanda