Langchain-Chatchat
Uploaded local knowledge files stop showing after further uploads: only one shows as successful, and the others disappear after a refresh even though the upload succeeded
Hello, this project has been very inspiring, thanks!
However I ran into this problem. Could it be caused by something I did, or is it a known issue?
What is the exact error message?
I don't think I saw an actual error message; here is what I copied from the log:
Input length of input_ids is 12010, but max_length is set to 10000. This can lead to unexpected behavior. You should consider increasing max_new_tokens.
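This warning means the assembled prompt (question plus retrieved context) is longer than the model's generation limit. A minimal sketch of the idea of trimming retrieved chunks to fit a token budget before calling the model; `trim_to_budget` is a hypothetical helper, not project code, and it counts characters as a stand-in for real tokenization:

```python
def trim_to_budget(chunks, max_tokens, count_tokens=len):
    """Keep retrieved chunks in order, dropping from the end once the
    running total would exceed max_tokens. count_tokens is a stand-in
    for a real tokenizer's token count (defaults to character length)."""
    kept, total = [], 0
    for chunk in chunks:
        n = count_tokens(chunk)
        if total + n > max_tokens:
            break
        kept.append(chunk)
        total += n
    return kept

# With a budget of 9 "tokens", only the first two 4-character chunks fit:
# trim_to_budget(["aaaa", "bbbb", "cccc"], 9) -> ["aaaa", "bbbb"]
```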
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using tokenizers before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Loading checkpoint shards: 100%|██████████| 8/8 [00:10<00:00, 1.28s/it]
No sentence-transformers model found with name /opt/langchain-ChatGLM-master/text2vec-large-chinese. Creating a new one with MEAN pooling.
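The repeated huggingface/tokenizers warning in the log above is harmless, and the log itself names the fix: set the `TOKENIZERS_PARALLELISM` environment variable before the tokenizer is used (i.e. before the process forks). A minimal example:

```python
import os

# Must run before the tokenizers library is imported/used,
# as the warning suggests; "false" simply disables parallelism.
os.environ["TOKENIZERS_PARALLELISM"] = "false"
```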
The frontend and backend both seem fine; the files just don't show up. (Only the file from my first successful upload keeps showing after a refresh; the others upload successfully but disappear once I refresh.)
It is probably the text length exceeding the limit. This problem started appearing with this version, and I am also looking into how to fix it. As a temporary workaround you can lower top-k and chunk size.
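Why lowering these two knobs helps: the retrieved context is roughly top-k chunks of chunk-size tokens each, so the prompt length grows as their product. A toy back-of-the-envelope check (the function and numbers are illustrative, not project code):

```python
def prompt_budget_ok(top_k, chunk_size, question_len, max_length=10000):
    """Rough estimate: retrieved context contributes about
    top_k * chunk_size tokens, and the whole prompt (context plus
    question) must stay under the model's max_length."""
    return top_k * chunk_size + question_len <= max_length

# Modest settings fit comfortably:
# prompt_budget_ok(6, 250, 50)   -> True   (1550 <= 10000)
# Large chunks blow the budget, matching the 12010-token warning above:
# prompt_budget_ok(6, 2000, 10)  -> False  (12010 > 10000)
```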
Got it. By chunk size, do you mean this value in web_ui.py? VECTOR_SEARCH_TOP_K = 6
In the latest code these settings have all been moved to configs/model_config.
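To clarify the two settings: `VECTOR_SEARCH_TOP_K` (confirmed in this thread, default 6) is the top-k knob, while the chunk-size setting's exact name may differ in the actual configs/model_config file. An illustrative sketch of what lowered values might look like there; `CHUNK_SIZE` and both values are assumptions:

```python
# configs/model_config.py -- illustrative values only
VECTOR_SEARCH_TOP_K = 3  # lowered from the default of 6 shown in the thread
CHUNK_SIZE = 250         # hypothetical name for the chunk-size setting
```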
@zhaoyiCC Did you solve it? I am running into the same problem.