
Segmentation fault after adding files to the knowledge base and clicking "Upload files and load knowledge base".

Open csmsp opened this issue 2 years ago • 4 comments

csmsp avatar Apr 23 '23 16:04 csmsp

The output after running the service is as follows:

$ python webui.py
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
No compiled kernel found.
Compiling kernels : C:\Users\Reloading.cache\huggingface\modules\transformers_modules\chatglm-6b-int8\quantization_kernels_parallel.c
Compiling gcc -O3 -fPIC -pthread -fopenmp -std=c99 C:\Users\Reloading.cache\huggingface\modules\transformers_modules\chatglm-6b-int8\quantization_kernels_parallel.c -shared -o C:\Users\Reloading.cache\huggingface\modules\transformers_modules\chatglm-6b-int8\quantization_kernels_parallel.so
'gcc' is not recognized as an internal or external command, operable program or batch file.
Compile default cpu kernel failed, using default cpu kernel code.
Compiling gcc -O3 -fPIC -std=c99 C:\Users\Reloading.cache\huggingface\modules\transformers_modules\chatglm-6b-int8\quantization_kernels.c -shared -o C:\Users\Reloading.cache\huggingface\modules\transformers_modules\chatglm-6b-int8\quantization_kernels.so
'gcc' is not recognized as an internal or external command, operable program or batch file.
Compile default cpu kernel failed.
Failed to load kernel.
Cannot load cpu kernel, don't use quantized model on cpu.
Using quantization cache
Applying quantization to glm layers
No sentence-transformers model found with name E:\AIproject\embedding\text2vec-large-chinese. Creating a new one with MEAN pooling.
The dtype of attention mask (torch.int64) is not bool
Running on local URL: http://0.0.0.0:7860

To create a public link, set share=True in launch().

As soon as I choose to upload a document to the local knowledge base, the service exits.

csmsp avatar Apr 24 '23 05:04 csmsp

It looks like the ChatGLM model was not loaded successfully. I'd suggest first getting the model running on its own by following the chatglm-6b project.
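A segmentation fault kills the Python process outright, which is why the web service simply exits without a traceback. One way to confirm whether the model load itself is what crashes is to run the loading code in a child process and inspect its return code. This is a minimal sketch using only the standard library; the `snippet` placeholder is hypothetical and would be replaced with your actual model-loading code (e.g. the chatglm-6b quick-start snippet):

```python
import subprocess
import sys

# Hypothetical placeholder: substitute the real model-loading code you want to
# smoke-test before starting webui.py.
snippet = "print('model load placeholder')"

# Run the snippet in a separate interpreter so a segfault kills only the child.
result = subprocess.run(
    [sys.executable, "-c", snippet],
    capture_output=True,
    text=True,
)

if result.returncode < 0:
    # On POSIX, a negative return code is the terminating signal; -11 is SIGSEGV.
    print(f"child process died with signal {-result.returncode}")
else:
    print("child exited normally:", result.stdout.strip())
```

Note that on Windows a segfault typically surfaces as a large unsigned exit code such as 3221225477 (0xC0000005, access violation) rather than a negative value.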


imClumsyPanda avatar Apr 24 '23 05:04 imClumsyPanda

LLM chat works fine, though.

csmsp avatar Apr 24 '23 05:04 csmsp

I'd suggest searching the WeChat group or the existing issues. I haven't used the quantized models much myself, but it looks like running a quantized model requires gcc.
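The "'gcc' is not recognized" lines in the log above mean no C compiler is on the Windows PATH, so the int8 quantization kernels cannot be compiled at load time. A quick way to check for this before loading a quantized model (a minimal sketch using only the standard library; `has_gcc` is a hypothetical helper, not part of this project):

```python
import shutil

def has_gcc() -> bool:
    """Return True if a gcc executable can be found on PATH."""
    return shutil.which("gcc") is not None

if not has_gcc():
    # chatglm-6b-int8 compiles its quantization kernels with gcc when the model
    # is loaded; without a compiler, either install one (e.g. MinGW-w64 or
    # TDM-GCC on Windows) and add it to PATH, or use the non-quantized model.
    print("gcc not found on PATH; quantized CPU kernels cannot be compiled")
```

After installing a compiler, restart the terminal (or log out and back in) so the updated PATH takes effect before running `python webui.py` again.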


imClumsyPanda avatar Apr 24 '23 05:04 imClumsyPanda