ZhuRonghua
Hello, when running langchain-ChatGLM locally, the environment and all dependency packages meet the requirements, but running webui.py raises the error below (running cli_demo.py fails similarly). Could you tell me where the problem is? Looking forward to your reply, thanks!

Traceback (most recent call last):
  File "webui.py", line 1, in <module>
    import gradio as gr
  File "E:\Anaconda\envs\py3.8\lib\site-packages\gradio\__init__.py", line 3, in <module>
    import gradio.components as components
  File "E:\Anaconda\envs\py3.8\lib\site-packages\gradio\components.py", line 26, in <module>
    import...
(mychatGLM) PS D:\Users\admin3\zrh\langchain-ChatGLM> python cli_demo.py
Explicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision....
Hello, when using the luhua/chinese_pretrain_mrc_roberta_wwm_ext_large model for Q&A over a local knowledge base, accuracy is high, but the knowledge base is so large that the model takes a long time to return an answer (40s+ with 32 GB of GPU memory). Apart from upgrading the hardware, are there any other methods or suggestions to speed up the computation? Looking forward to your reply, thanks!
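One common direction (independent of the model itself) is to make the retrieval step over the large knowledge base cheap: precompute and L2-normalize the document embeddings once offline, so each query reduces to a single matrix-vector product plus a partial top-k selection instead of repeated per-document scoring. The sketch below is a minimal illustration of that idea with NumPy; the function names `build_index` and `top_k` are hypothetical and not part of langchain-ChatGLM's API.

```python
import numpy as np

def build_index(doc_embeddings):
    # Normalize once, offline, so cosine similarity at query time
    # reduces to a plain dot product over the whole matrix.
    norms = np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
    return doc_embeddings / np.clip(norms, 1e-12, None)

def top_k(index, query_embedding, k=3):
    # One matrix-vector product scores every document at once.
    q = query_embedding / max(np.linalg.norm(query_embedding), 1e-12)
    scores = index @ q
    # argpartition avoids a full sort over the entire knowledge base.
    cand = np.argpartition(-scores, min(k, len(scores) - 1))[:k]
    return cand[np.argsort(-scores[cand])]
```

Reducing `k` (the number of retrieved chunks passed to the model) also directly shortens the prompt the LLM has to process, which is often where most of the 40s goes; an approximate-nearest-neighbor library such as FAISS applies the same principle at larger scale.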