Error after cloning the model for CPU deployment on Windows
Is there an existing issue for this?
- [X] I have searched the existing issues
Current Behavior
python web_demo.py
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.
Explicitly passing a revision is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.
Traceback (most recent call last):
File "web_demo.py", line 5, in
Expected Behavior
No response
Steps To Reproduce
1
Environment
- OS: Windows 10
- Python: 3.8
Anything else?
No response
Could it be that the directory you cloned into contains spaces?
No, it doesn't. Strangely, deploying on Windows always fails with one error or another, but the same deployment runs fine on Arch.
You can try printing kernel_file; an odd path may be what makes os fail. Normally, if no kernel_file argument is passed, kernel_file defaults to "" and os.path.exists will return False.
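As a quick sanity check of that default behavior (a minimal sketch, independent of the repository code):

```python
import os

# When no kernel_file argument is passed, it defaults to the empty string.
# os.path.exists("") returns False, so the "No compiled kernel found."
# branch is taken instead of an os-level error being raised.
kernel_file = ""
print(os.path.exists(kernel_file))  # False
```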
Where exactly do I make this change?
You got it running on Windows? I can't even get past installing the environment.
Where exactly do I make this change?
Clone the repository, then modify CPUKernel in quantization.py. When loading the model, pass the local path of the cloned repository.
Do you mean changing kernels = Kernel to kernels = CPUKernel?
That change targets the failure to load cpm_kernels on macOS; it doesn't help with this issue.
The current problem is likely that the kernel_file path is invalid, causing os to fail. Please print it and post the output:
print(kernel_file)  # add this line
if (not kernel_file) or (not os.path.exists(kernel_file)):
    print("No compiled kernel found.")
Thanks! I haven't tested whether that change works, but after re-pulling 6b and 6b-int4 I found that changing THUDM/chatglm-6b in the demo to ./chatglm-6b makes it run.
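For reference, transformers resolves a model identifier as a local directory when one exists at that path, and otherwise treats it as a Hub repo id; is_local_model below is a hypothetical helper illustrating that distinction, not part of the demo code.

```python
import os

# Hypothetical helper: "./chatglm-6b" resolves as a local directory,
# while "THUDM/chatglm-6b" (no such directory) would be fetched from the Hub.
def is_local_model(path: str) -> bool:
    return os.path.isdir(path)

print(is_local_model("."))  # True: the current directory exists locally
```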