iaoxuesheng
How can this be resolved?

The detailed environment information is as follows: Operating system: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.17. Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]. Project version: v0.2.5. langchain version: 0.0.302. fastchat version: 0.2.29. Current text splitter: ChineseRecursiveTextSplitter. Currently started LLM model: ['chatglm2-6b'] @ cpu {'device': 'cpu', 'host': '127.0.0.1', 'infer_turbo': 'vllm', 'model_path': '/home/qianlab03/rjs/Langchain-Chatchat-0.2.7/chatglm2-6b', 'port': 20002}. Current Embeddings model: bge-large-zh @ cpu...
### How are you running AnythingLLM?

AnythingLLM desktop app

### What happened?

Updating location Prisma binaries for linux builds.

```
[21700:0331/163659.380261:ERROR:object_proxy.cc(590)] Failed to call method: org.freedesktop.portal.Settings.Read: object_path= /org/freedesktop/portal/desktop: org.freedesktop.portal.Error.NotFound: The requested setting was not found
Prisma...
```
This is exactly the code I'm running:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-72B-Chat",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-72B-Chat")

prompt = "Give me...
```
chatollama_1 | ChatOllama is unable to establish a connection with http://127.0.0.1:11434, please check: chatollama_1 | 1. Is Ollama server running ? (run `ollama serve` in terminal to start the server)...
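The log above is ChatOllama failing to reach the Ollama server at `http://127.0.0.1:11434`. A minimal sketch of the kind of reachability probe the message suggests, using only the standard library (the function name `ollama_reachable` is my own, not part of either project):

```python
import urllib.request
import urllib.error

def ollama_reachable(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at base_url.

    A running Ollama server responds to a plain GET on its root URL,
    so a simple probe distinguishes "server down" from other problems.
    """
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Probe the address from the log:
# ollama_reachable("http://127.0.0.1:11434")
```

Note also that when ChatOllama runs inside a Docker container (as the `chatollama_1 |` prefix suggests), `127.0.0.1` refers to the container itself, not the host, so an Ollama server running on the host must be addressed differently (e.g. via `host.docker.internal`).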
Hello, and thank you for contributing this project. There is one thing I don't understand: in the GPTQ vs [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) comparison, what do nf4 and the other entries listed there mean?
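For context on the question above: NF4 ("4-bit NormalFloat") is one of the 4-bit data types bitsandbytes can quantize weights into. Each weight is stored as the index of the nearest entry in a fixed 16-value codebook, plus a per-block scale; in real NF4 the 16 levels sit at quantiles of a normal distribution (matching the typical distribution of network weights). A toy sketch of this round-to-nearest-codebook idea, using an illustrative evenly spaced codebook rather than the exact bitsandbytes table:

```python
# Toy 4-bit quantization over a fixed 16-entry codebook.
# The codebook here is evenly spaced for illustration; real NF4
# places its 16 levels at normal-distribution quantiles instead.
LEVELS = [i / 7.5 - 1.0 for i in range(16)]  # 16 levels spanning [-1, 1]

def quantize(x: float, absmax: float) -> int:
    """Scale x by the block's absolute maximum, then return the
    index (0..15) of the nearest codebook level."""
    scaled = max(-1.0, min(1.0, x / absmax))
    return min(range(16), key=lambda i: abs(LEVELS[i] - scaled))

def dequantize(idx: int, absmax: float) -> float:
    """Recover an approximation of the original value."""
    return LEVELS[idx] * absmax

# One block of weights: store only 4-bit codes plus one scale.
block = [0.9, -0.3, 0.05, -1.2]
absmax = max(abs(v) for v in block)
codes = [quantize(v, absmax) for v in block]
recovered = [dequantize(c, absmax) for c in codes]
```

The other entries in such comparisons (fp4, int8, double quantization) are variations on the same scheme: a different codebook, a different bit width, or quantizing the per-block scales themselves.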
### Is there an existing issue for this?

- [X] I have searched the existing issues

### Current Behavior

```
Traceback (most recent call last):
  File "/home/qianlab03/anaconda3/envs/ceshi/lib/python3.10/site-packages/transformers/modeling_utils.py", line 530, in load_state_dict...
```