saminbj
Also, running gemma crashes, while "ollama run gemma7b" in PowerShell works fine. A friend and I both see this. I'm on Win11 with a Docker install; I've tried installing from PowerShell and also under WSL2.
Solved: the docx doesn't work, but converting it to PDF does.
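In case it helps anyone else hitting the same docx issue, here is a minimal sketch (not from this thread) for converting .docx files to PDF before uploading. It assumes LibreOffice is installed and "soffice" is on the PATH; any other converter would do as well.

```python
# Sketch: convert a .docx file to PDF with LibreOffice's headless converter.
# Assumes LibreOffice is installed and "soffice" is available on PATH.
import subprocess
from pathlib import Path

def docx_to_pdf(docx_path: str, out_dir: str = ".") -> Path:
    """Convert one .docx file to PDF and return the output path."""
    subprocess.run(
        ["soffice", "--headless", "--convert-to", "pdf",
         "--outdir", out_dir, docx_path],
        check=True,
    )
    return Path(out_dir) / (Path(docx_path).stem + ".pdf")

if __name__ == "__main__":
    print(docx_to_pdf("report.docx"))  # "report.docx" is a placeholder name
```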
Still no luck; I tried again and it still doesn't work. It does work in anythingllm, though.
I'm basically in the same situation as bend2046:
1. With llama2 selected, it stops after two or three questions; checking ollama shows it has crashed and the service is gone, so ollama has to be restarted. Using ollama on its own, llama2 is fine.
2. With qwen or yi it doesn't crash, but the answers don't draw on the document content. Visiting http://localhost:8000 returns: { "detail": "Not Found" } It looks like chromadb isn't configured properly.
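For what it's worth, { "detail": "Not Found" } is what Chroma's FastAPI server returns for any unknown route (including the root path), so that response alone doesn't prove chromadb is misconfigured. A quick sketch to check whether a standalone Chroma server is actually up, assuming it listens on localhost:8000 (adjust host/port for your setup):

```python
# Sketch: hit Chroma's heartbeat endpoint to confirm the server is reachable.
# Assumes a standalone chroma server on localhost:8000.
import requests

resp = requests.get("http://localhost:8000/api/v1/heartbeat", timeout=5)
resp.raise_for_status()
print(resp.json())  # a nanosecond heartbeat value means the server is alive
```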
I have the same problem when running qwen:14b (7b is fine) and gemma. It looks like VRAM runs out; the server log shows: {"function":"update_slots","level":"INFO","line":1836,"msg":"kv cache rm [p0, end)","p0":33,"slot_id":0,"task_id":199,"tid":"7004","timestamp":1712779028} CUDA error: out...
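If it really is the KV cache overflowing VRAM, one thing worth trying (a sketch under assumptions, not a confirmed fix) is shrinking the context window so the cache fits. With Ollama's REST API that can be set per request via options.num_ctx:

```python
# Sketch: call a local Ollama server with a smaller context window (num_ctx).
# A smaller context means a smaller KV cache, which may avoid the CUDA OOM.
# Assumes Ollama is listening on its default port 11434.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen:14b",
        "prompt": "Hello",
        "stream": False,
        "options": {"num_ctx": 2048},  # lower value -> less VRAM used
    },
    timeout=300,
)
print(resp.json()["response"])
```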
Thank you very much for your concern. Please see the capture below; it may be more self-explanatory. I found in your FAQ that the IP address should have /v1/ appended, however...
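For reference, when an app expects an OpenAI-compatible base URL, the address does need the /v1 suffix. A minimal sketch of what that looks like against a local Ollama server (the model name and the dummy API key below are assumptions, not from the FAQ):

```python
# Sketch: point an OpenAI-compatible client at a local Ollama server.
# The base URL must include the /v1 suffix; the API key is a placeholder,
# since Ollama does not validate it.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
reply = client.chat.completions.create(
    model="qwen:14b",
    messages=[{"role": "user", "content": "Hello"}],
)
print(reply.choices[0].message.content)
```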