
Ollama configuration problem

Open 2314254971 opened this issue 11 months ago • 2 comments

Search before asking

  • [X] I had searched in the issues and found no similar issues.

Operating system information

Windows

What happened

Problem: after configuring an Ollama generation model, chat requests fail with:

Execution failed: pemja.core.PythonException: <class 'tenacity.RetryError'>: <Future at 0x7f8d69e3fc70 state=finished raised RuntimeError>
  at /openspg_venv/lib/python3.8/site-packages/kag/solver/main_solver.invoke(main_solver.py:94)
  at /openspg_venv/lib/python3.8/site-packages/kag/solver/logic/solver_pipeline.run(solver_pipeline.py:67)
  at /openspg_venv/lib/python3.8/site-packages/kag/solver/implementation/default_reasoner.reason(default_reasoner.py:64)
  at /openspg_venv/lib/python3.8/site-packages/kag/solver/execute/default_lf_executor.execute(default_lf_executor.py:239)
  at /openspg_venv/lib/python3.8/site-packages/kag/solver/execute/default_lf_executor._execute_lf(default_lf_executor.py:204)
  at /openspg_venv/lib/python3.8/site-packages/kag/solver/execute/default_lf_executor._execute_chunk_answer(default_lf_executor.py:154)
  at /openspg_venv/lib/python3.8/site-packages/kag/solver/retriever/impl/default_chunk_retrieval.recall_docs(default_chunk_retrieval.py:425)

How to reproduce

My YAML config:

openie_llm: &openie_llm
  base_url: http://localhost:11434/
  model: qwen2_0.5b_instruct:latest
  type: ollama

chat_llm: &chat_llm
  base_url: http://localhost:11434/
  model: qwen2_0.5b_instruct:latest
  type: ollama

vectorize_model: &vectorize_model
  api_key: empty
  base_url: http://localhost:11434/v1/
  model: bge-m3:latest  # qwen2_0.5b_instruct:latest
  type: openai
  vector_dimensions: 1024
vectorizer: *vectorize_model

Ollama version: 0.5.6

Are you willing to submit PR?

  • [ ] Yes I am willing to submit a PR!

2314254971 avatar Jan 16 '25 06:01 2314254971

Change the URL to something like http://host.docker.internal:11434
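
For reference, a minimal sketch of the reporter's LLM entries with that change applied (model names are taken from the config above and are an assumption for other setups; host.docker.internal resolves to the Docker host from inside the openspg-server container on Docker Desktop):

openie_llm: &openie_llm
  base_url: http://host.docker.internal:11434
  model: qwen2_0.5b_instruct:latest
  type: ollama

chat_llm: &chat_llm
  base_url: http://host.docker.internal:11434
  model: qwen2_0.5b_instruct:latest
  type: ollama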

BBC-9527 avatar Jan 16 '25 07:01 BBC-9527

host.docker.internal

That is exactly what I used when configuring it in the web UI, http://host.docker.internal:11434/ , and I get the same error. (screenshots attached)

2314254971 avatar Jan 16 '25 08:01 2314254971

host.docker.internal

That is exactly what I used when configuring it in the web UI, http://host.docker.internal:11434/ , and I get the same error. (screenshots attached)

The base_url should include the http:// prefix.


caszkgui avatar Jan 16 '25 14:01 caszkgui

vectorize_model: &vectorize_model
  api_key: empty
  base_url: http://127.0.0.1:11434/v1
  model: bge-m3:latest
  type: openai
  vector_dimensions: 1024
vectorizer: *vectorize_model

With the configuration above I can create a knowledge base in developer mode, but in product mode it always fails with: unknown error <class 'RuntimeError'>: invalid vectorizer config: Connection error.
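
A likely cause is that 127.0.0.1 inside the openspg-server container refers to the container itself rather than the host running Ollama. A sketch of the same vectorizer entry pointing at a host-reachable address instead (using the host.docker.internal name suggested above; the bridge gateway IP found later in this thread should also work):

vectorize_model: &vectorize_model
  api_key: empty
  base_url: http://host.docker.internal:11434/v1
  model: bge-m3:latest
  type: openai
  vector_dimensions: 1024
vectorizer: *vectorize_model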

hanwsf avatar Jan 19 '25 23:01 hanwsf

host.docker.internal

That is exactly what I used when configuring it in the web UI, http://host.docker.internal:11434/ , and I get the same error. (screenshots attached)

The base_url should include the http:// prefix.


Hi, I added the http:// prefix as you suggested, but question answering still fails intermittently. Screenshots of the updated configuration and the errors are attached.

2314254971 avatar Jan 22 '25 08:01 2314254971

Solved; see https://openspg.yuque.com/ndx6g9/docs/iu5cok24efl1z2nc.

Enter the container with docker exec -it ... and you will find that the gateway address is 172.20.0.1. curl 172.20.0.1:11434/v1 replies "Ollama is running". Do not use the ollama type for the local model; add the model under the maas type instead, with base_url http://172.20.0.1:11434/v1.
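
For reference, a minimal sketch of such a maas-type entry, assuming the maas type accepts the same api_key/base_url/model keys as the other LLM types and that 172.20.0.1 is the Docker bridge gateway seen from inside the container (the address can differ on other machines):

chat_llm: &chat_llm
  api_key: empty
  base_url: http://172.20.0.1:11434/v1
  model: qwen2_0.5b_instruct:latest
  type: maas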

---end---

hanwsf avatar Jan 31 '25 13:01 hanwsf

@hanwsf I am having the same issue. Did you manage to solve it? If so, please tell me how.

hamzattEr avatar Feb 20 '25 09:02 hamzattEr

???

werruww avatar Jun 06 '25 23:06 werruww

It seems that the chat model and the embedding model cannot be reached from inside the openspg-server container. You can refer to the OpenSPG FAQ for details:

(screenshot of the relevant OpenSPG FAQ entry)
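
A quick way to confirm whether the container can reach Ollama, sketched under the assumption that the server container is named openspg-server and has curl installed (adjust the name and address to your deployment):

docker exec -it openspg-server bash          # open a shell inside the server container
curl http://host.docker.internal:11434       # or the bridge gateway IP, e.g. http://172.20.0.1:11434
# If the endpoint is reachable, Ollama answers with "Ollama is running".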

caszkgui avatar Aug 16 '25 01:08 caszkgui