Ollama configuration problem
Search before asking
- [X] I had searched in the issues and found no similar issues.
Operating system information
Windows
What happened
Problem: with an Ollama model configured as the generation model, chat fails with the following error:

```
Execution failed
pemja.core.PythonException: <class 'tenacity.RetryError'>: <Future at 0x7f8d69e3fc70 state=finished raised RuntimeError>
  at /openspg_venv/lib/python3.8/site-packages/kag/solver/main_solver.invoke(main_solver.py:94)
  at /openspg_venv/lib/python3.8/site-packages/kag/solver/logic/solver_pipeline.run(solver_pipeline.py:67)
  at /openspg_venv/lib/python3.8/site-packages/kag/solver/implementation/default_reasoner.reason(default_reasoner.py:64)
  at /openspg_venv/lib/python3.8/site-packages/kag/solver/execute/default_lf_executor.execute(default_lf_executor.py:239)
  at /openspg_venv/lib/python3.8/site-packages/kag/solver/execute/default_lf_executor._execute_lf(default_lf_executor.py:204)
  at /openspg_venv/lib/python3.8/site-packages/kag/solver/execute/default_lf_executor._execute_chunk_answer(default_lf_executor.py:154)
  at /openspg_venv/lib/python3.8/site-packages/kag/solver/retriever/impl/default_chunk_retrieval.recall_docs(default_chunk_retrieval.py:425)
```
How to reproduce
My YAML configuration:

```yaml
openie_llm: &openie_llm
  base_url: http://localhost:11434/
  model: qwen2_0.5b_instruct:latest
  type: ollama

chat_llm: &chat_llm
  base_url: http://localhost:11434/
  model: qwen2_0.5b_instruct:latest
  type: ollama

vectorize_model: &vectorize_model
  api_key: empty
  base_url: http://localhost:11434/v1/
  model: bge-m3:latest  # qwen2_0.5b_instruct:latest
  type: openai
  vector_dimensions: 1024
vectorizer: *vectorize_model
```

Ollama version: 0.5.6
Are you willing to submit PR?
- [ ] Yes I am willing to submit a PR!
Change the URL, e.g. to http://host.docker.internal:11434
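For reference, the suggestion amounts to swapping the host in the config above; a sketch, assuming Docker Desktop (where host.docker.internal resolves to the machine running the container):

```yaml
# Sketch only: same fields as the original config, with the host changed so
# the containerized openspg-server can reach Ollama running on the host.
openie_llm: &openie_llm
  base_url: http://host.docker.internal:11434
  model: qwen2_0.5b_instruct:latest
  type: ollama
```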
> host.docker.internal

That is what I used when configuring it on the website: http://host.docker.internal:11434/ , and it reports the same error.
base_url should include the http:// prefix
```yaml
vectorize_model: &vectorize_model
  api_key: empty
  base_url: http://127.0.0.1:11434/v1
  model: bge-m3:latest
  type: openai
  vector_dimensions: 1024
vectorizer: *vectorize_model
```

With the configuration above I can create a knowledge base in developer mode, but in product mode it always fails with: unknown error <class 'RuntimeError'>: invalid vectorizer config: Connection error.
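That split is consistent with a networking problem rather than a bad model setting: in developer mode the pipeline runs on the host, where 127.0.0.1:11434 really is Ollama, while in product mode it runs inside the openspg-server container, where 127.0.0.1 refers to the container itself. A quick check (the container name here is an assumption):

```sh
# On the host, 127.0.0.1 reaches Ollama:
curl http://127.0.0.1:11434/            # -> "Ollama is running"

# Inside the container, the same address points at the container itself,
# so the connection is refused:
docker exec -it openspg-server curl http://127.0.0.1:11434/
```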
> base_url should include the http:// prefix
Hello, I added the prefix as you suggested, but chat still fails intermittently; the error is shown in the screenshot below.
Solved, per https://openspg.yuque.com/ndx6g9/docs/iu5cok24efl1z2nc .

Enter the container with docker exec -it ... and look up the gateway address; here it is 172.20.0.1. curl 172.20.0.1:11434/v1 replies "Ollama is running". Do not use the ollama type for local models; add the model under the maas type instead, with the URL http://172.20.0.1:11434/v1
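In config form, a hedged sketch of that resolution (the maas type name comes from the comment above; the exact field set is an assumption, and 172.20.0.1 is specific to this Docker network):

```yaml
# Sketch: expose local Ollama to the container as an OpenAI-compatible
# (maas) endpoint, addressed via the Docker bridge gateway.
chat_llm: &chat_llm
  api_key: empty                         # Ollama does not check the key
  base_url: http://172.20.0.1:11434/v1   # gateway found from inside the container
  model: qwen2_0.5b_instruct:latest
  type: maas
```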
@hanwsf I am having the same issue. Did you manage to solve it? If so, please tell me how.
It seems that the chat model and embedding model cannot be reached from inside the openspg-server container. You can refer to the OpenSPG FAQ for details.
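To confirm that hypothesis, probe the configured base_url from inside the container; a sketch (both the container name and the host.docker.internal address are assumptions):

```sh
# If this prints "Ollama is running", the container can reach the endpoint;
# a timeout or refused connection means the base_url host is wrong.
docker exec -it openspg-server curl http://host.docker.internal:11434/
```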
