
Ollama usage on Windows

daxian-pang opened this issue 11 months ago • 3 comments

Search before asking

  • [X] I had searched in the issues and found no similar feature requirement.

Description

The documentation does not mention how to use Ollama in developer mode. The project-configuration editing section of the docs does not show a format like the following, and the same goes for embedding:

```ini
[llm]
type = ollama
base_url = http://localhost:11434/
model = qwen2.5:3b
```

How exactly can it be used offline? Also, in product mode, creating a knowledge base with Ollama times out, so the knowledge base cannot be created. This does not appear to be caused by my machine's performance, since the hardware is not running at full load. (attached screenshot: 250b77b9-cc7c-4eaf-817e-bde83c467120)

Are you willing to submit PR?

  • [X] Yes I am willing to submit a PR!

daxian-pang · Jan 08 '25 12:01

Did you read the following documents? Generation (chat) model configuration; Representation (embedding) model configuration

xionghuaidong · Jan 09 '25 02:01

> Did you read the following documents? Generation (chat) model configuration; Representation (embedding) model configuration

I read them, but the 0.6 docs use this format:

```yaml
openie_llm: &openie_llm
  api_key: key
  base_url: https://api.deepseek.com
  model: deepseek-chat
  type: maas

chat_llm: &chat_llm
  api_key: key
  base_url: https://api.deepseek.com
  model: deepseek-chat
  type: maas

vectorize_model: &vectorize_model
  api_key: key
  base_url: https://api.siliconflow.cn/v1/
  model: BAAI/bge-m3
  type: openai
  vector_dimensions: 1024
vectorizer: *vectorize_model

log:
  level: INFO

project:
  biz_scene: default
  host_addr: http://127.0.0.1:8887
  id: "1"
  language: en
  namespace: TwoWikiTest
```

Hasn't the format below already been deprecated?

```ini
[llm]
type = ollama
base_url = http://localhost:11434/
model = qwen2.5:3b
```

Since this differs from the earlier documentation, I don't quite understand what it means. In product mode, using a local model still reports a timeout. Is there a good way to solve this?

daxian-pang · Jan 09 '25 03:01


The format of the config files was updated to YAML in version 0.6.
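For reference, if the keys of the deprecated `[llm]` INI section carry over, a local Ollama setup in the 0.6 YAML style might look like the sketch below. The `type: ollama` value and the key layout are assumptions extrapolated from the old format, not confirmed by the KAG docs:

```yaml
# Assumed mapping of the deprecated [llm] INI section onto the 0.6 YAML layout.
# The `type: ollama` value and key names are extrapolations, not confirmed.
chat_llm: &chat_llm
  type: ollama
  base_url: http://localhost:11434/
  model: qwen2.5:3b
```

Please check the current KAG documentation for the exact key names accepted by your version before using this.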

Timeouts can have various causes. You should first check the availability of the generative model and the representational (embedding) model with curl, as I replied in https://github.com/OpenSPG/KAG/issues/271.
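As a concrete starting point, the availability check could look like the commands below. This is a sketch against a default local Ollama install; the embedding model name `bge-m3` is an assumption, so substitute whatever models you have pulled:

```shell
# Check the generative model via Ollama's OpenAI-compatible chat endpoint.
curl -s http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "qwen2.5:3b", "messages": [{"role": "user", "content": "hello"}]}'

# Check the embedding model (model name is an assumption; adjust to your setup).
curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "bge-m3", "prompt": "hello"}'
```

If either request hangs or errors, the timeout is on the model side rather than in KAG; a slow first response can also simply be the model being loaded into memory on first use.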

xionghuaidong · Jan 16 '25 03:01