Billy Bao
@lll-Dragon Oh, I thought you meant 0.22.0. I'll take a look at this.
@lll-Dragon 1. The issue: the “Searching” state never ends. This likely happens because the selected model (qwen-32b) is too large. It consumes a huge amount of CPU, causing the system...
This behavior isn’t really RagFlow-specific. RagFlow just formats the prompt into messages and forwards them to the model, and we don’t treat Ollama models differently from other backends. In my...
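To make that concrete, here is a minimal sketch of the idea (the function name and model tag are illustrative, not RagFlow's actual internals): the retrieved context and the question are packed into standard chat messages and POSTed to whatever backend is configured, and Ollama's OpenAI-compatible endpoint receives exactly the same payload shape as any other backend would.

```
import requests

def build_messages(system_prompt: str, chunks: list[str], question: str) -> list[dict]:
    """Illustrative only: fold retrieved chunks into standard chat messages."""
    context = "\n\n".join(chunks)
    return [
        {"role": "system", "content": f"{system_prompt}\n\nContext:\n{context}"},
        {"role": "user", "content": question},
    ]

# Ollama exposes an OpenAI-compatible endpoint, so the request looks the
# same as for any other backend -- only the base URL differs.
resp = requests.post(
    "http://localhost:11434/v1/chat/completions",
    json={
        "model": "qwen2.5:32b",  # hypothetical model tag
        "messages": build_messages(
            "Answer using only the context.",
            ["chunk 1", "chunk 2"],
            "What is X?",
        ),
    },
    timeout=600,
)
print(resp.json()["choices"][0]["message"]["content"])
```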
@lll-Dragon Please try updating your Ollama version. A similar issue in #11301 was fixed after upgrading Ollama. It's probably not a RagFlow problem, since my colleagues and I rarely encounter...
@theRainight We’ll need to look into the code path to understand why this behavior occurs.
1. Try a VPN.
2. Try putting the configuration below into your Docker Engine settings:
```
{
  "builder": {
    "gc": {
      "defaultKeepStorage": "20GB",
      "enabled": true
    }
  },
  "experimental": false,
  "registry-mirrors": [
    "https://image.cloudlayer.icu",
    ...
```
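(For reference: on Docker Desktop this goes under Settings → Docker Engine; on Linux it lives in /etc/docker/daemon.json. Restart Docker afterwards so the registry mirrors take effect.)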
To clarify, the `list` and `add` operations apply to datasets, while RagFlow refers to the entire service. These are two different concepts, so it seems there may be some confusion...
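If it helps to see the distinction concretely, here is a minimal sketch against the HTTP API (a sketch under my reading of the docs; treat the base URL, port, and key as placeholders): both operations act on datasets *inside* one running RagFlow service, they do not create or enumerate services.

```
import requests

BASE_URL = "http://localhost:9380"  # placeholder: your RagFlow server
HEADERS = {"Authorization": "Bearer <YOUR_API_KEY>"}

# "add" creates one dataset inside the running RagFlow service...
resp = requests.post(
    f"{BASE_URL}/api/v1/datasets",
    headers=HEADERS,
    json={"name": "my_dataset"},
)
print(resp.json())

# ...and "list" enumerates the datasets that same service already holds.
resp = requests.get(f"{BASE_URL}/api/v1/datasets", headers=HEADERS)
print(resp.json())
```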
You can raise a feature request for this.
The LM Studio API endpoint follows the OpenAI-compatible format (/v1/chat/completions, /v1/completions, /v1/embeddings), which supports chat, vision, and embedding models. However, LM Studio does not implement a dedicated /v1/rerank endpoint or...
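For concreteness, here is a quick sketch of what the OpenAI-compatible surface does cover (model names are placeholders for whatever you have loaded; LM Studio's local server defaults to port 1234):

```
import requests

BASE = "http://localhost:1234/v1"  # LM Studio's default local server

# Chat works through the OpenAI-compatible chat endpoint.
chat = requests.post(f"{BASE}/chat/completions", json={
    "model": "qwen2.5-7b-instruct",  # placeholder: a loaded chat model
    "messages": [{"role": "user", "content": "Hello"}],
})
print(chat.json()["choices"][0]["message"]["content"])

# Embeddings work too, via the embeddings endpoint.
emb = requests.post(f"{BASE}/embeddings", json={
    "model": "nomic-embed-text-v1.5",  # placeholder: a loaded embedding model
    "input": "Hello",
})
print(len(emb.json()["data"][0]["embedding"]))

# Per the point above, there is no dedicated /v1/rerank route,
# so a request like requests.post(f"{BASE}/rerank", ...) would fail.
```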