[Question]: Can't add a reranker served by Ollama
Describe your problem
I used the slim image, which does not include embedding or reranker models. I then downloaded a reranker model locally with ollama pull linux6200/bge-reranker-v2-m3, but adding it in the web UI fails with KeyError('Ollama'). Do I need to run ollama run first before adding it? Is the URL http://127.0.0.1:11434? It keeps failing. I saw in the issues that Ollama rerankers are not supported yet — is that still the case?
We intend to create an international community, so we encourage using English for communication.
Before adding the models in RAGFlow, test whether the embedding or reranker model can be accessed at all. Then check why it can't be connected to by RAGFlow.
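A quick way to check reachability is to query Ollama's REST API directly. The sketch below is my own illustration, not RAGFlow code; it assumes the standard Ollama endpoint /api/tags, which lists the models the server has installed:

```python
import json
import urllib.request


def probe_url(base_url: str) -> str:
    """Build the URL Ollama exposes for listing installed models."""
    return base_url.rstrip("/") + "/api/tags"


def list_models(base_url: str = "http://127.0.0.1:11434"):
    """Return the model names the Ollama server reports, or raise on failure."""
    with urllib.request.urlopen(probe_url(base_url), timeout=5) as resp:
        data = json.load(resp)
    return [m["name"] for m in data.get("models", [])]


if __name__ == "__main__":
    # If this raises URLError, RAGFlow will not be able to reach the server either.
    print(list_models())
```

If the call fails from inside the RAGFlow container, the URL is the problem (a containerized RAGFlow usually cannot see the host's 127.0.0.1); if it succeeds but RAGFlow still errors, the problem is on the RAGFlow side.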
Do you have any reference about Ollama supporting rerank models?
No, I don't know how to test whether the embedding or reranker model can be accessed. After I run ollama run linux6200/bge-reranker-v2-m3, how can I know the URL to give RAGFlow? (http://127.0.0.1:11434 is wrong!)
https://ragflow.io/docs/dev/supported_models#Xinference According to the doc above, RAGFlow can use a reranker model via Xinference; Ollama rerankers are not supported.
I have the same problem.
What about http://172.17.0.1:11434 (the Docker bridge gateway)?
I have the same problem with linux6200/bge-reranker-v2-m3 and zyw0605688/bge-reranker-v2-m3 in Ollama. The pull succeeds, but when I configure RAGFlow (Add LLM -> Type: rerank) and click OK in the web UI, I get KeyError('Ollama').
Debug console log:
Traceback (most recent call last):
  File "/ragflow/.venv/lib/python3.10/site-packages/flask/app.py", line 880, in full_dispatch_request
    rv = self.dispatch_request()
  File "/ragflow/.venv/lib/python3.10/site-packages/flask/app.py", line 865, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)  # type: ignore[no-any-return]
  File "/ragflow/.venv/lib/python3.10/site-packages/flask_login/utils.py", line 290, in decorated_view
    return current_app.ensure_sync(func)(*args, **kwargs)
  File "/ragflow/api/utils/api_utils.py", line 170, in decorated_function
    return func(*_args, **_kwargs)
  File "/ragflow/api/apps/llm_app.py", line 238, in add_llm
    mdl = RerankModel[factory](
KeyError: 'Ollama'
2025-02-19 16:05:54,788 INFO 2814 172.18.0.3 - - [19/Feb/2025 16:05:54] "POST /v1/llm/add_llm HTTP/1.1" 200 -
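The KeyError itself is just a failed dictionary lookup: RAGFlow resolves the factory name ("Ollama") against a name-to-class map of supported rerank backends, and that map has no "Ollama" entry. A minimal sketch of the failure mode (the class names here are illustrative stand-ins, not RAGFlow's actual code):

```python
# Hypothetical stand-ins for RAGFlow's rerank model classes.
class XInferenceRerank: ...
class JinaRerank: ...

# Factory lookup table; note there is no "Ollama" key.
RerankModel = {
    "Xinference": XInferenceRerank,
    "Jina": JinaRerank,
}

factory = "Ollama"
try:
    mdl = RerankModel[factory]()  # same lookup as llm_app.py line 238
except KeyError as e:
    print(f"KeyError: {e}")  # prints: KeyError: 'Ollama'
```

So the error is not a wrong URL or a missing ollama run; the web UI fails before it ever contacts the server, because the Ollama backend simply is not registered as a rerank factory.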
++ I would like to try jinaai/jina-embeddings-v3 and jinaai/jina-reranker-v2-base-multilingual with RAGFlow, but they are not in the Ollama library. Do you know the best way to run them locally, or, as OpenWebUI does, to download them directly from Hugging Face?
Kind regards, David.
For both embedding and re-ranking.
Excuse me, but I'm looking for a way to configure RAGFlow so that I can test and use embedding + rerank locally, with either Ollama or vLLM for example. Your link points to an HF article about using a Docker container built with NVIDIA tools and then querying it with curl, but not about doing this directly in RAGFlow, or configuring RAGFlow to use it, if I'm not mistaken. I can try to tinker with this solution and link it to RAGFlow, but if you know of a best-practice tutorial, thank you in advance. Kind regards.
Have you solved the problem? I'm running into the same issue.