[Question]: how to serve bge-reranker-v2-m3 and bge-large-zh-v1.5 in RAGFlow?
Describe your problem
I'm currently using Dify together with a RAGFlow knowledge base. RAGFlow ships with the bge-reranker-v2-m3 and bge-large-zh-v1.5 models. To make full use of them, I'd like to expose these two models as external services so that they can also be used from Dify. How can I achieve this? @KevinHuSh
It's not supported yet. You could utilize Ollama/Xinference, etc., to serve the embedding model.
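As a rough sketch, serving both models through Xinference might look like the following. The CLI flags and registered model names (`bge-large-zh-v1.5`, `bge-reranker-v2-m3`) are assumptions based on Xinference's documented interface; verify them against the version you install.

```shell
# Install Xinference and start a local server (default port 9997 assumed)
pip install "xinference[all]"
xinference-local --host 0.0.0.0 --port 9997

# In another terminal, launch the embedding and rerank models
# (model names assumed to match Xinference's built-in registry)
xinference launch --model-name bge-large-zh-v1.5 --model-type embedding
xinference launch --model-name bge-reranker-v2-m3 --model-type rerank
```

Dify can then be pointed at the Xinference endpoint (e.g. `http://<host>:9997`) as a model provider, so both Dify and RAGFlow consume the same served models instead of each loading their own copies.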
Is there a switch in RAGFlow to turn off the built-in rerank model? I have deployed the model separately and want to configure RAGFlow to plug into it, to avoid loading the model twice.
You could deploy the slim version of the Docker image, which has no built-in embedding or rerank models.
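For reference, switching to the slim image is typically done by changing the image tag in RAGFlow's `docker/.env` before bringing the stack up. The exact version tag below is only an example; use whatever release you are on.

```shell
# docker/.env — point RAGFlow at the slim image (no bundled embedding/rerank models)
# The version number here is illustrative; match it to your target release.
RAGFLOW_IMAGE=infiniflow/ragflow:v0.17.2-slim
```

Then restart the stack with `docker compose up -d` from the `docker/` directory, and configure external embedding/rerank providers (e.g. the Xinference endpoint above) in RAGFlow's model settings.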