[Feature Request]: Is there any way to use a vLLM LLM service in RAGFlow?
Is there an existing issue for the same feature request?
- [X] I have checked the existing issues.
Is your feature request related to a problem?
No response
Describe the feature you'd like
RAGFlow currently supports integrating LLM services through Xinference and Ollama, but since vLLM can significantly speed up serving in production environments, is there a way to integrate a vLLM service API into RAGFlow?
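For context, vLLM ships an OpenAI-compatible API server, so one possible integration path is to point any OpenAI-compatible client at a running vLLM instance. Below is a minimal sketch of that idea; the base URL, port, and model name are placeholders for whatever is actually deployed, not RAGFlow's own configuration.

```python
# Sketch only: query a locally running vLLM server through its OpenAI-compatible API.
# Assumes the server was started separately, e.g.:
#   python -m vllm.entrypoints.openai.api_server --model mistralai/Mistral-7B-Instruct-v0.2
# The model name and base URL below are placeholders for your actual deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's default OpenAI-compatible endpoint
    api_key="EMPTY",                      # vLLM does not require a real key by default
)

response = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.2",  # must match the model the server loaded
    messages=[{"role": "user", "content": "Summarize what RAGFlow does in one sentence."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
```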
Describe implementation you've considered
No response
Documentation, adoption, use case
No response
Additional information
No response
Same question here: is vLLM supported now, or is support planned?
@qiufengyuyi @kevinbaby0222 Thanks for your suggestion, and apologies for the delayed response! ⏳🙏
RAGFlow now supports adding models served by Ollama, Xinference, and vLLM (including embedding models), which should cover the feature you were asking for (see the quick check sketched below). 🤖✨
Please feel free to close this issue; if it remains open, we'll include it in our upcoming round of issue cleanup. 🧹
Thanks again for your constructive feedback — we truly appreciate it! 💡🚀
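Since the reply above mentions embedding models, here is a hedged sketch of how one might sanity-check a vLLM endpoint's embeddings route before wiring it into RAGFlow. The server command, port, and model name are assumptions about a typical deployment, not RAGFlow-specific settings.

```python
# Sketch only: verify that a vLLM deployment serves embeddings via its
# OpenAI-compatible /v1/embeddings route before adding it to RAGFlow.
# Assumes the server was started with an embedding model, e.g.:
#   python -m vllm.entrypoints.openai.api_server --model BAAI/bge-m3
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

result = client.embeddings.create(
    model="BAAI/bge-m3",              # must match the model the server loaded
    input=["RAGFlow chunk to embed"],
)
print(len(result.data[0].embedding))  # dimensionality of the returned vector
```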