QAnything
Hoping to decouple the LLM service
Please Describe The Problem To Be Solved
(Optional): Suggest A Solution
I would like QAnything to be able to call LLM services from different sources as flexibly as Anything LLM does, such as Ollama, and I suggest deploying the LLM service separately from the QAnything application. I would also like a locally deployed LLM service to be able to serve multiple UI and RAG products, to improve resource utilization.
- Details of the technical implementation: create a configuration file in which users can choose among different LLM service sources, such as an OpenAI API integration or Ollama models (a minimal sketch follows after this list).
- Tradeoffs made in design decisions: separating the deployments may add setup and management complexity, but independent deployment ensures resources are used efficiently and lets users adjust their model usage flexibly.
- Caveats and considerations for the future: the design should anticipate future technical evolution, such as support for more kinds of LLM services or cross-platform performance optimization. The configuration file should be reviewed periodically so it can adapt to such changes.
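To make the configuration idea concrete, here is a minimal sketch of provider selection driven by a config file. The file name `llm_config.yaml`, its keys, and the helper functions are hypothetical illustrations rather than part of QAnything's existing code; the HTTP routes are the standard OpenAI-compatible chat completions endpoint and Ollama's native chat API.

```python
# Hypothetical config-driven LLM backend selection (not QAnything's actual code).
#
# Example llm_config.yaml (illustrative):
#   provider: ollama
#   base_url: http://localhost:11434
#   model: qwen:14b
import requests
import yaml  # pip install pyyaml


def load_llm_config(path="llm_config.yaml"):
    """Read the (hypothetical) LLM backend configuration file."""
    with open(path, "r", encoding="utf-8") as f:
        return yaml.safe_load(f)


def chat(cfg, prompt):
    """Route a single prompt to whichever backend the config names."""
    if cfg["provider"] == "openai":
        # Any OpenAI-compatible endpoint, hosted or self-deployed.
        resp = requests.post(
            f'{cfg["base_url"]}/v1/chat/completions',
            headers={"Authorization": f'Bearer {cfg.get("api_key", "")}'},
            json={"model": cfg["model"],
                  "messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        return resp.json()["choices"][0]["message"]["content"]
    if cfg["provider"] == "ollama":
        # Ollama's native chat endpoint on a separately deployed host.
        resp = requests.post(
            f'{cfg["base_url"]}/api/chat',
            json={"model": cfg["model"],
                  "messages": [{"role": "user", "content": prompt}],
                  "stream": False},
            timeout=60,
        )
        return resp.json()["message"]["content"]
    raise ValueError(f'Unknown provider: {cfg["provider"]}')
```

With something like this, switching providers would only mean editing the config file, not redeploying QAnything itself.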
Could QAnything call different LLM services such as Ollama as flexibly as Anything LLM does? I suggest removing the bundled local large model so that the LLM service and QAnything can be deployed independently on separate servers. A locally deployed LLM service would not only support QAnything but could simultaneously serve multiple UI and RAG products, at the cost of only one extra server. I strongly disagree with the way QAnything currently bundles the LLM service together with the application.
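As a rough illustration of the shared-server layout described above (the host address and model name are made-up assumptions), the sketch below points at a single dedicated machine running Ollama; because Ollama also exposes an OpenAI-compatible chat completions endpoint, QAnything, AnythingLLM, or any other RAG front end could reuse the same host just by configuring its base URL.

```python
# One shared LLM server, many consumers (illustrative sketch).
import requests

LLM_HOST = "http://192.168.1.50:11434"  # hypothetical dedicated Ollama server


def ask(prompt, model="qwen:14b"):
    """Any front end (QAnything, AnythingLLM, a custom RAG UI, ...) can send
    requests to the same host via Ollama's OpenAI-compatible endpoint."""
    resp = requests.post(
        f"{LLM_HOST}/v1/chat/completions",
        json={"model": model,
              "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    return resp.json()["choices"][0]["message"]["content"]


# None of the front ends needs to bundle its own copy of the model weights.
print(ask("Summarize this document in one sentence."))
```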
Personally, I think QAnything's biggest advantage is its embedding model. I hope it can support Ollama; AnythingLLM already supports Ollama, so the two could then be combined freely.
Indeed, it parses Word documents containing tables very accurately.