
[Request] Embed Ollama within LobeChat

Open LeonXu260 opened this issue 2 months ago • 5 comments

🥰 Feature Description

Currently, the Ollama provider works by installing Ollama separately: https://lobehub.com/docs/usage/providers/ollama

🧐 Proposed Solution

Embed Ollama within the LobeChat Docker image, or provide instructions on how to run Ollama as a separate Docker image alongside it.

📝 Additional Information

No response

LeonXu260 avatar Apr 22 '24 03:04 LeonXu260

👀 @LeonXu260

Thank you for raising an issue. We will look into the matter and get back to you as soon as possible. Please make sure you have given us as much context as possible.

lobehubbot avatar Apr 22 '24 03:04 lobehubbot

Ollama needs a fair amount of GPU memory to run, whereas LobeChat is a pure front-end chatbot; if necessary, they should be deployed separately. For a purely local deployment, Ollama should not run inside Docker but should instead use its official launcher.
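
For reference, a minimal sketch of that split setup (LobeChat in Docker, Ollama via its official launcher on the host) might look like the following. The OLLAMA_PROXY_URL variable and the host.docker.internal mapping are assumptions about how the container reaches the host's Ollama API on its default port 11434; check the LobeChat docs for the exact variable name.

```yaml
# Hypothetical compose file: LobeChat in Docker, Ollama installed natively on the host.
services:
  lobe-chat:
    image: lobehub/lobe-chat
    ports:
      - "3210:3210"                     # LobeChat's default port
    environment:
      # Assumed variable pointing LobeChat at the host's Ollama API
      - OLLAMA_PROXY_URL=http://host.docker.internal:11434
    extra_hosts:
      # Maps host.docker.internal to the host gateway (needed on Linux)
      - "host.docker.internal:host-gateway"
```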

MapleEve avatar Apr 23 '24 10:04 MapleEve


I think a possible approach is to provide a docker compose orchestration file; a rough sketch follows.
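
A sketch of what such a compose file could look like, assuming both services share one compose network and the (assumed) OLLAMA_PROXY_URL variable tells LobeChat where to find Ollama. Image names and ports follow the public images' defaults; GPU passthrough would still need extra runtime configuration and is not an official setup.

```yaml
# Hypothetical docker-compose.yml running LobeChat and Ollama side by side.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"                   # Ollama's default API port
    volumes:
      - ollama_data:/root/.ollama       # persist downloaded models
    # Note: GPU access requires additional configuration (e.g. the NVIDIA
    # container toolkit and a device reservation), omitted here.

  lobe-chat:
    image: lobehub/lobe-chat
    ports:
      - "3210:3210"                     # LobeChat's default port
    environment:
      # Assumed variable pointing LobeChat at the Ollama service by name
      - OLLAMA_PROXY_URL=http://ollama:11434
    depends_on:
      - ollama

volumes:
  ollama_data:
```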

arvinxx avatar Apr 23 '24 10:04 arvinxx
