lobe-chat
[Question] How can I connect the UI to text-generation-webui
🧐 Problem Description | Proposed Solution
I can see in the README that it connects to the OpenAI API, and text-generation-webui exposes an API at 127.0.0.1:5000/v1. Where do I put that instead of the official OpenAI API key? I want to run it locally.
Thanks for the help! Also, does the search agent rely on an API, or can it query the web locally?
📝 Additional Information
No response
👀 @iChristGit
Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
Please make sure you have given us as much context as possible.
The current custom service domain does not seem to support filling in a locally deployed model link: the chat request goes through our cloud servers, and the cloud servers cannot reach a local model service.
So running my own llama.cpp server is not supported? Only the official OpenAI API? I am very new, so I might not be understanding you correctly.
Currently, only online services with an OpenAI-style interface are supported. The request code can be found here:
https://github.com/lobehub/lobe-chat/blob/97dd03e0d0b4305b7fa4b317d66417b25c29ae26/src/app/api/openai/createBizOpenAI/createOpenai.ts#L16
The request is initiated by the cloud server, so it cannot reach local or intranet services.
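For context, a server-side client for an OpenAI-style service is typically built roughly like this (a minimal sketch using the openai Node SDK; the variable and env-var names are illustrative, not the exact contents of createOpenai.ts). Because this code executes on the cloud server, a baseURL such as http://127.0.0.1:5000/v1 would resolve to the server's own loopback interface, never to a text-generation-webui instance on your machine.

```ts
import OpenAI from 'openai';

// Rough sketch of a server-side client factory. Env-var names are illustrative.
// The baseURL is resolved on the machine that runs this code (the cloud server),
// so `http://127.0.0.1:5000/v1` points at the server's own loopback interface,
// not at a local text-generation-webui instance.
export const createOpenai = (userApiKey?: string, endpoint?: string) =>
  new OpenAI({
    apiKey: userApiKey ?? process.env.OPENAI_API_KEY,
    baseURL: endpoint ?? process.env.OPENAI_PROXY_URL ?? 'https://api.openai.com/v1',
  });
```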
As far as I understand, most UIs like text-generation-webui provide an OpenAI-compatible API (completion / chat / v1, etc.), so it's not compatible with this project? Are offline models being considered?
@iChristGit if you run the Docker image locally and then add the proxy URL here, I think it will work.
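If lobe-chat itself runs on your machine (for example, via Docker), the same kind of client can point straight at text-generation-webui's OpenAI-compatible endpoint. A minimal sketch, assuming the webui serves its API on port 5000 and that host.docker.internal resolves to the host from inside the container (both are assumptions about your setup):

```ts
import OpenAI from 'openai';

// Hypothetical local setup: lobe-chat runs in Docker on the same machine as
// text-generation-webui. `host.docker.internal` is an assumption about your
// Docker networking; adjust it (or use the host IP) for your environment.
const client = new OpenAI({
  apiKey: 'sk-anything', // text-generation-webui typically does not check the key
  baseURL: 'http://host.docker.internal:5000/v1',
});

async function main() {
  const completion = await client.chat.completions.create({
    model: 'local-model', // placeholder; the webui answers with whatever model is loaded
    messages: [{ role: 'user', content: 'Hello from lobe-chat' }],
  });
  console.log(completion.choices[0].message.content);
}

main();
```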
Are offline models being considered?
Yes, we plan to support offline models. Please follow #151