
[FEATURE] Add OPENAI_API_BASE to UI to enable usage of the oobabooga openai plugin

Open · TheMasterFX opened this issue 1 year ago · 7 comments

Describe the feature you'd like
oobabooga's Text-generation WebUI has a plugin which emulates the OpenAI API, but in the Flowise UI it is not possible to set the OpenAI base URL. Setting OPENAI_API_BASE via .env doesn't seem to work either.

TheMasterFX · Jun 03 '23 13:06

Do you have the link to the plugin?

HenryHengZJ · Jun 04 '23 00:06

> Do you have the link to the plugin?

It is built-in already: https://github.com/oobabooga/text-generation-webui/tree/main/extensions/openai

When I use LangChain in Python I just have to set the OPENAI_API_KEY and OPENAI_API_BASE environment variables:

```
OPENAI_API_KEY="dummy"
OPENAI_API_BASE="http://127.0.0.1:5001/v1"
```
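For reference, the same setup from a script looks roughly like this (a minimal sketch; the prompt and temperature are just for illustration, and the import path assumes LangChain as of mid-2023):

```python
import os

# Point LangChain's OpenAI wrapper at the oobabooga openai extension.
# The extension ignores the API key, so any placeholder works.
os.environ["OPENAI_API_KEY"] = "dummy"
os.environ["OPENAI_API_BASE"] = "http://127.0.0.1:5001/v1"

from langchain.llms import OpenAI  # import path as of LangChain mid-2023

llm = OpenAI(temperature=0)
print(llm("Say hello in one sentence."))
```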

TheMasterFX · Jun 04 '23 12:06

Instead of running oobabooga to serve your local LLM, you can use LocalAI instead. It gives you an OpenAI-compatible API based on whatever model you choose to run. You can set up LocalAI from the repo here: https://github.com/go-skynet/LocalAI

Then you can use the existing ChatLocalAI node in Flowise; see the example here: https://github.com/go-skynet/LocalAI/tree/master/examples/flowise

heresandyboy · Jun 08 '23 07:06

@TheMasterFX: with this PR merged (https://github.com/FlowiseAI/Flowise/pull/264), you can now specify a base path.

HenryHengZJ · Jun 08 '23 08:06

> Instead of running oobabooga to serve your local LLM, you can use LocalAI instead. It gives you an OpenAI-compatible API based on whatever model you choose to run. You can set up LocalAI from the repo here: https://github.com/go-skynet/LocalAI

As far as I can tell, LocalAI doesn't seem to support GPTQ models (on GPU).

TheMasterFX · Jun 08 '23 16:06

> As far as I can tell, LocalAI doesn't seem to support GPTQ models (on GPU).

Looking at the readme and build instructions on their GitHub, it looks like they have GPU support now, even if still experimental. I'll try it out myself soon.

heresandyboy · Jun 08 '23 16:06

I am attempting to use the LocalAI node with the oobabooga backend. It seems like both are intended to work as OpenAI drop-in replacements, so in theory I should be able to use the LocalAI node with any drop-in OpenAI replacement, right? Well, maybe not, because I can't get it working. It may be that the ChatLocalAI node only needs to be modified slightly to support other backends (or I am doing something wrong).

sam1am · Jul 05 '23 20:07

Start oobabooga with:

```
python server.py --auto-devices --chat --wbits 4 --groupsize 128 --api --listen --extension openai
```

Then enter the API URL (http://127.0.0.1:5001/v1) in the model node in the Flowise UI. However, LLMs might use different prompt keywords, like "### Instruction:" or "User:".
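Once the server is up, you can sanity-check the endpoint over plain HTTP before wiring it into Flowise (a sketch; the payload follows the standard OpenAI completions format, and the instruction-style prompt wrapping is just an example):

```python
import requests

# Query the oobabooga openai extension directly; the key is a placeholder.
resp = requests.post(
    "http://127.0.0.1:5001/v1/completions",
    headers={"Authorization": "Bearer dummy"},
    json={
        "prompt": "### Instruction:\nSay hello.\n\n### Response:\n",
        "max_tokens": 32,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"])
```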

TheMasterFX · Jul 05 '23 20:07

> However, LLMs might use different prompt keywords, like "### Instruction:" or "User:".

Where do I find these keywords? Any template suggestions for some known LLMs? 🙏🏽

ill-yes · Aug 15 '23 13:08

> However, LLMs might use different prompt keywords, like "### Instruction:" or "User:".
>
> Where do I find these keywords? Any template suggestions for some known LLMs? 🙏🏽

This might be a good source: https://github.com/oobabooga/text-generation-webui/tree/main/instruction-templates
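As an example, the Alpaca-style template from that directory amounts to wrapping the user message like this (a rough sketch of the idea, not the exact file contents):

```python
# Alpaca-style instruction wrapping; other models in the
# instruction-templates directory use different keywords.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(instruction="Summarize what Flowise does.")
print(prompt)
```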

TheMasterFX · Aug 15 '23 19:08

> This might be a good source: https://github.com/oobabooga/text-generation-webui/tree/main/instruction-templates

Thanks! Do you suggest any models + templates for Flowise? I've tried a bunch of LLMs, but none of them work as expected.

ill-yes · Aug 16 '23 07:08