
[FEATURE] Using LM studio with local LLM models as an endpoint server and OpenAIChatModel

Open Mayorc1978 opened this issue 1 year ago • 3 comments

Describe the feature you'd like
I would like a field to specify the base_url in OpenAIChatModel, so that I can use the LM Studio feature that turns local LLM models into a server endpoint with an OpenAI-compatible API.

Additional context
Given that most AI tools support the OpenAI API, it would be useful to be able to point them at a local server endpoint so that their API usage can converge. Desktop computers have limited RAM/GPU power, so being able to serve multiple tools (Flowise, VS Code assistants, etc.) from one selectable, standardized endpoint, without being forced to load multiple models in memory, would be important.

Mayorc1978 avatar Jan 23 '24 21:01 Mayorc1978
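[Editor's note] For reference, here is a minimal sketch of the behaviour being requested, calling LM Studio's OpenAI-compatible server directly with the openai Node SDK. The model name and the http://localhost:1234/v1 base URL are assumptions based on LM Studio's default local-server settings, not details from this issue.

```ts
import OpenAI from "openai";

// LM Studio exposes an OpenAI-compatible server (default port 1234).
// An apiKey is required by the SDK but ignored by LM Studio.
const client = new OpenAI({
  baseURL: "http://localhost:1234/v1", // assumed default LM Studio endpoint
  apiKey: "lm-studio",
});

async function main() {
  const completion = await client.chat.completions.create({
    // Placeholder name: LM Studio serves whichever model is currently loaded.
    model: "local-model",
    messages: [{ role: "user", content: "Hello from a local endpoint" }],
  });
  console.log(completion.choices[0].message.content);
}

main();
```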

Would you be able to achieve this using CustomChatOpenAI?

where you can specify the model, base URL, and options (screenshot):

HenryHengZJ avatar Jan 25 '24 18:01 HenryHengZJ
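[Editor's note] For context, the fields shown in that node correspond roughly to LangChain's ChatOpenAI constructor options. Below is a minimal sketch using the @langchain/openai package; the model name and endpoint are assumptions (LM Studio defaults), and the exact field names in Flowise's UI may differ.

```ts
import { ChatOpenAI } from "@langchain/openai";

// Sketch: "Model Name" and "Base Path / Base URL" in the node map roughly
// onto these options. Endpoint and model name are assumptions.
const chat = new ChatOpenAI({
  modelName: "local-model",   // placeholder; LM Studio serves the loaded model
  openAIApiKey: "lm-studio",  // required by the client, ignored by LM Studio
  temperature: 0.7,
  configuration: {
    baseURL: "http://localhost:1234/v1", // assumed LM Studio server address
  },
});

async function main() {
  const response = await chat.invoke("Hello from a local endpoint");
  console.log(response.content);
}

main();
```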

Most of the tools that let you specify the base URL worked great for me, but a few are still giving me problems, Flowise included. I tested filling BasePath with both http://localhost:1234 and http://localhost:1234/v1, and nothing happened; LM Studio doesn't show any activity. I even tried setting an environment variable with the OpenAI base URL, and that didn't work either. So an example of how to fill those fields properly would be a great help.

Mayorc1978 avatar Jan 25 '24 19:01 Mayorc1978
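[Editor's note] One way to narrow down the BasePath question (with vs. without /v1) is to hit the server outside of Flowise first. A small sketch using Node's built-in fetch, assuming LM Studio's default port; if this fails, the problem is the endpoint or port rather than Flowise's configuration.

```ts
// Quick reachability check for LM Studio's OpenAI-compatible server.
async function checkEndpoint() {
  // The OpenAI-style routes live under /v1, so the base path normally needs the /v1 suffix.
  const res = await fetch("http://localhost:1234/v1/models");
  if (!res.ok) {
    throw new Error(`LM Studio server responded with ${res.status}`);
  }
  const body = await res.json();
  console.log("Models reported by LM Studio:", body);
}

checkEndpoint().catch((err) => console.error("Endpoint not reachable:", err));
```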