
Customizing OpenAI Models and Endpoint

Open ImYrS opened this issue 10 months ago • 7 comments

Proposal

  • Allow setting the model name (or a list of models) manually.
  • Allow setting the OpenAI base URL manually.

Use-Case

For self-hosted deployments, some people need to point at a different OpenAI endpoint. For example, in China, api.openai.com is blocked.

In addition, many users rely on a project called one-api to manage models from different providers. That project exposes those models behind an OpenAI-compatible API, so users can test prompts with models that are not from OpenAI or Claude.

In summary, I hope these two features can be implemented; I think they would be useful for many users.

Is this a feature you are interested in implementing yourself?

Maybe

ImYrS avatar Apr 16 '24 08:04 ImYrS

Hi ImYrS, I understand the reasoning behind this change. Is this something you'd like to contribute to? This has to do with the proxy service.

arielweinberger avatar Apr 28 '24 13:04 arielweinberger

Hi, glad to hear from you. Unfortunately, I don't have time to contribute at the moment.

My personal understanding is that this doesn't really require a change to the proxy service; perhaps adding some env vars for the API endpoint would be enough, along with allowing a custom model name to be entered.

Since I haven't read the project's code in full, my understanding may be wrong; please point out any problems. Thank you very much!
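A minimal sketch of the env-var approach described above, assuming hypothetical variable names `OPENAI_API_BASE_URL` and `OPENAI_MODELS` (these are not existing Pezzo settings, just an illustration of the proposal):

```python
import os

# Hypothetical env var names -- not actual Pezzo configuration keys.
DEFAULT_BASE_URL = "https://api.openai.com/v1"
DEFAULT_MODELS = "gpt-3.5-turbo,gpt-4"


def get_openai_config() -> dict:
    """Read the OpenAI endpoint and model list from the environment,
    falling back to the official API and a default model list."""
    base_url = os.environ.get("OPENAI_API_BASE_URL", DEFAULT_BASE_URL)
    models = os.environ.get("OPENAI_MODELS", DEFAULT_MODELS)
    return {
        "base_url": base_url.rstrip("/"),
        "models": [m.strip() for m in models.split(",") if m.strip()],
    }
```

With nothing set, this returns the official endpoint; setting the two variables redirects all calls and advertises a custom model list without touching the proxy service itself.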

ImYrS avatar Apr 29 '24 02:04 ImYrS

I'd also like to see this. Being able to set a local AI endpoint is very important to me.

PyrokineticDarkElf avatar Aug 14 '24 17:08 PyrokineticDarkElf

Contributions for this are welcome. At this point, I don't have the time to dedicate to building this feature. Sorry.

arielweinberger avatar Aug 14 '24 17:08 arielweinberger

@ImYrS & @PyrokineticDarkElf can you briefly describe your use cases for different models and a custom API base URL? I've started looking at this and want to make sure the solution actually addresses the need.

For the base URL, I think a simple env variable should suffice. For the models, we'll need a different solution.

ranst91 avatar Aug 16 '24 12:08 ranst91

I think using an env var would work for my needs. My use case is simply to use a local LLM server (Ollama, LM Studio, etc.) rather than an online provider.
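As a concrete sketch of this use case: with a hypothetical `OPENAI_API_BASE_URL` variable (not an existing Pezzo setting), redirecting OpenAI-style requests to a local server is just a matter of setting it, since Ollama's OpenAI-compatible API listens on `http://localhost:11434/v1` by default:

```python
import os

# Hypothetical env var name, not an existing Pezzo setting. Ollama's
# OpenAI-compatible endpoint defaults to http://localhost:11434/v1.
os.environ["OPENAI_API_BASE_URL"] = "http://localhost:11434/v1"

# Any OpenAI-style path is then resolved against the local server
# instead of api.openai.com.
base = os.environ["OPENAI_API_BASE_URL"].rstrip("/")
chat_url = f"{base}/chat/completions"
```

LM Studio works the same way; only the port in the URL differs.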

PyrokineticDarkElf avatar Aug 18 '24 11:08 PyrokineticDarkElf

@PyrokineticDarkElf @ImYrS As you can see, there's a PR to address this issue. Please take a look and confirm it addresses your issue before I merge it.

ranst91 avatar Aug 19 '24 08:08 ranst91