
compatibility to other OpenAI compatible LLMs

Open mokkin opened this issue 1 year ago • 10 comments

Please make your module compatible with other OpenAI-compatible LLM servers such as https://localai.io/ or https://ollama.com/blog/openai-compatibility. These expose the same API, so it shouldn't be that difficult to connect to them and run them on premises.
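For context, the reason these servers are near drop-in replacements is that they all accept the same `/v1/chat/completions` request body; only the base URL (and possibly the API key) changes. A minimal sketch, assuming illustrative model names and endpoint URLs:

```python
import json

def build_chat_request(model, user_message):
    """Build the chat-completions request body that OpenAI, LocalAI,
    and Ollama's OpenAI-compatible endpoint all accept."""
    return {
        "model": model,  # e.g. "gpt-4o-mini" on OpenAI, "llama3.2" on Ollama
        "messages": [
            {"role": "system", "content": "You are a helpful support agent."},
            {"role": "user", "content": user_message},
        ],
    }

# Illustrative base URLs; only this part differs between providers:
OPENAI_URL = "https://api.openai.com/v1/chat/completions"
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"

body = build_chat_request("llama3.2", "Draft a reply to this ticket.")
print(json.dumps(body, indent=2))
```

The same serialized body can be POSTed to either URL, which is why a single "API Endpoint URL" setting would be enough to support self-hosted backends.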

mokkin avatar Mar 03 '25 15:03 mokkin

@mokkin Hmm this would be interesting to try and add, perhaps an API Endpoint URL text field, for self-hosting OpenAI-compatible models?

Do you have an API endpoint and API key I could do some minimal testing with? I may try setting one of these up on my own as well, when I can find the time.

This module uses an OpenAI library, so I'd have to see whether the endpoint can be changed based on settings: https://github.com/presswizards/FreeScoutGPT/tree/main/vendor/tectalic/openai
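One practical wrinkle with a user-supplied endpoint setting is normalizing what people type in (missing scheme, trailing slash, omitted `/v1` prefix). A sketch of a hypothetical helper for that; the function name and defaults are assumptions, not part of FreeScoutGPT or the Tectalic library:

```python
def normalize_base_url(url, default="https://api.openai.com/v1"):
    """Normalize a user-supplied 'API Endpoint URL' setting into the
    OpenAI-style form http(s)://host[:port]/v1 (hypothetical helper)."""
    url = (url or "").strip()
    if not url:
        return default          # empty setting falls back to OpenAI
    if not url.startswith(("http://", "https://")):
        url = "https://" + url  # assume HTTPS when no scheme is given
    url = url.rstrip("/")       # drop any trailing slash
    if not url.endswith("/v1"):
        url += "/v1"            # append the version prefix if omitted
    return url
```

For example, `normalize_base_url("http://localhost:1234")` yields `http://localhost:1234/v1`. The normalized value could then be passed to whatever HTTP client the module constructs, if the library allows overriding its base URI.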

presswizards avatar Mar 03 '25 22:03 presswizards

Do you have an API endpoint and API key I could do some minimal testing with?

I just set up one for you for testing 😃

How can I send you the credentials?

mokkin avatar Mar 04 '25 08:03 mokkin

@mokkin You can email it to support at presswizards.com 😄

presswizards avatar Mar 04 '25 21:03 presswizards

You can run LM Studio locally on a developer machine and turn on its OpenAI-compatible API; any somewhat recent laptop CPU with 1 GB of spare RAM is enough to run Gemma 3 1B or Llama 3.2 1B.

Waltibaba avatar Apr 01 '25 07:04 Waltibaba

@Waltibaba It'd need the public API endpoint, mostly. The rest should be compatible, I'd hope. Happy to try it out if given the public endpoints to use, I'm not familiar with them yet.

presswizards avatar Apr 01 '25 08:04 presswizards

http://localhost:1234/v1 is the API endpoint, and there is no API key. As I wrote above, you have to install LM Studio locally on your workstation; the easiest setup is on the same machine you are running FreeScout on. For Docker or other deployments, you need to reference the machine running LM Studio instead: http://<IP address of LM Studio computer>:1234/v1 .
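The endpoint described above can be composed from a host and port setting; a small sketch, where the helper name and the sample IP are illustrative assumptions:

```python
def lmstudio_endpoint(host="localhost", port=1234):
    """Compose the LM Studio OpenAI-compatible base URL.
    For Docker deployments, pass the LAN IP of the machine running
    LM Studio (or, on Docker Desktop, the special name
    'host.docker.internal') instead of 'localhost'."""
    return f"http://{host}:{port}/v1"

print(lmstudio_endpoint())               # http://localhost:1234/v1
print(lmstudio_endpoint("192.168.1.50")) # http://192.168.1.50:1234/v1
```

Since LM Studio's server doesn't require an API key, a module supporting it would also need to treat the key field as optional.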

Waltibaba avatar Apr 14 '25 09:04 Waltibaba

@Waltibaba I'm looking for a public test endpoint instead of installing it locally, so that I can test a more real-world use case. I envision adding support for custom models and endpoints, similar to a setting I found in a WP plugin recently:

[Image: screenshot of a WP plugin's custom model and endpoint settings]

presswizards avatar Apr 14 '25 20:04 presswizards

Hey all, just wondering if there are any updates on this?

slatifi avatar May 07 '25 20:05 slatifi

No, not yet... we are wrapping up some testing on the new Responses API and are close to launching that. Once that's done, we can dig into supporting additional OpenAI-compatible models in future versions.

presswizards avatar May 07 '25 21:05 presswizards

It would be nice if this gets supported. I think this is an interesting project and a place where LLMs could really help, but support conversations can contain sensitive information, and potentially leaking it to a centralized LLM is not something I feel comfortable with. So I can't use this module until it works with a self-hosted LLM.

ligi avatar Sep 02 '25 10:09 ligi