Compatibility with other OpenAI-compatible LLMs
Please make your module compatible with other LLMs such as https://localai.io/ or https://ollama.com/blog/openai-compatibility. These expose the same API, so it shouldn't be that difficult to connect to them and run them on premises.
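For context, "OpenAI-compatible" here just means the same request/response JSON served from a different base URL. A minimal sketch using only the Python standard library (the base URLs and model name in the comments are typical defaults, not tested endpoints):

```python
import json
import urllib.request


def build_chat_payload(model, prompt):
    """Build the standard /chat/completions request body."""
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}


def chat(base_url, model, prompt, api_key="not-needed"):
    """POST one chat completion to any OpenAI-compatible endpoint.

    Example base_url values (assumptions, adjust to your setup):
      Ollama:  http://localhost:11434/v1
      LocalAI: http://localhost:8080/v1
    """
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(build_chat_payload(model, prompt)).encode(),
        headers={
            "Content-Type": "application/json",
            # Local servers typically ignore the key, but the header must be present.
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Because only the base URL changes, a single "API Endpoint URL" setting in the module should be enough to cover all of these servers.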
@mokkin Hmm, this would be interesting to try and add, perhaps with an API Endpoint URL text field for self-hosted OpenAI-compatible models?
Do you have an API endpoint and API key I could do some minimal testing with? I may try setting one of these up on my own as well, when I can find the time.
This module does use an OpenAI library so I'd have to see if the endpoint could be changed based on settings: https://github.com/presswizards/FreeScoutGPT/tree/main/vendor/tectalic/openai
Do you have an API endpoint and API key I could do some minimal testing with?
I just set up one for you for testing 😃
How can I send you the credentials?
@mokkin You can email it to support at presswizards.com 😄
You can run LM Studio locally on a developer machine and turn on its OpenAI-compatible API; any somewhat recent laptop CPU and 1 GB of spare RAM is enough to run Gemma3 1B or Llama3.2 1B.
@Waltibaba It'd need the public API endpoint, mostly. The rest should be compatible, I'd hope. Happy to try it out if given the public endpoints to use, I'm not familiar with them yet.
http://localhost:1234/v1 is the API endpoint, and there is no API key. As I wrote above, you have to install LM Studio locally on your workstation; easiest would be on the same machine you are running FreeScout on. For Docker or other deployments you need to reference the machine running LM Studio, e.g. http://<IP address of LM Studio computer>:1234/v1 .
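To sanity-check a setup like this before wiring it into the module, you can query the server's model list. A small sketch, assuming LM Studio is running with its local server enabled on the default port from the post above (no API key, so no Authorization header is sent):

```python
import json
import urllib.request

# Default from the post above; change the host for Docker or remote setups.
LMSTUDIO_BASE = "http://localhost:1234/v1"


def models_url(base):
    """URL of the endpoint that lists the models the server has loaded."""
    return f"{base.rstrip('/')}/models"


def list_models(base=LMSTUDIO_BASE):
    """Return the model ids an OpenAI-compatible server advertises."""
    with urllib.request.urlopen(models_url(base)) as resp:
        return [m["id"] for m in json.loads(resp.read())["data"]]
```

If `list_models()` returns at least one id, chat completion requests against the same base URL should work too.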
@Waltibaba I'm looking for a public test endpoint instead of installing it locally, so that I can test it in a more real-world use case. I envision adding support for custom models and endpoints, similar to what I found in a WP plugin recently:
Hey all, just wondering if there are any updates on this?
No, not yet... we are wrapping up some testing on the new Responses API and are close to launching that. Once that's done, we can dig into implementing additional OpenAI-compatible models in future versions.
It would be nice if this gets supported. I think this is an interesting project and a place where LLMs could really help, but support conversations can sometimes contain sensitive information, and potentially leaking it to a centralized LLM is not something I feel comfortable with. So I cannot use it until it is possible to use it with a self-hosted LLM.