opengpts
Requesting to add Ollama support.
We first need prompting strategies that work reliably with OSS models.
Yes, I'll be waiting for that feature. Imagine GPTs running locally, doing things in the background while you work on your own tasks.
@thesanju, Ollama is now supported out-of-the-box. I just tested it with the latest code and it works as expected. Please give it a try.
If you're running Ollama on your local machine (http://localhost:11434), you'll need to make sure the backend Docker service can reach the Ollama API. Inside the container, localhost refers to the container itself, so use the special DNS name host.docker.internal to refer to your host machine. Set the environment variable OLLAMA_BASE_URL=http://host.docker.internal:11434 so that the backend service points to the Ollama API running on the host.
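For reference, a minimal sketch of how this might look from the host (the `docker compose` invocation and service name are illustrative assumptions, not part of the project's documented setup; on Linux, `host.docker.internal` typically requires `--add-host=host.docker.internal:host-gateway` or an equivalent compose `extra_hosts` entry):

```shell
# Point the backend container at the Ollama API on the Docker host.
# host.docker.internal resolves to the host machine from inside the
# container (built in on Docker Desktop; needs host-gateway on Linux).
export OLLAMA_BASE_URL=http://host.docker.internal:11434

# Then start the backend with the variable in its environment, e.g.:
#   docker compose up backend
# (service name is hypothetical -- use your compose file's name)

# Confirm the value that will be passed to the container:
echo "$OLLAMA_BASE_URL"
```

The same value can instead be placed in a `.env` file so compose picks it up automatically.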