gptscript
Support a local Ollama client
The Ollama project is a model-agnostic runtime for LLMs that runs models in containers (https://github.com/ollama/ollama); it can run locally or in a cloud environment.
I'd love to use gptscript, but I have no incentive to use OpenAI's GPT-3.5 or the more expensive GPT-4 (while it's pretty cheap with light use, over time this could easily add up to hundreds of dollars) when I have my own hardware and GPUs that could run codellama, Llama 2, Mistral, or any of the other models they support.
Proposal:
- Refactor the `pkg/openai` package to expose an agnostic client interface in `pkg/client`:
  https://github.com/gptscript-ai/gptscript/blob/f162c5aa3f7971309d4a4347360fd43fa3e7c497/pkg/openai/client.go#L35-L40
- Implement an Ollama client that sets the `client.url` to an env var provided by the user (in the local environment use case, this would likely be localhost)
I'd be happy to take a stab at this one, since I have experience building local Ollama integrations for Neovim: https://github.com/jpmcb/nvim-llama