
Support a local Ollama client

jpmcb opened this issue 4 months ago

The Ollama project is an agnostic runtime for LLMs that runs models in containers: https://github.com/ollama/ollama. It can be run locally or within a cloud environment.
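
For context, a running Ollama server exposes a plain HTTP API on localhost (port 11434 by default). A minimal sketch of calling its documented `/api/generate` endpoint from Go, assuming a pulled `llama2` model:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Ollama listens on localhost:11434 by default; /api/generate takes
	// a model name and a prompt, and stream=false asks for one JSON
	// object instead of streamed chunks.
	body, _ := json.Marshal(map[string]any{
		"model":  "llama2",
		"prompt": "Why is the sky blue?",
		"stream": false,
	})
	resp, err := http.Post("http://localhost:11434/api/generate",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The completed text comes back in the "response" field.
	var out struct {
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Response)
}
```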

I'd love to use gptscript, but I have no incentive to use OpenAI's GPT-3.5 or the more expensive GPT-4 (while it's pretty cheap with light use, over time this could easily add up to hundreds of dollars) when I have my own hardware and GPUs that I could run with Code Llama, Llama 2, Mistral, or any of the other models they support.

Proposal:

  1. Refactor the `pkg/openai` package into a provider-agnostic client interface in `pkg/client` (see the interface sketch after this list):

https://github.com/gptscript-ai/gptscript/blob/f162c5aa3f7971309d4a4347360fd43fa3e7c497/pkg/openai/client.go#L35-L40

  2. Implement an Ollama client that sets `client.url` from an environment variable provided by the user; in the local use case, this would likely point at `localhost` (see the client sketch after this list).
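
To make step 1 concrete, here's a rough sketch of what the extracted interface in `pkg/client` could look like. Every name here (`Client`, `Call`, `Message`) is a placeholder, not gptscript's actual API; the real method set would fall out of refactoring the linked `pkg/openai` code:

```go
package client

import "context"

// Client is a hypothetical provider-agnostic interface that both the
// existing OpenAI client and a new Ollama client could satisfy.
type Client interface {
	// Call sends the conversation to the backing model and returns
	// the assistant's reply.
	Call(ctx context.Context, messages []Message) (string, error)
}

// Message is a minimal stand-in for whatever shared chat-message type
// the refactor settles on.
type Message struct {
	Role    string `json:"role"` // "system", "user", or "assistant"
	Content string `json:"content"`
}
```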
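And for step 2, a corresponding Ollama client could read its base URL from an environment variable and POST to Ollama's `/api/chat` endpoint. `OLLAMA_HOST` (borrowed from Ollama's own convention) and all type names below are assumptions for illustration; in the real refactor, `Message` would be the shared type from `pkg/client` so this satisfies the interface above:

```go
package ollama

import (
	"bytes"
	"context"
	"encoding/json"
	"net/http"
	"os"
)

// Message mirrors the hypothetical client.Message type sketched above,
// repeated here so the snippet stands alone.
type Message struct {
	Role    string `json:"role"`
	Content string `json:"content"`
}

// Client talks to an Ollama server. The base URL comes from an
// environment variable, defaulting to the standard local address.
type Client struct {
	baseURL string
	model   string
}

func New(model string) *Client {
	base := os.Getenv("OLLAMA_HOST")
	if base == "" {
		base = "http://localhost:11434" // Ollama's default listen address
	}
	return &Client{baseURL: base, model: model}
}

// Call posts the conversation to Ollama's /api/chat endpoint and
// returns the assistant's reply.
func (c *Client) Call(ctx context.Context, messages []Message) (string, error) {
	body, err := json.Marshal(map[string]any{
		"model":    c.model,
		"messages": messages,
		"stream":   false, // one JSON response instead of streamed chunks
	})
	if err != nil {
		return "", err
	}

	req, err := http.NewRequestWithContext(ctx, http.MethodPost,
		c.baseURL+"/api/chat", bytes.NewReader(body))
	if err != nil {
		return "", err
	}
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	// The non-streaming chat response nests the reply under "message".
	var out struct {
		Message Message `json:"message"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.Message.Content, nil
}
```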

I'd be happy to take a stab at this one, since I have experience building local Ollama integrations with Neovim: https://github.com/jpmcb/nvim-llama

jpmcb, Feb 22 '24 19:02