
FR: use litellm for easy support of Mistral, Anthropic, OpenRouter, Ollama, HuggingFace, etc.

thiswillbeyourgithub opened this issue 11 months ago • 8 comments

Hi,

I've been using litellm for a while now; it's a Python library that lets you use pretty much any LLM API you could want (Mistral, OpenRouter, LocalAI, HuggingFace, Azure, Anthropic, etc.).

And they support async too!

I think it would be nice to avoid being too reliant on OpenAI at the expense of other providers.

Is that something that could be done?
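
For illustration, here's an untested sketch of how several of these providers could sit behind litellm's OpenAI-compatible proxy in a single model_list (the aliases and keys are placeholders; the model strings follow litellm's provider/model naming):

model_list:
  - model_name: claude-3-opus                 # alias exposed by the proxy
    litellm_params:
      model: claude-3-opus-20240229           # Anthropic
      api_key: YOUR_ANTHROPIC_KEY
  - model_name: openrouter-mixtral
    litellm_params:
      model: openrouter/mistralai/mixtral-8x7b-instruct   # OpenRouter
      api_key: YOUR_OPENROUTER_KEY
  - model_name: hf-zephyr
    litellm_params:
      model: huggingface/HuggingFaceH4/zephyr-7b-beta     # HuggingFace Inference API
      api_key: YOUR_HF_TOKEN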

thiswillbeyourgithub commented Mar 05 '24 08:03

I came here to see if Claude support was planned, as their latest model is reported to be good for coding. Something like this would be great, and it would take the pressure off this project to add direct support for more models.

TC72 commented Mar 06 '24 08:03

I managed to get it to work just by setting api_host_cmd. This was working with Claude Sonnet, with litellm running in a Docker container.

require("chatgpt").setup(
    {
        api_host_cmd = 'echo http://127.0.0.1:4000'
    }
)

I let it use the plugin's default gpt-3.5-turbo model config; here is the model_list from my litellm proxy_server_config.yaml:

model_list:
#  - model_name: claude-sonnet
  - model_name: gpt-3.5-turbo          # alias the plugin requests by default
    litellm_params:
      model: claude-3-opus-20240229    # model litellm actually routes to
      api_base: https://api.anthropic.com/v1/messages
      api_key: "USE_YOUR_API_KEY"

[Screenshot: CleanShot 2024-03-12 at 21 02 40]

This was just a quick and dirty test to make sure it could work. Next I'll add some more interesting models to the list, like local Ollama models, and use the config to switch between them; a sketch is below. After that's working, maybe we could have a way to pass the specific model to use as an option on plugin calls, so we can switch models as we want?
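
Untested sketch of the expanded model_list (the Ollama entry assumes a local server on Ollama's default port 11434, and local-llama is just a hypothetical alias):

model_list:
  - model_name: gpt-3.5-turbo          # default alias the plugin requests
    litellm_params:
      model: claude-3-opus-20240229
      api_key: "USE_YOUR_API_KEY"
  - model_name: local-llama            # hypothetical alias for a local model
    litellm_params:
      model: ollama/llama2             # litellm's Ollama route
      api_base: http://localhost:11434 # default Ollama port

The plugin would then pick a model simply by sending a different model name in its requests.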

TC72 commented Mar 12 '24 21:03

I would love Claude support; are you working on litellm integration?

Gogotchuri commented Mar 24 '24 14:03

> I would love Claude support; are you working on litellm integration?

If you follow what I did, it already works with Claude through litellm.

TC72 commented Mar 26 '24 21:03

ogpt.nvim is a derivative plugin with support for other APIs.

Aman9das commented Apr 02 '24 04:04

For Mistral, here's my config.yaml:

model_list:
  - model_name: gpt-4-0125-preview       # alias the plugin requests
    litellm_params:
      model: mistral/mistral-large-latest  # model litellm actually routes to
      api_key: REDACTED

litellm_settings:
  drop_params: True   # drop OpenAI-only params that Mistral's API rejects

Launch the proxy with litellm --config config.yaml --port 5000

Add this to your chatgpt.nvim config: api_host_cmd = "echo http://0.0.0.0:5000",

I seem to be having issues with their Docker image, though.

thiswillbeyourgithub commented Apr 07 '24 11:04

@thiswillbeyourgithub Have you tried using Docker Compose? That's what worked for me. I created a simple docker folder with their docker-compose.yaml and my own proxy_server_config.yaml, and it works great.
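
Roughly, the compose file looks like this; the image tag and mount path are from memory of litellm's docs, so treat it as an untested sketch:

services:
  litellm:
    image: ghcr.io/berriai/litellm:main-latest     # assumed official image tag
    ports:
      - "4000:4000"                                # matches my api_host_cmd above
    volumes:
      - ./proxy_server_config.yaml:/app/config.yaml
    command: ["--config", "/app/config.yaml", "--port", "4000"]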

TC72 commented Apr 07 '24 11:04

I didn't try Docker Compose; I wanted to test their docker run directly. I'll get to it someday, thanks.

thiswillbeyourgithub commented Apr 07 '24 12:04