ChatGPT.nvim
FR: use litellm for easy support of Mistral, Anthropic, OpenRouter, Ollama, HuggingFace, etc.
Hi,
I've been using litellm for a while now; it's a Python library that lets you use pretty much any LLM API you want (Mistral, OpenRouter, LocalAI, HuggingFace, Azure, Anthropic, etc.).
And it supports async too!
I think it would be nice to avoid being too reliant on OpenAI vs other providers.
Is that something that could be done?
I came here to see if Claude support was planned, as their latest model is reported to be good for coding. Something like this would be great, and it would take the pressure off this project to support more models directly.
I managed to get it to work by just setting the api_host_cmd.
This was working with Claude Sonnet, with litellm running in a Docker container.
require("chatgpt").setup(
{
api_host_cmd = 'echo http://127.0.0.1:4000'
}
)
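For reference, something like the following should start the litellm proxy in Docker (just a sketch assuming the ghcr.io/berriai/litellm image and its default port 4000; adjust the tag and paths to your setup):

# assumed image/tag; mounts the proxy_server_config.yaml shown below
docker run -d -p 4000:4000 \
  -v $(pwd)/proxy_server_config.yaml:/app/config.yaml \
  ghcr.io/berriai/litellm:main-latest --config /app/config.yaml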
I allowed it to use the default gpt-3.5-turbo config in the plugin. Here is my litellm proxy_server_config.yaml:
model_list:
  # - model_name: claude-sonnet
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: claude-3-opus-20240229
      api_base: https://api.anthropic.com/v1/messages
      api_key: "USE_YOUR_API_KEY"
This was just a quick-and-dirty test to make sure it could work. Next I'll add some more interesting models to the list, like local Ollama models, and use the config to switch between models. After that's working, maybe we could have a way to pass the specific model as an option to plugin calls, so we can switch models as we want?
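For the model-switching idea, litellm lets you declare several aliases in one model_list, so something along these lines should work (a sketch only; the Ollama entry assumes a local Ollama server on its default port with the model already pulled):

model_list:
  # alias the plugin already sends -> Anthropic model behind litellm
  - model_name: gpt-3.5-turbo
    litellm_params:
      model: claude-3-opus-20240229
      api_key: "USE_YOUR_API_KEY"
  # second alias -> local Ollama model (assumes `ollama pull mistral` was done)
  - model_name: gpt-4-0125-preview
    litellm_params:
      model: ollama/mistral
      api_base: http://localhost:11434

The proxy would then route between them based on whichever model name the plugin sends in the request.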
I would love Claude support. Are you working on litellm integration?
If you follow what I did, it already works with Claude through litellm.
ogpt.nvim is a derivative plugin with support for other APIs.
For Mistral, here's my config.yaml:
model_list:
  - model_name: gpt-4-0125-preview
    litellm_params:
      model: mistral/mistral-large-latest
      api_key: REDACTED
litellm_settings:
  drop_params: True
Launch the proxy with litellm --config config.yaml --port 5000
Add this to your chatgpt.nvim config: api_host_cmd = "echo http://0.0.0.0:5000",
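In context, the setup call looks something like this (the openai_params.model override is an assumption about how to make the plugin request the aliased model name; check the plugin's default config if it doesn't match):

require("chatgpt").setup({
  -- point the plugin at the local litellm proxy instead of api.openai.com
  api_host_cmd = 'echo http://0.0.0.0:5000',
  -- ask for the alias defined in the litellm model_list above
  openai_params = {
    model = "gpt-4-0125-preview",
  },
})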
I seem to be having issues with their Docker setup, though.
@thiswillbeyourgithub Have you tried using docker compose? That's what worked for me.
I created a simple docker folder with their docker-compose.yaml and my own proxy_server_config.yaml, and it works great.
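A minimal compose file along these lines should be enough to reproduce it (just a sketch assuming the ghcr.io/berriai/litellm image; litellm's own docker-compose.yaml may differ in details):

services:
  litellm:
    image: ghcr.io/berriai/litellm:main-latest  # assumed image/tag
    ports:
      - "4000:4000"
    volumes:
      - ./proxy_server_config.yaml:/app/config.yaml
    command: ["--config", "/app/config.yaml"]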
I didn't try docker compose; I wanted to test their docker run directly. I'll get to it someday, thanks.