
How to set litellm config?

Open: NextGenOP opened this issue on Jul 27, 2024

Issue

I have a litellm config that I usually use. How do I load it with aider?

Version and model info

Aider v0.45.1

NextGenOP avatar Jul 27 '24 22:07 NextGenOP

Thanks for trying aider and filing this issue.

I'm not sure what you mean by "load litellm config"?

This doc may be helpful:

https://aider.chat/docs/llms.html

paul-gauthier avatar Jul 28 '24 10:07 paul-gauthier

I'm going to close this issue for now, but feel free to add a comment here and I will re-open or file a new issue any time.

paul-gauthier avatar Jul 30 '24 16:07 paul-gauthier

I would like to reopen this question and expand it slightly. Aider uses litellm for its models, and I assume it does so via the litellm Python SDK. There is another way to run litellm: the litellm Proxy Server. It exposes various models on e.g. localhost:4000 (the default), and the models can be configured via config.yaml. So the question (or feature request) is for aider to allow a litellm proxy server as an option for configuring and choosing the model to be used. https://docs.litellm.ai/docs/simple_proxy
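For reference, a minimal sketch of such a proxy setup (the model alias, the upstream model, and the ANTHROPIC_API_KEY variable are illustrative placeholders, not anything aider-specific):

```sh
# Write a minimal LiteLLM proxy config: one alias mapped to an upstream Anthropic model.
cat > config.yaml <<'EOF'
model_list:
  - model_name: claude-3-5-sonnet              # alias that clients request
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY    # read from the proxy's environment
EOF

# Start the proxy; it listens on http://localhost:4000 by default.
litellm --config config.yaml
```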

Edit: I have tried using OPENAI_LIKE_API... for this use case, but it failed. Fortunately, using OPENAI_API... it works as expected. So my feature request would be just adding separate keys for litellm proxy, e.g. LITELLM_API... (the same API as OpenAI), so that we could call models like litellm/anthropic/claude-3-5-sonnet instead of the confusing openai/anthropic/claude-3-5-sonnet.
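A sketch of that workaround, assuming a proxy like the one above running on localhost:4000 and exposing anthropic/claude-3-5-sonnet (the key value is a placeholder):

```sh
# Point aider's OpenAI-compatible settings at the LiteLLM proxy.
export OPENAI_API_BASE=http://localhost:4000
export OPENAI_API_KEY=sk-anything              # placeholder: whatever key the proxy expects

# Works, but the provider prefix reads oddly: "openai/" in front of an Anthropic model.
aider --model openai/anthropic/claude-3-5-sonnet
```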

gregid avatar Nov 03 '24 17:11 gregid

adding separate keys for litellm proxy

Why not put the key in the config file?

NextGenOP avatar Nov 03 '24 22:11 NextGenOP

Unless I completely misunderstand your proposal, I don't see the placement of the key as the issue here. If we are using the openai config, we still end up with the confusing openai/anthropic/claude-3-5-sonnet. Your suggestion would only hide this "abomination" if we work in a set-once-and-forget manner.

gregid avatar Nov 03 '24 22:11 gregid

Hmm, isn't this the litellm config? We may need to pass litellm into aider's model selection, but the API key can be placed in the config. With this you can have a lot of different providers under a single API. https://docs.litellm.ai/docs/proxy/configs

Edit:

LITELLM_API

Or maybe I misunderstood your proposal; is that the litellm master key?

NextGenOP avatar Nov 03 '24 23:11 NextGenOP

LITELLM_API

Or maybe I misunderstood your proposal; is that the litellm master key?

Exactly. I suggest adding LITELLM_API_BASE and LITELLM_API_KEY so that we can then call our models like litellm/anthropic/claude-3-5-sonnet instead of openai/anthropic/claude-3-5-sonnet, leaving the openai key for other purposes.
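For clarity, this is roughly how the proposal would look in use; LITELLM_API_BASE, LITELLM_API_KEY, and the litellm/ prefix are the suggested feature, not existing aider settings:

```sh
# Proposed, not implemented: dedicated settings for a LiteLLM proxy.
export LITELLM_API_BASE=http://localhost:4000
export LITELLM_API_KEY=sk-anything             # placeholder

# The provider prefix would then make the routing explicit.
aider --model litellm/anthropic/claude-3-5-sonnet
```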

gregid avatar Nov 03 '24 23:11 gregid

I see.

NextGenOP avatar Nov 03 '24 23:11 NextGenOP

I'd love to be able to use an external instance of LiteLLM over the OpenAI-compatible API as well. I have LiteLLM set up on my server, so I route all my LLM requests through it.

I tried setting it up with the OpenAI integration and changing the base URL, but aider attempts to use the OPENROUTER_API_KEY environment variable if I try to use a model such as openrouter/anthropic/claude-3.5-sonnet through my own LiteLLM instance.
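For concreteness, the attempt looked roughly like this (the base URL and key are placeholders for my own instance):

```sh
# Point the OpenAI integration at my LiteLLM instance.
export OPENAI_API_BASE=https://litellm.example.com   # placeholder URL
export OPENAI_API_KEY=sk-anything                    # placeholder key

# Fails: the openrouter/ prefix makes litellm route to OpenRouter and
# ask for OPENROUTER_API_KEY instead of using the base URL above.
aider --model openrouter/anthropic/claude-3.5-sonnet
```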

Perhaps we could have a separate provider for LiteLLM which reads LITELLM_API_KEY and, if that provider is set up, all model requests go to LiteLLM?

If folks are ok with this approach I'd be happy to put up a PR.

luizribeiro avatar Jan 28 '25 21:01 luizribeiro

Perhaps we could have a separate provider for LiteLLM which reads LITELLM_API_KEY and, if that provider is set up, all model requests go to LiteLLM?

I guess this method is easier than loading a litellm config.

NextGenOP avatar Jan 28 '25 22:01 NextGenOP

Ok, I tried building this as I mentioned above, and I'm running into issues because the litellm Python SDK still tries to route to the provider implied by the model name. This is similar to the issue described at https://github.com/run-llama/llama_index/issues/13814, where they seem to have decided that the right approach was to use the OpenAI API to talk to the LiteLLM Proxy.

I think there are two approaches we can take here if we want to allow aider to talk to a LiteLLM Proxy:

  1. Just force the use of the OpenAI API directly when using the new LiteLLM Proxy provider (this comes with the downside of adding the openai dependency to aider).
  2. Put up a PR to the litellm Python SDK to allow forcing a given provider, which seems less than ideal since it introduces a bit of bespoke logic into the framework just for this...

luizribeiro avatar Jan 29 '25 01:01 luizribeiro

  • Just force the use of the OpenAI API directly when using the new LiteLLM Proxy provider (this comes with the downside of adding the openai dependency to aider).

My first thought was that aider was using the LiteLLM Proxy and the OpenAI SDK to call the model API. I didn't know about the LiteLLM SDK back then.

Are you using this: litellm.api_base?

Or this: LiteLLM Proxy (LLM Gateway)?

NextGenOP avatar Jan 29 '25 09:01 NextGenOP

Turns out this is the key

  1. I run litellm --config litellm.yml
  2. I add Required Variables to my ENV
  3. I run aider --model litellm_proxy/one-model-from-litellm.yml (rough sketch after these steps)
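Roughly, end to end (the LITELLM_PROXY_* variable names are my reading of the proxy provider's Required Variables, and the model alias is just an example from a hypothetical litellm.yml):

```sh
# 1. Start the proxy from the config; it serves the models defined in litellm.yml.
litellm --config litellm.yml

# 2. The Required Variables -- assuming the litellm_proxy provider reads
#    LITELLM_PROXY_API_BASE and LITELLM_PROXY_API_KEY from the environment.
export LITELLM_PROXY_API_BASE=http://localhost:4000
export LITELLM_PROXY_API_KEY=sk-anything       # placeholder key

# 3. Ask for one of the model names defined in litellm.yml (here "claude-3-5-sonnet").
aider --model litellm_proxy/claude-3-5-sonnet
```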

Is it working for you?
@luizribeiro @gregid

NextGenOP avatar Jan 29 '25 10:01 NextGenOP

Turns out this is the key

  1. I run litellm --config litellm.yml
  2. I add Required Variables to my ENV
  3. I run aider --model litellm_proxy/one-model-from-litellm.yml

Is it working for you? @luizribeiro @gregid

Just to confirm, this is the right solution to the problem; no need to make any changes to aider. Maybe just add this to the documentation.

gregid avatar Feb 17 '25 21:02 gregid

Great! I'm closing this.

NextGenOP avatar Feb 19 '25 09:02 NextGenOP