
[Bug]: custom_llm_provider is not working with acompletion, but is working with completion

Open mrT23 opened this issue 1 year ago • 0 comments

What happened?

When I pass the parameter kwargs["custom_llm_provider"] = 'openai' to the completion endpoint, it is used as expected (I am working with vLLM). However, acompletion ignores this parameter and crashes.

https://litellm.vercel.app/docs/providers/vllm#calling-hosted-vllm-server
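For reference, a minimal sketch of the two calls as I am making them (the model name and port are placeholders for my vLLM deployment):

```python
import asyncio

import litellm

messages = [{"role": "user", "content": "Hello"}]

# Works: completion honors the explicit provider override
litellm.completion(
    model="my-vllm-model",  # placeholder deployment name
    messages=messages,
    api_base="http://127.0.0.1:2000/v1",
    custom_llm_provider="openai",
)

# Crashes: acompletion ignores custom_llm_provider, tries to derive the
# provider from the un-prefixed model name, and raises BadRequestError
asyncio.run(
    litellm.acompletion(
        model="my-vllm-model",
        messages=messages,
        api_base="http://127.0.0.1:2000/v1",
        custom_llm_provider="openai",
    )
)
```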

I looked a bit at the code. For some reason, the core logic for handling custom_llm_provider differs between completion and acompletion.

acompletion ignores this parameter and instead immediately calls get_llm_provider, which raises an exception:

https://github.com/BerriAI/litellm/blob/ec63a300957a88c0c82f65ef409ec8a4cde556c6/litellm/main.py#L283

When I manually edited the logic and forced acompletion to use the correct custom_llm_provider, I was able to run inference with acompletion against vLLM.
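A rough sketch of the idea behind my local edit (illustrative only, not the upstream litellm code; the helper name is hypothetical and the get_llm_provider return shape is an assumption):

```python
from litellm import get_llm_provider


def resolve_provider(model: str, **kwargs):
    """Prefer an explicitly passed custom_llm_provider; only fall back to
    inferring it from the model name, the way completion already does.
    (Hypothetical helper for illustration, not part of litellm.)"""
    custom_llm_provider = kwargs.get("custom_llm_provider", None)
    if custom_llm_provider is None:
        model, custom_llm_provider, _, _ = get_llm_provider(
            model=model, api_base=kwargs.get("api_base", None)
        )
    return model, custom_llm_provider
```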

Relevant log output

[openai]  # for vllm
api_base = "http://127.0.0.1:2000/v1"
custom_llm_provider = "openai"

->

litellm.exceptions.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=... Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..) Learn more: https://docs.litellm.ai/docs/providers

Additional feedback

I personally much prefer passing an extra custom_llm_provider parameter over adding the 'openai' prefix to the model name. The prefix is confusing because it is not the real name of the vLLM deployment model. I also want to obfuscate the name in my deployment and not have it explicitly say 'openai', which is both confusing and wrong.
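To make the contrast concrete (model name and port are placeholders; the first form is the documented prefix style, the second is what I would prefer):

```python
import litellm

messages = [{"role": "user", "content": "Hello"}]

# Documented style: the provider is encoded as a model-name prefix, even
# though 'openai' is not the real name of the vLLM deployment model
litellm.completion(
    model="openai/my-vllm-model",
    messages=messages,
    api_base="http://127.0.0.1:2000/v1",
)

# Preferred style: the deployment model keeps its own name and the provider
# is passed as a separate, explicit parameter
litellm.completion(
    model="my-vllm-model",
    messages=messages,
    api_base="http://127.0.0.1:2000/v1",
    custom_llm_provider="openai",
)
```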

mrT23 · May 06 '24 16:05