`custom_llm_provider` is not working with `acompletion`, but works with `completion`
Addresses the bug described here: https://github.com/BerriAI/litellm/issues/3480
I am not sure I understand the logic there, or why `completion` and `acompletion` treat the `custom_llm_provider` parameter differently, but this change solves this specific issue.
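For context, here is a minimal sketch of the reported behavior. The model name and `api_base` are placeholders for an OpenAI-compatible, self-hosted (e.g. vLLM) endpoint and are not taken from the issue itself:

```python
import asyncio
import litellm

messages = [{"role": "user", "content": "Hello"}]

# Sync call: the explicitly passed custom_llm_provider is honored.
litellm.completion(
    model="my-hosted-model",              # placeholder model name
    messages=messages,
    custom_llm_provider="openai",          # route via the OpenAI-compatible provider
    api_base="http://localhost:8000/v1",   # placeholder endpoint
)

# Async call: per the issue, the same custom_llm_provider argument
# was not honored here before this PR.
asyncio.run(
    litellm.acompletion(
        model="my-hosted-model",
        messages=messages,
        custom_llm_provider="openai",
        api_base="http://localhost:8000/v1",
    )
)
```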
Hi @mrT23, thanks for the PR - can you please add a test for this scenario?
I am not sure I have the technical understanding of litellm to do that. Setting up a real vllm server for testing is hard, and mocking can also be complicated if you don't know every last detail.
> I am not sure I have the technical understanding of litellm to do that.
I believe editing one of our existing tests would work: https://github.com/BerriAI/litellm/blob/ec63a300957a88c0c82f65ef409ec8a4cde556c6/litellm/tests/test_completion.py#L788
If you pass `custom_llm_provider="anthropic"` there ^, it would catch this scenario, right @mrT23?
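A test along these lines could exercise the async path with an explicit provider. This is only a sketch, not the linked test itself; it assumes a valid `ANTHROPIC_API_KEY` in the environment, and the model name is an assumed Anthropic model:

```python
import asyncio
import litellm


def test_acompletion_with_explicit_custom_llm_provider():
    # Pass custom_llm_provider explicitly so acompletion has to resolve the
    # provider from the argument rather than from a model-name prefix.
    response = asyncio.run(
        litellm.acompletion(
            model="claude-3-haiku-20240307",  # assumed Anthropic model name
            messages=[{"role": "user", "content": "Hey, how's it going?"}],
            custom_llm_provider="anthropic",
            max_tokens=10,
        )
    )
    assert response.choices[0].message.content is not None
```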