litellm
Fixing OpenAI tool calling result syntax for Ollama models
See https://github.com/BerriAI/litellm/issues/2209#issuecomment-2007993125
Hitting issues related to the same problem: the `arguments` response comes back in the wrong format for Ollama models wrapped in the proxy when attempting function calling. Would be great to see this released!
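For context, a minimal sketch of the mismatch (the values are illustrative, taken from the examples further down): OpenAI-compatible clients expect the function name at the top level and `arguments` as a JSON string containing only the argument object, whereas the Ollama path returns the whole call serialized again inside `arguments`:

```python
# What OpenAI-compatible clients expect a function call to look like:
expected = {
    "name": "currency_calculator",
    "arguments": '{"base_amount": 123.45, "base_currency": "EUR", "quote_currency": "USD"}',
}

# What the Ollama path was returning: the entire call (name + arguments)
# double-encoded inside `arguments`, which clients cannot parse as-is.
actual = {
    "arguments": '{"name": "currency_calculator", "arguments": {"base_amount": 123.45, "base_currency": "EUR", "quote_currency": "USD"}}',
    "name": "currency_calculator",
}
```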
Another hand up for this - I'm trying to utilise LiteLLM + Ollama function calling with AutoGen.
I noticed that with one function it appears to populate the top-level `name` correctly, but with two functions the response leaves the top-level `name` empty. Not sure if it's related to this issue specifically.
With one function available, the response is:

```
{'arguments': '{"name": "currency_calculator", "arguments": {"base_amount": 123.45, "base_currency": "EUR", "quote_currency": "USD"}}', 'name': 'currency_calculator'}
```

And with two functions available:

```
{'arguments': '{"name": "currency_calculator", "arguments": {"base_amount": 123.45, "base_currency": "EUR", "quote_currency": "USD"}}', 'name': ''}
```
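Until a fix lands, the nested payload can be unwrapped client-side. Here is a minimal sketch (the helper name is mine, not part of LiteLLM's API) that also backfills the empty top-level `name` in the two-function case:

```python
import json

def unwrap_ollama_tool_call(call: dict) -> dict:
    """Recover the OpenAI-style shape from the double-encoded response:
    pull `name` out of the nested JSON when the top-level one is empty,
    and leave `arguments` as a JSON string of just the argument object."""
    inner = json.loads(call["arguments"])
    return {
        "name": call.get("name") or inner.get("name", ""),
        "arguments": json.dumps(inner.get("arguments", {})),
    }

# The two-function response above, with the empty top-level name:
call = {
    "arguments": '{"name": "currency_calculator", "arguments": {"base_amount": 123.45, "base_currency": "EUR", "quote_currency": "USD"}}',
    "name": "",
}
print(unwrap_ollama_tool_call(call))
# -> {'name': 'currency_calculator', 'arguments': '{"base_amount": 123.45, "base_currency": "EUR", "quote_currency": "USD"}'}
```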
What are your plans to make OpenAI tool calling bug-free, @krrishdholakia? Could this PR help somehow?
Thx!