add function call response parser for non-OpenAI models
Commit for: https://github.com/BerriAI/litellm/issues/664. This code helps use non-OpenAI models with AutoGen via litellm.
"""
the function call response of gpt:
"choices": [ { "index": 0, "message": { "role": "assistant", "content": null, "function_call": { "name": "get_current_weather", "arguments": "{\n "location": "San Francisco, CA"\n}" } }, "finish_reason": "function_call" } ],
the response of claude-instant-1
"choices": [ { "finish_reason": "stop_sequence", "index": 0, "message": { "content": "You asked about the weather in San Francisco. Let me check the current weather conditions.", "role": "assistant", "function_call": { "name": "get_current_weather", "arguments": "{"location": "sf", "unit": "fahrenheit"}" } } } ],
the response of claude-2
"choices": [ { "finish_reason": "stop_sequence", "index": 0, "message": { "content": "To get the current weather in San Francisco, I will invoke the get_current_weather function.", "role": "assistant", "function_call": { "name": "get_current_weather", "arguments": "{"location": "sf", "unit": "fahrenheit"}" } } } ],
"""
Hey @peterz3g it seems like if the function call isn't correctly outputted, the response would be in response["content"].
This means a user's code would still need conditional logic to check where the function call details are present.
Instead of wrapping completion, then, why not just expose function_call_prompt to the user as a helper function (similar to encode())?
This would let them choose to use that specific helper in their conditional loop.
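As a rough sketch of that suggestion (not litellm's actual API), a user's conditional loop could look like the following, reusing the hypothetical `parse_function_call` helper sketched above and treating the response like the JSON shown in the PR description:

```python
from litellm import completion

# Function definitions are assumed to have already been injected into the
# prompt (e.g. via a function_call_prompt-style helper).
response = completion(
    model="claude-instant-1",
    messages=[{"role": "user", "content": "What's the weather in San Francisco?"}],
)

message = response["choices"][0]["message"]
function_call = message.get("function_call")

if function_call is None:
    # No structured function call -- try to recover one from the free text
    # with the hypothetical parse_function_call helper sketched earlier.
    function_call = parse_function_call(message.get("content") or "")

if function_call is not None:
    print("call", function_call["name"], "with", function_call["arguments"])
else:
    # Plain chat answer, nothing to extract.
    print(message["content"])
```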
When the LLM output is not a standard function-call result, it may just be normal LLM output. In that case the wrapper does nothing and keeps the original result (the code at line 230).
@peterz3g yep - since we can't guarantee the format (i.e. that the function call will always be correctly outputted) - could we instead have this be a helper function that users can use?
I appreciate your contribution here btw!
@peterz3g any update on this PR ?
just merged the new main branch
I'm super excited about this functionality, but I'm new to litellm. Also, I think I have the same use case as @peterz3g: I want to use a non-OpenAI model with AutoGen, and AutoGen would really like an OpenAI-compliant interface ... especially for function_call identification. I'm planning on using a model from HuggingFace served on a local instance of text-generation-inference.
I've read through the PR and I have a couple of concerns ... mostly that both function_call_prompt (the logic to inject function definitions into the prompt) and completion_result_wrapper (the logic to identify and extract function calls in the content and place them in function_call results) will vary from model to model. The implementation in this PR uses a few-shot approach to teach the model about function calling, whereas I plan to use a model that has been fine-tuned for function calling.
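To make that concern concrete, here is roughly what a prompt-injection helper in the spirit of function_call_prompt might do; the helper name and instruction text are made up for illustration and, as noted, the real prompt would need to vary per model:

```python
import json


def inject_function_definitions(messages, functions):
    """Prepend a system message describing the available functions and the
    expected JSON reply format. Illustrative only; real prompts need to be
    tuned (or the model fine-tuned) per model."""
    instructions = (
        "You have access to the following functions:\n"
        + json.dumps(functions, indent=2)
        + "\n\nIf you decide to call a function, reply with only a JSON object "
        + 'of the form {"name": "<function_name>", "arguments": {...}}.'
    )
    return [{"role": "system", "content": instructions}] + messages
```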
@krrishdholakia How would a "helper function that users can use" work when using litellm as an OpenAI-compliant proxy? Is there an example script where litellm uses a bunch of helper functions to add new functionality to the proxy and then launches the proxy? So far, I've only checked out the config.yaml for the proxy, and I was thinking of parameterizing the function-injection and `function_call` extraction configuration similar to how you all do the custom prompt template definitions.
@amihalik that makes sense - I think a helper function here (in utils.py) would make sense, and then we could check for the user opt-in in the config.yaml - maybe under a flag in litellm_settings. If they opt into it, we output-parse, and if that fails, raise an error.
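For illustration only, such an opt-in might look like the config snippet below; the `parse_function_calls` flag is hypothetical and not an existing litellm setting:

```yaml
# config.yaml sketch -- "parse_function_calls" is a hypothetical flag name,
# not an existing litellm setting.
litellm_settings:
  parse_function_calls: true  # opt in to output-parsing function calls

model_list:
  - model_name: claude-instant-1
    litellm_params:
      model: claude-instant-1
```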
Would that serve your use case, @amihalik?