[Feature]: add support for mocking tool/function completion response
The Feature
Add the ability to mock a completion response when making use of tools/functions, much like the simpler message mock response.
e.g.
from litellm import completion
from openai.types.chat import ChatCompletionMessageToolCall
from openai.types.chat.chat_completion_message_tool_call import Function

model = "gpt-3.5-turbo"
messages = [{"role": "user", "content": "This is a test request"}]
mocked_resp = completion(
    model=model,
    messages=messages,
    mock_tool_calls_response=ChatCompletionMessageToolCall(
        id=...,
        type="function",
        function=Function(name=..., arguments=...),
    ),
)
And maybe a nicer way to init the ChatCompletionMessageToolCall, so we don't have to import openai types directly? One possible shape is sketched below.
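For illustration only, here is one hypothetical shape such an import-free API could take, accepting plain dicts in the OpenAI tool-call format. The mock_tool_calls_response parameter and all values are made up for this proposal and are not an existing litellm API:

from litellm import completion

# Hypothetical: the proposed mock_tool_calls_response accepts plain dicts in
# the OpenAI tool-call shape, so no openai.types imports are needed.
mocked_resp = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "This is a test request"}],
    mock_tool_calls_response=[
        {
            "id": "call_abc123",  # illustrative id
            "type": "function",
            "function": {
                "name": "get_current_weather",  # illustrative tool name
                "arguments": '{"location": "Paris"}',
            },
        }
    ],
)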
Motivation, pitch
The mock feature is great, but it is not usable when making use of tools/functions, as the response object does not correspond to the expected response format.
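For context, a minimal sketch of how the existing message mock is typically used (mock_response is litellm's documented mock parameter): the mocked reply only populates plain text content, so code paths that expect message.tool_calls have nothing to exercise.

from litellm import completion

messages = [{"role": "user", "content": "What is the weather in Paris?"}]

resp = completion(
    model="gpt-3.5-turbo",
    messages=messages,
    mock_response="It's sunny today",  # mocks plain text content only
)

print(resp.choices[0].message.content)     # mocked text is returned here
print(resp.choices[0].message.tool_calls)  # expected to be empty, so tool-call handling can't be tested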
Open to adding this feature if it seems like a reasonable first contribution on here 👍
Twitter / LinkedIn details
https://www.linkedin.com/in/jonasdebeuk/
This is a great idea! @jonasdebeukelaer
Would welcome a contribution here.
Curious - how do you use the mock feature today?
+1, this would have been super helpful. @jonasdebeukelaer are you planning on working on this? Would love it.
@krrishdholakia I'm currently not using the mock feature, as I'm only making use of tools in my project, but it would be used simply for functional tests really.
I don't immediately have capacity to work on this, so it would be a couple of weeks away if I do get to it, sorry!
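For context on the functional-test use case, a sketch of the kind of test this feature would enable. mock_tool_calls_response is still only the parameter proposed in this issue, and the tool name and arguments are illustrative:

import json
from litellm import completion
from openai.types.chat import ChatCompletionMessageToolCall
from openai.types.chat.chat_completion_message_tool_call import Function

def test_completion_routes_mocked_tool_call():
    # The mocked tool call stands in for a real model response; no API call is made.
    resp = completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "What's the weather in Paris?"}],
        mock_tool_calls_response=ChatCompletionMessageToolCall(
            id="call_abc123",
            type="function",
            function=Function(name="get_current_weather",
                              arguments='{"location": "Paris"}'),
        ),
    )

    # Downstream code under test can then parse the tool call as usual.
    tool_call = resp.choices[0].message.tool_calls[0]
    assert tool_call.function.name == "get_current_weather"
    assert json.loads(tool_call.function.arguments) == {"location": "Paris"}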