handleFunctionCall callback result is not passed to the LLM (no function call result pushed)
pipecat version
latest
Python version
3.12
Operating System
macOS 15
Issue description
When handling a function call on the client side via handleFunctionCall, after calling rtvi_processor.handle_function_call(params=forward_params) on the server side, the result returned from the handleFunctionCall callback is not passed back to the LLM correctly. In the context we can see a function call in progress, and the function is called and invoked correctly, but the return value is never pushed back.
Reproduction steps
- Set up a simple function call example using rtvi_processor.handle_function_call(params=forward_params) on the server side (see the sketch after these steps)
- Set up an llmHelper instance and listen for the function call inside handleFunctionCall
- Return a value inside the handleFunctionCall callback
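A minimal sketch of the server-side half of these steps, assuming a standard pipecat setup; the llm and rtvi_processor instances and the function name get_my_data are illustrative assumptions, not from the report:

from pipecat.services.llm_service import FunctionCallParams

# Hypothetical handler registered with the LLM service; instead of
# computing a result itself, it forwards the call to the client, whose
# handleFunctionCall callback is expected to return the result.
async def forward_to_client(params: FunctionCallParams):
    await rtvi_processor.handle_function_call(params=params)

llm.register_function("get_my_data", forward_to_client)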
Expected behavior
The function call result should be correctly returned to the LLM.
Actual behavior
The function call result is not correctly returned.
Logs
Only this appears in the context:
"type": "function"}]}, {"role": "tool", "content": "IN_PROGRESS", "tool_call_id": "function-call-7458121863988820412"}
I think this might have to do with FunctionCallResultFrame now inheriting from SystemFrame instead of DataFrame? https://github.com/pipecat-ai/pipecat/commit/169b50af61fa87bd645e630ddae38518a823d3c8 @aconchillo
hi! I'm not able to reproduce any issues with the return value. I'd like to make sure I understand your expectations, though. Are you expecting handle_function_call() to return with the results sent by the client? If so, I don't believe this is the expected behavior. To get the results from the client, you need to have a FrameProcessor that checks for the FunctionCallResultFrame and grabs the results there. Just be sure to re-push the frame so it gets handled by the LLM. Below is an example processor for the typical fetch_weather example we use, but where the weather is provided from the client and the server overrides the condition. I'll probably flesh this out further as an update to one of our examples for future reference.
# Imports added for completeness.
from typing import Dict

from pipecat.frames.frames import Frame, FunctionCallResultFrame, TTSSpeakFrame
from pipecat.processors.frame_processor import FrameDirection, FrameProcessor
from pipecat.services.llm_service import FunctionCallParams


class WeatherProcessor(FrameProcessor):
    """Processes weather-related function calls.

    This processor handles the function call to fetch weather data and
    manages the response.
    """

    # Currently does nothing but track waiting calls, though it could be
    # used to correlate results with their originating calls.
    waiting_calls: Dict[str, FunctionCallParams] = {}

    def __init__(self):
        super().__init__()

    async def fetch_weather(self, params: FunctionCallParams):
        print("Fetching weather data...", params)
        await params.llm.push_frame(TTSSpeakFrame("Let me check on that."))
        # Forward the call to the client via the RTVI processor (an `rtvi`
        # instance is assumed to be in scope here).
        await rtvi.handle_function_call(params)
        self.waiting_calls[params.tool_call_id] = params

    async def process_frame(self, frame: Frame, direction: FrameDirection):
        """Process incoming frames and handle function calls.

        Args:
            frame: The incoming frame to process
            direction: The direction of frame flow in the pipeline
        """
        await super().process_frame(frame, direction)
        if isinstance(frame, FunctionCallResultFrame):
            print("Function call result:", frame.tool_call_id, frame.result)
            # Override the condition reported by the client.
            if "weather" in frame.result and "condition" in frame.result["weather"]:
                frame.result["weather"]["condition"] = "hazy"
            if frame.tool_call_id in self.waiting_calls:
                del self.waiting_calls[frame.tool_call_id]
        # Always re-push the frame so the llm still receives the result.
        await self.push_frame(frame, direction)
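For completeness, a hedged sketch of how this processor might be wired into a pipeline; the transport, stt, tts, and context_aggregator names follow the standard pipecat examples and are assumptions, not part of this thread:

from pipecat.pipeline.pipeline import Pipeline

# Exact ordering depends on your app; the key point is that the
# WeatherProcessor sits downstream of the llm so the
# FunctionCallResultFrame passes through it and gets re-pushed.
weather = WeatherProcessor()
llm.register_function("fetch_weather", weather.fetch_weather)

pipeline = Pipeline(
    [
        transport.input(),
        rtvi,
        stt,
        context_aggregator.user(),
        llm,
        weather,  # inspects and re-pushes FunctionCallResultFrame
        tts,
        transport.output(),
        context_aggregator.assistant(),
    ]
)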
I am expecting LLMHelper.handleFunctionCall(callback: FunctionCallCallback) to work as it did in the past, and as its description indicates: "If the LLM wants to call a function, RTVI will invoke the callback defined here. Whatever the callback returns will be sent to the LLM as the function result." This is clearly a regression, as this worked in the past without requiring a custom frame processor.
I see, I think I misunderstood what you were asking about. I thought you were saying that you expected the server-side call await rtvi.handle_function_call(params) to return whatever results your client had sent as part of its handleFunctionCall callback, so you could intercept and do something with those results.
But you don't need that on the server side; you are simply seeing that the results sent by the client are not making it to the LLM to close the loop? If so, I was not able to reproduce this once the other fix went in. Can you update your pipecat version to 0.0.67, released yesterday, and see if this has been resolved?
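For reference, pipecat is published on PyPI as pipecat-ai, so the upgrade is:

pip install "pipecat-ai==0.0.67"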