Message not getting added to context when function call and message are produced on same turn.
Same behavior as https://github.com/pipecat-ai/pipecat/pull/1250 has returned.
When the LLM takes a turn and produces both a function call and a message, the message is tossed instead of being added to context. Because the model has no record that it already said something, it repeats itself on its next turn when given specific instructions.
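To make this concrete, here's a schematic of what the context should contain versus what we end up with (illustrative OpenAI-style message dicts, not exact pipecat internals; names and values are placeholders):

```python
# Schematic context after a turn where the model both speaks and calls a
# function (illustrative OpenAI-style messages, not pipecat internals):
expected = [
    {"role": "system", "content": "..."},
    {"role": "user", "content": "What's the weather in Paris?"},
    {"role": "assistant", "content": "Sure, let me check."},  # the spoken message
    {"role": "assistant", "tool_calls": [{
        "id": "call_1", "type": "function",
        "function": {"name": "get_weather", "arguments": '{"city": "Paris"}'},
    }]},
    {"role": "tool", "tool_call_id": "call_1", "content": '{"temp_c": 21}'},
]

# What we actually end up with: the spoken assistant message is gone, so on
# the next turn the model has no record of it and repeats itself.
observed = expected[:2] + expected[3:]
```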
This regression returned in 0.0.59, so we've had to roll back to 0.0.58. Everything up through 0.0.61 is unusable for us at the moment because our agents often speak and call functions on the same turn.
Hi @danthegoodman1,
did you by any chance observe this problem with allow_interruptions set to True as well?
If I do so, I get:
LLM: Generating chat [{"role": "system", "content": "..."}, {"role": "user", "content": "..."}, {"role": "user", "content": "..."}, ...]
when using OpenAILLMContext. Even though the system clearly produced valid output for the TTS component, the assistant messages simply don't get appended, which breaks our business logic.
There is no direct connection to function calling here (at least none that I noticed), but the principle and the consequence are the same.
I am using version 0.0.63.
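For reference, interruptions are enabled the usual way (assuming the standard PipelineParams knob; the pipeline itself is elided here):

```python
from pipecat.pipeline.task import PipelineParams, PipelineTask

# `pipeline` is the bot's usual Pipeline([...]), elided here.
# With allow_interruptions=True the assistant messages go missing from the
# context; we haven't confirmed whether the same happens with it off.
task = PipelineTask(pipeline, params=PipelineParams(allow_interruptions=True))
```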
Best,
Julian
Yes, when we tested 0.0.62 we saw that behavior.
Any updates here? This is completely breaking the business logic for us. We want the response to be in context when an interruption occurs, and there doesn't seem to be an easy way to achieve this. Thanks, any help here would be greatly appreciated!
About a month ago, @aconchillo spent a considerable amount of time looking into this issue and wasn't able to reproduce it. The most helpful thing would be a single-file example that reproduces the issue. @smallestpritish, if you have one, can you please share it?
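In case it helps anyone put one together, a single-file repro would presumably look something like the skeleton below (an untested sketch based on the 0.0.6x-era foundational examples; the transport and TTS are omitted and get_weather is a placeholder, so treat the module paths and names as assumptions to verify):

```python
import asyncio

from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineParams, PipelineTask
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.openai import OpenAILLMService


async def get_weather(function_name, tool_call_id, args, llm, context, result_callback):
    # Dummy tool so the model has something to call on the same turn it speaks.
    await result_callback({"conditions": "sunny", "temp_c": 21})


async def main():
    llm = OpenAILLMService(api_key="sk-...", model="gpt-4o")
    llm.register_function("get_weather", get_weather)

    context = OpenAILLMContext(
        messages=[{
            "role": "system",
            "content": "Always say a short sentence before calling a tool.",
        }],
        tools=[{
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather",
                "parameters": {"type": "object", "properties": {}},
            },
        }],
    )
    aggregator = llm.create_context_aggregator(context)

    # A real repro needs a transport (and TTS) around the LLM; omitted here.
    pipeline = Pipeline([
        aggregator.user(),
        llm,
        aggregator.assistant(),
    ])

    task = PipelineTask(pipeline, params=PipelineParams(allow_interruptions=True))
    await PipelineRunner().run(task)

    # After a turn where the model both speaks and calls get_weather, inspect
    # context.get_messages(): the spoken assistant message should be there but isn't.


if __name__ == "__main__":
    asyncio.run(main())
```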