Mark Backman
FYI: there's a duplicate-transcription bug in Daily's server-side code. We're working on a fix.
Do you still see this issue?
We're happy to add better support for LiveKit, but we also need some helping hands; ideally, someone with LiveKit expertise. Is anyone available to help out?
I believe the issue is inherent in how LLMs work. The sequence of events is: 1. The user asks to turn on the lights. 2. The LLM calls the `turn_on_light` tool and...
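Roughly, that flow in code: a minimal, non-streaming sketch using the OpenAI Python SDK, where the tool schema, model name, and user message are illustrative assumptions rather than actual pipeline code.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative tool schema; a real pipeline would register a handler for it.
tools = [{
    "type": "function",
    "function": {
        "name": "turn_on_light",
        "description": "Turn on the lights in the room.",
        "parameters": {"type": "object", "properties": {}},
    },
}]

# 1. The user asks to turn on the lights.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Please turn on the lights."}],
    tools=tools,
)

# 2. The LLM responds with a tool call instead of (or before) any text.
message = response.choices[0].message
if message.tool_calls:
    for call in message.tool_calls:
        print(call.function.name, call.function.arguments)
```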
@itsikbelson-spotlight a few things:
- PR #1683 hasn't been merged, so it's not available yet, but it solves a different problem
- you suggest that the LLM should speak about...
Also, you point to #1683, but I'm not sure it does what you're thinking. It specifically handles the case where two function calls run in the same LLM turn. The...
> However, since we're in streaming mode, the LLM outputs tokens whenever it's ready.

Correct.

> The function call itself can be in-between tokens or after the tokens (for instance...
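To make that interleaving concrete, here is a hedged streaming sketch with the OpenAI Python SDK: text tokens arrive as `delta.content` and function-call fragments as `delta.tool_calls`, in whatever order the model emits them (it also shows how two calls can share one turn, per the point above). The tool schema, model name, and user message are illustrative assumptions.

```python
from openai import OpenAI

client = OpenAI()

# Illustrative tool schema, same shape as in the earlier sketch.
tools = [{"type": "function", "function": {
    "name": "turn_on_light",
    "description": "Turn on the lights in the room.",
    "parameters": {"type": "object", "properties": {}},
}}]

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Please turn on the lights."}],
    tools=tools,
    stream=True,
)

for chunk in stream:
    if not chunk.choices:  # e.g. a usage-only final chunk
        continue
    delta = chunk.choices[0].delta
    if delta.content:
        # Text tokens: these may arrive before, between, or without tool calls.
        print(delta.content, end="", flush=True)
    if delta.tool_calls:
        for tc in delta.tool_calls:
            # Function-call fragments; tc.index groups fragments of the same
            # call, so two calls can run in a single LLM turn.
            if tc.function and tc.function.name:
                print(f"\n[tool call started: {tc.function.name}]")
```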
I see. You are adding `Always respond with text explaining what you are going to do before you call function.` to the system prompt to get the LLM to speak...
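For readers following along, a minimal sketch of that prompt-engineering approach; the system-message wording is the instruction quoted above, everything else is illustrative.

```python
messages = [
    {
        "role": "system",
        "content": (
            "You are a helpful voice assistant. "
            # The instruction being discussed: nudge the model to emit text
            # before it emits a function call.
            "Always respond with text explaining what you are going to do "
            "before you call function."
        ),
    },
    {"role": "user", "content": "Please turn on the lights."},
]
```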
Superseded by: https://github.com/pipecat-ai/pipecat/pull/1183
@aconchillo please take a look when you have some time.