Mark Backman
Interesting. I've tested probably hundreds of sessions with that same configuration and have never seen this issue. Are you able to reproduce the issue routinely?
This is an issue related to ElevenLabs. They're requesting more information about the generation to troubleshoot the issue on their end.
@muranski good to hear! We'll be releasing with this improvement soon.
@laurentwizecamel can you try again with 0.0.68? This PR was made by an eng from 11Labs to fix the issue you're facing: https://github.com/pipecat-ai/pipecat/pull/1790
> [@markbackman](https://github.com/markbackman) [@muranski](https://github.com/muranski) I upgraded to 0.0.68 with the hopes of having this issue resolved but I still see it. Any word from the folks at ElevenLabs? Which specific 1008...
I haven't used what @Ahmer967 has suggested.

> Is something like this on the roadmap @markbackman? Would be happy to contribute here with a PR :)

We don't have...
Somehow I totally missed this thread. Apologies! One question: why a standalone RAG service vs. having the LLM make a tool call with a RAG query? The benefits of...
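For context, here's a minimal sketch of the tool-call approach being asked about: the LLM decides when it needs external knowledge and calls a RAG query tool, whose handler runs retrieval and returns passages for grounding. The tool name `query_knowledge_base`, its schema, and `retrieve()` are illustrative only, not existing pipecat or ElevenLabs APIs.

```python
# Hypothetical sketch of "the LLM makes a tool call with a RAG query":
# the model decides when to retrieve; the handler performs the search.
from typing import List

# OpenAI-style tool schema the LLM can call when it needs external knowledge.
RAG_TOOL = {
    "type": "function",
    "function": {
        "name": "query_knowledge_base",
        "description": "Search the knowledge base for passages relevant to the user's question.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Search query derived from the conversation.",
                }
            },
            "required": ["query"],
        },
    },
}


def retrieve(query: str) -> List[str]:
    """Placeholder retrieval step; a real implementation would hit a vector store."""
    return [f"(stub passage matching '{query}')"]


def handle_query_knowledge_base(arguments: dict) -> str:
    """Tool-call handler: run retrieval and return text the LLM can ground its answer on."""
    passages = retrieve(arguments["query"])
    return "\n\n".join(passages)


if __name__ == "__main__":
    # Simulate the LLM deciding to call the tool mid-conversation.
    print(handle_query_knowledge_base({"query": "refund policy"}))
```

The trade-off is roughly: a standalone RAG service retrieves on every turn, while the tool-call approach lets the model decide when retrieval is worth the extra round trip.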
The ElevenLabs team has acknowledged the issue and is working on it.
This is ElevenLabs. It outputs word/timestamp pairs, which we use to determine what the bot says down to the word level. The TTS service outputs these as `TTSTextFrame`s.
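As a rough illustration of how those word/timestamp pairs could become per-word frames, here is a minimal sketch. The `WordFrame` class is a stand-in for pipecat's `TTSTextFrame`, and the alignment shape is an assumption, not ElevenLabs' exact payload.

```python
# Minimal sketch (not pipecat's actual implementation): turning TTS
# word/timestamp pairs into per-word frames a downstream processor can use
# to track, word by word, what the bot has actually said.
from dataclasses import dataclass
from typing import Iterator, List, Tuple


@dataclass
class WordFrame:
    """Stand-in for pipecat's per-word text frame (TTSTextFrame); fields are illustrative."""
    text: str
    start_time: float  # seconds from the start of the generated TTS audio


def frames_from_alignment(words: List[Tuple[str, float]]) -> Iterator[WordFrame]:
    """Yield one frame per (word, start_time) pair from the TTS alignment."""
    for word, start_time in words:
        # The timestamp lets downstream processors know when each word is
        # spoken, so an interruption can truncate the context at the word level.
        yield WordFrame(text=word, start_time=start_time)


# Example: alignment data as it might arrive from the TTS service.
alignment = [("Hello", 0.0), ("there,", 0.32), ("how", 0.61), ("can", 0.78), ("I", 0.90), ("help?", 1.02)]
for frame in frames_from_alignment(alignment):
    print(frame.start_time, frame.text)
```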
> Is there a corresponding issue on the elevenlabs side we can monitor?

Good question. I just asked the 11Labs team. I'll post when I hear back.