Mark Backman

240 comments by Mark Backman

Thanks for reporting. We're working on refactoring those demos. As part of that, @DominicStewart can take a look to see if he can repro this issue.

@matthewj-t6 given that, was it a configuration issue? If so, I'll close out this issue.

I've tested a few different scenarios:

### OpenAI

When `run_in_parallel=True`, I see:

- Function call 1
- LLM chat completion 1
- Function call 2
- LLM chat completion 2...

Hi, I'm interested in understanding this issue. Do you have repro steps that you can share?

@aconchillo made changes to improve the reconnection logic of the Cartesia websocket: https://github.com/pipecat-ai/pipecat/pull/796. That should resolve this issue. If you still see this problem, please open a new issue.

@nikcaryo-super can you share repro information for that case? I know we're connected in Slack, but if you don't mind sharing here, that would be really helpful. Last night, I...

@nikcaryo-super great tip! Cartesia is unique in that if there are 5 minutes of inactivity, it will attempt to disconnect the websocket. Using this, I was able to confirm that...
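For anyone who wants to exercise that idle-timeout behavior without waiting five minutes, here is a minimal sketch that simulates it with a shortened window. Everything below (`FakeConnection`, the 0.1 s timeout) is an illustrative stand-in, not the real Cartesia websocket or Pipecat code:

```python
import asyncio
import time

# Hypothetical sketch: a fake connection that drops after a short idle
# window, mimicking Cartesia's ~5-minute inactivity disconnect so that
# reconnection logic can be exercised quickly in a test.
IDLE_TIMEOUT = 0.1  # seconds; stands in for Cartesia's 5 minutes


class FakeConnection:
    def __init__(self):
        self.open = True
        self._last_activity = time.monotonic()

    async def send(self, data):
        if not self.open:
            raise ConnectionError("websocket closed after inactivity")
        self._last_activity = time.monotonic()

    async def watch_idle(self):
        # Server-side behavior: close once idle past the timeout.
        while self.open:
            await asyncio.sleep(0.02)
            if time.monotonic() - self._last_activity > IDLE_TIMEOUT:
                self.open = False


async def main():
    conn = FakeConnection()
    watcher = asyncio.create_task(conn.watch_idle())
    await conn.send(b"hello")
    await asyncio.sleep(0.3)  # stay idle past the timeout
    try:
        await conn.send(b"world")
        reconnected = False
    except ConnectionError:
        conn = FakeConnection()  # client reconnects on failure
        reconnected = True
    await watcher
    return reconnected


reconnected = asyncio.run(main())
print(reconnected)  # True
```

Shrinking the timeout like this is just a convenient way to confirm that the reconnect path actually fires; the real fix still needs to handle the server-initiated close gracefully.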

Closing this, as #962 has been closed.

I tested this change out, but I see errors when the disconnect/connect happens. Do you see the same? Also, we need some cleanup in this class, which I'm doing here:...

Serializers aren't FrameProcessors, so they don't have access to Frames. Instead, there's an `on_pipeline_started` event handler that you should be able to use to get access to the same timing...
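As a rough sketch of that pattern: a serializer can capture pipeline start timing from the `on_pipeline_started` event instead of a StartFrame. The `PipelineTask` below is a minimal mock so the example runs standalone; the decorator-style registration mirrors Pipecat's event-handler convention, but check the real API for exact names and signatures:

```python
import asyncio
import time

# Hypothetical sketch of the pattern described above: a serializer
# (which is not a FrameProcessor and never sees Frames) records the
# pipeline start time from an "on_pipeline_started" event handler.
# PipelineTask here is a mock, not the real Pipecat class.


class PipelineTask:
    """Mock task that dispatches named events via decorator registration."""

    def __init__(self):
        self._handlers = {}

    def event_handler(self, name):
        def register(fn):
            self._handlers[name] = fn
            return fn
        return register

    async def run(self):
        # Fire the start event before any frames would be processed.
        handler = self._handlers.get("on_pipeline_started")
        if handler:
            await handler(self)


class MySerializer:
    """Serializer that captures timing without access to Frames."""

    def __init__(self):
        self.start_time = None


task = PipelineTask()
serializer = MySerializer()


@task.event_handler("on_pipeline_started")
async def on_pipeline_started(task):
    serializer.start_time = time.monotonic()


asyncio.run(task.run())
print(serializer.start_time is not None)  # True
```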