Mark Backman

Showing 240 comments by Mark Backman

@mbarinov, sorry, which issue? There are two mentioned here. Any further info you can share? If it's this error: `The server had an error while processing your request. Sorry...`

We're improving error handling here, making it more consistent: https://github.com/pipecat-ai/pipecat/pull/3084. I'm closing out this issue since it's out of scope for Pipecat, AFAIK. If there is a repro case in...

FYI: this is coming in https://github.com/pipecat-ai/pipecat/pull/1753, which should be released today.

Apologies for letting this go for so long. Azure is now a subclass of OpenAILLMService, so these issues have been resolved. I'm going to close this contribution out. Thanks again...

I see the `Idle pipeline detected` log message, which comes from the [Idle Pipeline Detection](https://docs.pipecat.ai/server/pipeline/pipeline-idle-detection) logic, not the UserIdleProcessor (which is a processor in the pipeline that detects no user...

The first thing I would do is add an Observer to your PipelineTask that monitors `LLMTextFrame`, `LLMFullResponseStartFrame`, and `LLMFullResponseEndFrame`. The `LLMLogObserver()` is probably the easiest way to do...
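To illustrate what such an observer does, here is a hedged, plain-Python sketch of the pattern: watch a stream of frames and log the LLM response lifecycle. The `Frame` and `LoggingObserver` names here are hypothetical stand-ins for illustration, not Pipecat's real classes:

```python
from dataclasses import dataclass


# Hypothetical frame type standing in for Pipecat's
# LLMFullResponseStartFrame / LLMTextFrame / LLMFullResponseEndFrame.
@dataclass
class Frame:
    kind: str
    text: str = ""


class LoggingObserver:
    """Sketch of an observer that records the LLM response lifecycle."""

    def __init__(self) -> None:
        self.events: list[str] = []

    def on_frame(self, frame: Frame) -> None:
        if frame.kind == "llm_response_start":
            self.events.append("LLM RESPONSE START")
        elif frame.kind == "llm_text":
            self.events.append(f"LLM GENERATING: {frame.text!r}")
        elif frame.kind == "llm_response_end":
            self.events.append("LLM RESPONSE END")


obs = LoggingObserver()
for f in (
    Frame("llm_response_start"),
    Frame("llm_text", "Hello"),
    Frame("llm_response_end"),
):
    obs.on_frame(f)
print(obs.events)
```

The point of the start/text/end triplet is that it lets you see whether the LLM was ever asked to generate at all, versus generating but having its output dropped downstream.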

LLMTextFrame is actually `LLM GENERATING`. Though, in looking at LLMLogObserver, it doesn't show the flow we need. Instead, let's use the `DebugLogObserver`. You can add this to your PipelineTask: ...

Can you share a little bit more about your code? Specifically, how you set up services, how you set up the context, and how you set up tools? Also, cc...

Sorry for the delay. The one key difference is that the Twilio Chatbot example configures Cartesia as:

```python
tts = CartesiaTTSService(
    api_key=os.getenv("CARTESIA_API_KEY"),
    voice_id="e13cae5c-ec59-4f71-b0a6-266df3c9bb8e",  # Madame Mischief
    push_silence_after_stop=True,
)
```

I'm...