
Flow gets stuck and idles until the user says "are you still there"

Open abrar360 opened this issue 8 months ago • 13 comments

I'm noticing several cases where the agent should respond or make a function call after saying something, but it just generates the TTS and then freezes until the user asks "are you still there".

Here are the debug messages printed right before it freezes:

Running version 0.0.61.

Image

abrar360 avatar Apr 16 '25 19:04 abrar360

Can you provide more context? Also:

  • How frequently does this happen?
  • Can you provide a broader look at the logs?
  • What does your pipeline look like?
  • Can you repro it on https://github.com/pipecat-ai/pipecat/blob/main/examples/foundational/07-interruptible.py?

markbackman avatar Apr 16 '25 22:04 markbackman

  1. It happens about 25% of the time, occurring inconsistently and unpredictably. I'm having trouble nailing down how to reproduce it, which makes me wonder if it depends on the following non-deterministic factors:
  • External API response timing (I tried queueing a hardcoded TTSSay whenever an API call is made to help remedy this)
  • STT/LLM/TTS API response timing
  • Nature of the LLM response/function calling ("stopped" reason "end_turn")

  2. The logs are very long and contain some sensitive info; I have them dumped to a file and can share those with you, but they seem consistent with the pattern in the snapshot I shared above (hangs after "Bot stopped speaking"). In some cases this is because the LLM should be making a function call after speaking but fails to make the necessary call. These could hopefully be fixed by prompting.

  3. Using Pipecat Flows, the general pipeline is:

  • Greeting node: jots down the customer's intent in a note to self (using a tool). The transition handler recites a canned TTSSay ("Sure thing, I'd be happy to help!") and transitions to the help_user node.
  • Help user node (general node for helping the user): based on what was said in the greeting node, either answers questions about availability/FAQs or, if the intent fits into one of the categories [BOOK, CANCEL, RESCHEDULE], selects that using a tool. That tool then routes the user through the appropriate sequence of nodes for the selected category.

  4. I'm having trouble even reproducing it in my own code base right now, but I can try to see if I can reproduce it with that example.

abrar360 avatar Apr 17 '25 17:04 abrar360

Ok, that's helpful. Flows relies on function calling to make transitions between nodes. Previously (versions earlier than 0.0.60), function calls were susceptible to being interrupted, which would cause them to never complete. In newer versions, function calls run as tasks, which can be cleanly interrupted. OpenAI is generally good at handling this, but Anthropic and Gemini are not as reliable.

My hunch is that your function calls are getting interrupted, which results in unexpected behavior. I would recommend adding an STTMuteFilter to your pipeline to help prevent interruptions from disrupting function calls. You can set the STT strategy to FUNCTION_CALL, which will prevent user audio from interrupting during a function call. This will ensure that critical operations complete without user interruption, while still allowing the user to interrupt the bot during normal speech.

Check out how to use the STTMuteFilter:

  • Docs: https://docs.pipecat.ai/server/utilities/filters/stt-mute#sttmutefilter
  • Demo: https://github.com/pipecat-ai/pipecat/blob/main/examples/foundational/24-stt-mute-filter.py
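
For reference, wiring the filter in might look something like this (a sketch based on the linked docs; double-check the import paths against your Pipecat version, and `transport`, `stt`, `llm`, `tts`, and `context_aggregator` are assumed to exist from your setup):

```python
from pipecat.processors.filters.stt_mute_filter import (
    STTMuteConfig,
    STTMuteFilter,
    STTMuteStrategy,
)

# Mute STT while a function call is in flight, so user speech can't
# interrupt the call before it completes.
stt_mute = STTMuteFilter(
    config=STTMuteConfig(strategies={STTMuteStrategy.FUNCTION_CALL})
)

pipeline = Pipeline(
    [
        transport.input(),
        stt_mute,  # placed before the STT service so muting applies upstream
        stt,
        context_aggregator.user(),
        llm,
        tts,
        transport.output(),
        context_aggregator.assistant(),
    ]
)
```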

markbackman avatar Apr 18 '25 12:04 markbackman

Thanks for your suggestion. I'm already using STTMuteFilter with the FUNCTION_CALL strategy. I suspect that there's something besides user interruptions that might be causing the issue. I will just have to add additional logging to get a better idea of what's going on.

I just enabled performance metrics to try and see if an external API might be the culprit.

Do you know if there's any built-in functionality for clean log storage that will organize logs for each conversation? Would the new mem0 integration be good for this?

abrar360 avatar Apr 19 '25 16:04 abrar360

I just saw that the new v0.0.63 release mentions a fix: "Fixed an issue where LLMAssistantContextAggregator would prevent a BotStoppedSpeakingFrame from moving through the pipeline."

I suspect this might be the exact issue I'm facing, since my issue involves getting stuck right after the logs say "Bot stopped speaking".

abrar360 avatar Apr 19 '25 16:04 abrar360

@abrar360 did upgrading to 0.0.63 fix this? I'm facing the same issue with OpenAI too; sometimes the bot gets stuck after function calling and we have to interrupt the bot in order to proceed further during the call. FYI, I am already on Pipecat version 0.0.63.

The behaviour is quite inconsistent.

rahultyl avatar Apr 19 '25 18:04 rahultyl

@rahultyl I haven't upgraded yet. I'm running 0.0.61 and I tried to upgrade to 0.0.62 some time ago when it released and some of the changes were breaking other things in my flow. Most likely I will dig into how the fix works in 0.0.63 and just try to make those changes to my local copy of 0.0.61 and see if that fixes it. I'll keep you posted and maybe share my patched version if it ends up working.

abrar360 avatar Apr 19 '25 19:04 abrar360

@markbackman I'm noticing that you made the commit which fixes the BotStoppedSpeakingFrame issue in v0.0.63. But I see that you haven't mentioned it here, which leads me to wonder if maybe you have reason to believe that it is unlikely that this issue is the culprit in my case. Could you weigh in?

Image

abrar360 avatar Apr 21 '25 20:04 abrar360

@rahultyl So I still want to see what Mark says, but it looks like it's just a one line change if you want to try it out and see if it helps for you: https://github.com/pipecat-ai/pipecat/commit/b85bd91d084c9cd2aff988a911fe2cf078b1a25b

abrar360 avatar Apr 21 '25 20:04 abrar360

b85bd91

So you're saying this change is the culprit, and reverting it can solve the issue? Did you verify it @abrar360?

rahultyl avatar Apr 24 '25 05:04 rahultyl

Sorry for the delay. Here's the PR in question: https://github.com/pipecat-ai/pipecat/pull/1508.

I added this change because the assistant context aggregator was blocking BotStoppedSpeakingFrames. This is unlikely to have an impact for you because the assistant context aggregator is almost always at the end of the pipeline. The specific case I was handling had to do with the Gemini Multimodal Live API.

This likely won't help because pipelines usually look something like this:

    pipeline = Pipeline(
        [
            transport.input(),
            stt,
            context_aggregator.user(),
            llm,
            tts,
            transport.output(),
            context_aggregator.assistant(),  # Assistant context aggregator
        ]
    )

That is, the assistant context aggregator is normally the last processor in the pipeline, so this was a latent issue not affecting many.


As for the issue, are you getting stuck in any particular part of the conversation? For example, is it when the LLM is running a function call? If so, the nominal flow should look something like this:

2025-04-24 10:51:02.151 | DEBUG    | pipecat.services.llm_service:_run_function_call:172 - OpenAILLMService#0 Calling function [collect_party_size:call_v4CYCYacaPFqOenjgC4jmNNN] with arguments {'size': 2}
2025-04-24 10:51:02.151 | DEBUG    | pipecat.processors.aggregators.llm_response:_handle_function_call_in_progress:475 - OpenAIAssistantContextAggregator#0 FunctionCallInProgressFrame: [collect_party_size:call_v4CYCYacaPFqOenjgC4jmNNN]
2025-04-24 10:51:02.151 | DEBUG    | pipecat_flows.manager:transition_func:356 - Function call pending: collect_party_size (total: 1)
2025-04-24 10:51:02.152 | DEBUG    | pipecat_flows.manager:transition_func:363 - Handler completed for collect_party_size
2025-04-24 10:51:02.152 | DEBUG    | pipecat.processors.aggregators.llm_response:_handle_function_call_result:482 - OpenAIAssistantContextAggregator#0 FunctionCallResultFrame: [collect_party_size:call_v4CYCYacaPFqOenjgC4jmNNN]
2025-04-24 10:51:02.152 | DEBUG    | pipecat_flows.manager:decrease_pending_function_calls:298 - Function call completed: collect_party_size (remaining: 0)
2025-04-24 10:51:02.152 | DEBUG    | pipecat_flows.manager:on_context_updated_edge:315 - Dynamic transition for: collect_party_size

That is:

  • Run the function call
  • FunctionCallInProgressFrame emitted
  • Flows tracks the pending function call; (total: 1) in this example
  • Function call handlers are run
  • FunctionCallResultFrame emitted with the result
  • Flows updates the pending function call count; (remaining: 0) in this example, which signals a transition (via transition_callback)
  • Transition starts

Make sure you see this type of process happening.
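
The pending-call bookkeeping shown in those logs can be illustrated with a small self-contained sketch (this is not Flows' actual implementation; the `PendingCalls` name and callback shape are made up for illustration):

```python
class PendingCalls:
    """Tracks in-flight function calls; fires a transition callback
    only once every pending call has completed."""

    def __init__(self, on_all_complete):
        self._count = 0
        self._on_all_complete = on_all_complete

    def call_started(self, name):
        # Corresponds to FunctionCallInProgressFrame being emitted.
        self._count += 1
        print(f"Function call pending: {name} (total: {self._count})")

    def call_completed(self, name):
        # Corresponds to FunctionCallResultFrame arriving with the result.
        self._count -= 1
        print(f"Function call completed: {name} (remaining: {self._count})")
        if self._count == 0:
            self._on_all_complete(name)  # transition only runs here


transitions = []
tracker = PendingCalls(on_all_complete=lambda name: transitions.append(name))
tracker.call_started("collect_party_size")
tracker.call_completed("collect_party_size")
# transitions == ["collect_party_size"]: the transition fires only after
# the pending count returns to zero.
```

If the result frame never arrives (e.g. the call was interrupted mid-flight), the count never reaches zero and no transition runs, which matches the "stuck after speaking" symptom.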

Also, more logging info would be helpful.

markbackman avatar Apr 24 '25 14:04 markbackman

Also, generally, common causes for the bot not responding are that the VAD picked up a human-sounding input but the STT service couldn't transcribe it. You could also look for instances where you see:

2025-04-24 10:51:00.240 | DEBUG    | pipecat.transports.base_input:_handle_user_interruption:154 - User started speaking
2025-04-24 10:51:01.439 | DEBUG    | pipecat.transports.base_input:_handle_user_interruption:164 - User stopped speaking

without a corresponding transcription.

Common causes are:

  • noisy environment
  • VADParams (specifically start_secs) that are too aggressive

I'd recommend using the default start_secs value and utilizing noise cancellation, like Krisp, to prevent these types of unwanted interruptions.
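
To see why an aggressive start_secs matters: the VAD only declares "user started speaking" once voiced audio has persisted for start_secs. A toy frame-based model (hypothetical, not Silero's actual implementation) shows how lowering it lets short noise bursts trigger spurious interruptions:

```python
def speech_started(voiced_flags, frame_secs, start_secs):
    """Return True if any run of consecutive voiced frames lasts
    at least start_secs. Each flag covers frame_secs of audio."""
    run = 0.0
    for voiced in voiced_flags:
        run = run + frame_secs if voiced else 0.0
        if run >= start_secs:
            return True
    return False


# A 0.1 s noise burst (two frames of 0.05 s each):
burst = [True, True]
print(speech_started(burst, 0.05, start_secs=0.2))  # False: too short to trigger
# With a more aggressive start_secs, the same burst counts as speech
# and would interrupt the bot:
print(speech_started(burst, 0.05, start_secs=0.1))  # True
```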

markbackman avatar Apr 24 '25 14:04 markbackman

@abrar360 are you still having issues?

markbackman avatar May 06 '25 16:05 markbackman

In the next Pipecat release, 0.0.72, which should be out later this week, we have included several improvements to help diagnose and prevent potential Pipeline freezes, including:

  • Added logging and improved error handling.
  • Introduced task watchdog timers. Watchdog timers are used to detect if a Pipecat task is taking longer than expected (default is 5 seconds). Watchdog timers are disabled by default and can be enabled globally by passing the enable_watchdog_timers argument to the PipelineTask constructor.
  • Fixed an event loop blocking issue when using SentryMetrics.
  • Fixed an issue where the UserStoppedSpeakingFrame was not received if the transport was not receiving new audio frames.
  • Added logging for an edge case where the user interrupted the bot but no new aggregation was received.

In our tests after these changes, we have no longer been able to reproduce the Pipeline freezes, but we know that this is not a guarantee that they are completely fixed.

So, it is important to update to this latest version once it is released, and if any freezes happen again, enable the watchdog to try to understand what is happening and where.
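
Based on the description above, enabling the watchdog globally would look roughly like this (a sketch assuming the 0.0.72 API described in this comment; `pipeline` is your existing Pipeline, and you should check the release notes for the exact signature):

```python
task = PipelineTask(
    pipeline,
    enable_watchdog_timers=True,  # warn when a Pipecat task exceeds the timeout (default: 5 s)
)
```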

I am closing this issue, but feel free to open a new one using the latest version in case you are still able to reproduce it.

filipi87 avatar Jun 26 '25 11:06 filipi87

I'm facing a similar issue,

2025-10-28T16:10:12.043+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: UserStoppedSpeakingFrame#55
2025-10-28T16:10:12.048+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: LLMFullResponseStartFrame#16
2025-10-28T16:10:13.241+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - OpenAILLMService#1 TTFB: 1.2063302993774414
2025-10-28T16:10:13.242+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#149
2025-10-28T16:10:13.400+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7: Generating TTS [Thank you for your question!]
2025-10-28T16:10:13.401+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7 usage characters: 28
2025-10-28T16:10:13.401+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7 processing time: 0.0017969608306884766
2025-10-28T16:10:13.402+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#150
2025-10-28T16:10:13.404+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#151
2025-10-28T16:10:13.406+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: TTSStartedFrame#14
2025-10-28T16:10:13.604+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7 TTFB: 0.20413732528686523
2025-10-28T16:10:13.606+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#152
2025-10-28T16:10:13.608+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: TTSTextFrame#464(pts: 0:00:23.017640, text: [Thank])
2025-10-28T16:10:13.610+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: TTSTextFrame#465(pts: 0:00:23.203400, text: [you])
2025-10-28T16:10:13.614+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - Bot started speaking
2025-10-28T16:10:13.616+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - < TranscriptLogger: BotStartedSpeakingFrame#29
2025-10-28T16:10:13.621+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - < STT Logger: BotStartedSpeakingFrame#29
2025-10-28T16:10:13.648+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: TTSTextFrame#466(pts: 0:00:23.296280, text: [for])
2025-10-28T16:10:13.699+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: TTSTextFrame#467(pts: 0:00:23.389159, text: [your])
2025-10-28T16:10:13.758+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7: Generating TTS [ Just to clarify, our practice is ABCD with Doctor xxxx.]
2025-10-28T16:10:13.758+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7 usage characters: 114
2025-10-28T16:10:13.758+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7 processing time: 0.0009477138519287109
2025-10-28T16:10:13.759+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#153
2025-10-28T16:10:13.761+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#154
2025-10-28T16:10:13.877+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: TTSTextFrame#468(pts: 0:00:23.528479, text: [question!])
2025-10-28T16:10:18.460+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - OpenAILLMService#1 prompt tokens: 13641, completion tokens: 168
2025-10-28T16:10:18.469+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - OpenAILLMService#1 processing time: 6.434534311294556
2025-10-28T16:10:18.496+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: TTSTextFrame#469(pts: 0:00:24.503717, text: [Just])
2025-10-28T16:10:18.499+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: TTSTextFrame#470(pts: 0:00:24.701087, text: [to])
2025-10-28T16:10:18.535+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#157
2025-10-28T16:10:18.557+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#158
2025-10-28T16:10:18.575+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7: Generating TTS [For dental savings plans and insurance networks, we’re in-network with most major PPO plans]
2025-10-28T16:10:18.576+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7 usage characters: 180
2025-10-28T16:10:18.576+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7 processing time: 0.0020987987518310547
2025-10-28T16:10:18.577+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7: Generating TTS [ We also help many folks with other plans, but not all savings plans are considered insurance, and network status can vary.]
2025-10-28T16:10:18.578+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7 usage characters: 123
2025-10-28T16:10:18.578+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7 processing time: 0.001150369644165039
2025-10-28T16:10:18.579+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7: Generating TTS [If you have a specific dental savings plan in mind, could you share the name of the plan?]
2025-10-28T16:10:18.579+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7 usage characters: 89
2025-10-28T16:10:18.579+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7 processing time: 0.0008709430694580078
2025-10-28T16:10:18.580+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7: Generating TTS [ This way, I can have our team verify if we’re able to accept it and get back to you with the details.]
2025-10-28T16:10:18.580+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7 usage characters: 102
2025-10-28T16:10:18.580+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7 processing time: 0.0006690025329589844
2025-10-28T16:10:18.581+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7: Generating TTS [ Would you like to provide that, or would you like me to take a message for our team to follow up?]
2025-10-28T16:10:18.581+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7 usage characters: 98
2025-10-28T16:10:18.581+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - CartesiaTTSService#7 processing time: 0.0006482601165771484
2025-10-28T16:10:18.585+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#159
2025-10-28T16:10:18.587+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#160
2025-10-28T16:10:18.589+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#161
2025-10-28T16:10:18.591+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#162
2025-10-28T16:10:18.592+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#163
2025-10-28T16:10:18.595+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#164
2025-10-28T16:10:18.597+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#165
2025-10-28T16:10:18.598+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#166
2025-10-28T16:10:18.600+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#167
2025-10-28T16:10:18.602+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#168
2025-10-28T16:10:20.066+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - Bot stopped speaking
2025-10-28T16:10:20.070+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - < TranscriptLogger: BotStoppedSpeakingFrame#29
2025-10-28T16:10:20.128+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - < STT Logger: BotStoppedSpeakingFrame#29

After this, when the user said "hello", it resumed. The transcript of what was actually said in the call was:

User: I'm actually calling to verify the dental savings plans with doctor XXX is in network with?
Assistant: Thank you for your question! Just to
User: Hello?

You can see these sentences were generated but never spoken by the assistant:

  • Just to clarify, our practice is ABCD with Doctor xxxx.
  • For dental savings plans and insurance networks, we’re in-network with most major PPO plans
  • We also help many folks with other plans, but not all savings plans are considered insurance, and network status can vary.
  • This way, I can have our team verify if we’re able to accept it and get back to you with the details.
  • Would you like to provide that, or would you like me to take a message for our team to follow up?

And while speaking "Thank you for your question! Just to", there was one pause of 5-6 seconds and another of 10-15 seconds.

Can you help me figure out why is this happening?

Pipecat version: latest (pip)

Python: 3.11

STT: Deepgram Flux

LLM: OpenAI (gpt-4i)

TTS: Cartesia

Transport: FastAPI WebSocket (Twilio)

VAD: SileroVADAnalyzer settings: confidence=0.7, start_secs=0.2, stop_secs=0.8, min_volume=0.6

Regan17 avatar Oct 30 '25 19:10 Regan17

It sounds like there may have been an interruption caused by background noise. This is a common issue and requires a noise cancellation solution to filter out background noises and voices.

You can check your logs to see if there's a User started speaking log that occurs when the bot response stops.
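
A quick way to check this is to scan the logs for "User started speaking" entries just before each "Bot stopped speaking" entry. A generic sketch (adjust the match strings and window to your log format):

```python
def find_interrupted_stops(log_lines, window=3):
    """Return indices of 'Bot stopped speaking' lines that are preceded
    within `window` lines by 'User started speaking', i.e. stops that
    look like interruptions rather than natural end-of-utterance."""
    hits = []
    for i, line in enumerate(log_lines):
        if "Bot stopped speaking" in line:
            context = log_lines[max(0, i - window):i]
            if any("User started speaking" in c for c in context):
                hits.append(i)
    return hits


logs = [
    "... User started speaking",
    "... InterruptionFrame#31",
    "... Bot stopped speaking",  # interrupted: user speech just before
    "... TTSTextFrame ...",
    "... Bot stopped speaking",  # normal stop: no user speech nearby
]
print(find_interrupted_stops(logs))  # [2]
```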

markbackman avatar Oct 31 '25 16:10 markbackman

> It sounds like there may have been an interruption caused by background noise. This is a common issue and requires a noise cancellation solution to filter out background noises and voices.
>
> You can check your logs to see if there's a User started speaking log that occurs when the bot response stops.

That's the thing: there is no "User started speaking" frame. The noise cancellation (AIC) filter was on, and this has happened multiple times across multiple calls, with no interruption or "User started speaking" frame.

Regan17 avatar Oct 31 '25 16:10 Regan17

Something has to cause the bot to stop responding. The mechanism for that is an InterruptionFrame, which happens when either the application pushes it or the user starts speaking. The only other failure modes that could exist are that the playback audio is incomplete, which I don't think is very likely, or that a network issue is disrupting playback; also not very likely.

markbackman avatar Oct 31 '25 17:10 markbackman

These are the logs after that. You can see there is no interruption when the bot stopped speaking, and you can see the timestamps as well: 20 seconds later an interruption frame appeared, because the call was silent for so long that the user had to ask.

2025-10-28T16:10:39.554+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - OpenAILLMService#1: Generating chat from LLM-specific context [{xx}]
2025-10-28T16:10:39.566+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: LLMFullResponseStartFrame#19
2025-10-28T16:10:42.068+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - User started speaking
2025-10-28T16:10:42.068+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - PipelineTask#1: received interruption task frame InterruptionTaskFrame#31
2025-10-28T16:10:42.069+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > STT Logger: InterruptionFrame#31
2025-10-28T16:10:42.069+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > STT Logger: UserStartedSpeakingFrame#62
2025-10-28T16:10:42.070+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - OpenAILLMService#1 processing time: 2.516447067260742
2025-10-28T16:10:42.071+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - OpenAILLMService#1 TTFB: 2.5172462463378906
2025-10-28T16:10:42.073+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#183
2025-10-28T16:10:42.074+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#184
2025-10-28T16:10:42.078+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: InterruptionFrame#31
2025-10-28T16:10:42.080+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: UserStartedSpeakingFrame#62
2025-10-28T16:10:42.187+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - User stopped speaking
2025-10-28T16:10:42.187+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - DeepgramFluxSTTService#1 processing time: 0.11854362487792969
2025-10-28T16:10:42.187+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > STT Logger: MetricsFrame#185
2025-10-28T16:10:42.188+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > STT Logger: UserStoppedSpeakingFrame#58
2025-10-28T16:10:42.188+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > STT Logger: TranscriptionFrame#16(user: , text: [Hello?], language: en, timestamp: 2025-10-28T16:10:42.187+00:00)
2025-10-28T16:10:42.190+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: MetricsFrame#185
2025-10-28T16:10:42.193+0000 - DEBUG - CA16237d8e8e54774d5e2f6fa9337e5933 - > TranscriptLogger: UserStoppedSpeakingFrame#58

Regan17 avatar Oct 31 '25 17:10 Regan17