I'm trying to run a feedback loop on the assistant's last message. How do I use LLMFullResponseEndFrame for that?
pipecat version
0.0.87
Python version
3.12
Operating System
No response
Question
I have a use case where I want to analyze the LLM's full response in real time and store it in a cache, so I created a custom frame processor to do that (added as the second-to-last processor, just above transport.output()).
```python
import asyncio

from pipecat.frames.frames import Frame, LLMFullResponseEndFrame, TTSStoppedFrame
from pipecat.processors.frame_processor import FrameDirection, FrameProcessor


class ContextAnalyzer(FrameProcessor):
    async def process_frame(self, frame: Frame, direction: FrameDirection):
        """Invoked for every frame; runs context analysis for assistant messages."""
        # CRITICAL: must call the parent's process_frame
        await super().process_frame(frame, direction)

        # Run analysis in the background (non-blocking);
        # _analyze_context_async is defined elsewhere in this class
        if isinstance(frame, (TTSStoppedFrame, LLMFullResponseEndFrame)):
            asyncio.create_task(self._analyze_context_async(frame))

        # CRITICAL: always push the frame through to the next processor
        await self.push_frame(frame, direction)
```
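For context, the processor is wired in roughly like this (transport, stt, llm, and tts are placeholders for my actual services, not part of the snippet above):

```python
from pipecat.pipeline.pipeline import Pipeline

# Placement sketch: the analyzer sits directly above transport.output().
pipeline = Pipeline([
    transport.input(),   # audio in from the user
    stt,                 # speech-to-text
    llm,                 # LLM inference
    tts,                 # text-to-speech
    ContextAnalyzer(),   # the custom processor shown above
    transport.output(),  # audio out to the user
])
```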
But the processor never receives LLMFullResponseEndFrame; it only ever sees TTSStoppedFrame. Can you share how I can use LLMFullResponseEndFrame here?
What I've tried
No response
Context
No response
Instead of handling both frames, why not just depend on the LLMFullResponseEndFrame?
The LLM's response is bookended by LLMFullResponseStartFrame and LLMFullResponseEndFrame. Given the positioning of the processor, everything in between will be a TTSTextFrame. You can aggregate the TTSTextFrames and then do some processing on the result.
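Here's a minimal sketch of that aggregation pattern (the ResponseAggregator name and the _analyze_context_async hook are placeholders for your own code, not pipecat APIs):

```python
import asyncio

from pipecat.frames.frames import (
    Frame,
    LLMFullResponseEndFrame,
    LLMFullResponseStartFrame,
    TTSTextFrame,
)
from pipecat.processors.frame_processor import FrameDirection, FrameProcessor


class ResponseAggregator(FrameProcessor):
    def __init__(self):
        super().__init__()
        self._parts: list[str] = []

    async def process_frame(self, frame: Frame, direction: FrameDirection):
        await super().process_frame(frame, direction)

        if isinstance(frame, LLMFullResponseStartFrame):
            self._parts = []  # a new assistant response begins
        elif isinstance(frame, TTSTextFrame):
            self._parts.append(frame.text)  # collect the spoken text
        elif isinstance(frame, LLMFullResponseEndFrame):
            # Depending on your TTS service, TTSTextFrames may be words or
            # sentences; adjust the join accordingly.
            full_response = " ".join(self._parts)
            # Analyze/cache in the background so the pipeline isn't blocked.
            asyncio.create_task(self._analyze_context_async(full_response))

        await self.push_frame(frame, direction)
```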
In my pipeline, the frame is never of type LLMFullResponseEndFrame. Does it get pushed into the pipeline somewhere? It doesn't seem to be generated by the LLM layer.
The LLMFullResponseEndFrame comes from the LLM. You can hook up Whisker to see the frame flow through your Pipeline:
https://github.com/pipecat-ai/whisker
This will help you see if you're blocking the frame anywhere.
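A rough sketch of hooking it up; the package and import names here are assumptions based on the Whisker README, so check the repo linked above for the current API:

```python
from pipecat.pipeline.task import PipelineTask
from pipecat_whisker import WhiskerObserver  # assumed import path — verify against the repo

# Attach Whisker as an observer so it can display frames moving
# through the pipeline in its web UI.
whisker = WhiskerObserver(pipeline)
task = PipelineTask(pipeline, observers=[whisker])
```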