Aleix Conchillo Flaqué
> One good thing about this approach is that whatever we push through `TTSSpeakFrame` will always be part of the context, creating a unified behavior.
>
> Because with the...
One option is to pass a `text_only_pipeline: bool` to `PipelineParams`. This would be used when the aggregator is initialized and would result in using the current `LLMFullResponseStartFrame`/`LLMFullResponseEndFrame` behavior.
> One option is to pass a `text_only_pipeline: bool` to `PipelineParams`. This would be used when the aggregator is initialized and would result in using the current `LLMFullResponseStartFrame`/`LLMFullResponseEndFrame` behavior.

But...
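A rough sketch of what the proposed flag could look like. Note this is only an illustration of the idea, using a stand-in dataclass rather than Pipecat's actual `PipelineParams` (whose real fields may differ):

```python
from dataclasses import dataclass


@dataclass
class PipelineParams:
    """Stand-in for Pipecat's PipelineParams, for illustration only."""

    allow_interruptions: bool = False
    # Proposed (hypothetical) flag: when True, the aggregator keeps the
    # current LLMFullResponseStartFrame/LLMFullResponseEndFrame behavior.
    text_only_pipeline: bool = False


params = PipelineParams(text_only_pipeline=True)
print(params.text_only_pipeline)
```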
> What issue are we trying to solve? Is it that the TTSSpeakFrames aren't added to the context?

Yes. We currently, after a recent change, only add them if they...
> Just to confirm before I start reviewing it.
>
> Based on the last meeting, this PR will be updated so that we can define through settings whether all...
> This looks good to me, pending Mark's one-character change; is there anything else I can help test?

If you have time and you want to play with it a...
This is fixed now. The issue was that if there are two or more function calls to be executed in parallel, the first one might get executed right away without...
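The fix described above amounts to collecting every function call from the completion before executing any of them, so the first call can't run ahead of its siblings. A minimal sketch of that pattern with `asyncio.gather` (illustrative only, not Pipecat's actual implementation):

```python
import asyncio


async def run_tool(name: str) -> str:
    """Stand-in for executing a single function call."""
    await asyncio.sleep(0)  # placeholder for real work
    return f"{name}:done"


async def execute_parallel(calls: list[str]) -> list[str]:
    # Register every call first, then execute them together; none of
    # them starts before the full set is known.
    return await asyncio.gather(*(run_tool(c) for c in calls))


results = asyncio.run(execute_parallel(["get_weather", "get_time"]))
print(results)
```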
Btw, Claude tries to avoid executing function calls in parallel, so it's harder to test (see https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/implement-tool-use#parallel-tool-use). That's why you were getting:

- Function call 1
- LLM chat completion...
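For reference, the linked docs describe a `disable_parallel_tool_use` setting inside `tool_choice` that controls whether Claude may emit multiple `tool_use` blocks in one response (though even with it off, Claude often still issues calls one at a time). Sketch of the request shape only:

```python
# Shape of the tool_choice field per Anthropic's parallel tool use docs;
# this is just the dict, not a full API request.
tool_choice = {"type": "auto", "disable_parallel_tool_use": False}
print(tool_choice)
```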
> This is looking much better! I'm now seeing OpenAI, Anthropic, Gemini, OpenAI Realtime, and Gemini Live all behave as expected.
>
> Two things I noticed:
>
> * Running...
Since we are renaming examples, we might need to update `scripts/evals/run-release-evals.py`