Tarun Garg
@chadbailey59 were we able to get anywhere with this? Did 11labs respond?
Is this ElevenLabs or the LLM? The LLM is usually what produces these outputs.
@markbackman any timeline for this fix?
@markbackman @aconchillo any insight on this?

> I have tried one solution by increasing the stop_time_sec in the TTS service stop_frame_handler, which results in waiting for 7-8 secs instead of 2 secs...
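
For context, here is a minimal sketch of the pattern being described: a stop-frame handler that waits `stop_time_sec` for more TTS audio before signalling that speech has stopped. Raising `stop_time_sec` (e.g. from ~2 s to 7-8 s) delays that stop signal, which is the trade-off mentioned in the quote. The names `stop_time_sec`, `stop_frame_handler`, and the class below mirror the comment and are not the actual Pipecat API; treat this as an illustration only.

```python
import asyncio


class SketchTTSService:
    """Illustrative only; not the real Pipecat TTS service."""

    def __init__(self, stop_time_sec: float = 2.0):
        # How long to wait for more audio before declaring the utterance done.
        self.stop_time_sec = stop_time_sec
        self._audio_event = asyncio.Event()

    def audio_received(self):
        # Called whenever a new TTS audio chunk arrives.
        self._audio_event.set()

    async def stop_frame_handler(self):
        # Wait for more audio; if none arrives within stop_time_sec,
        # treat the utterance as finished and emit the stop signal.
        while True:
            try:
                await asyncio.wait_for(
                    self._audio_event.wait(), timeout=self.stop_time_sec
                )
                self._audio_event.clear()
            except asyncio.TimeoutError:
                print(f"No audio for {self.stop_time_sec}s -> push TTSStoppedFrame")
                return
```

With `stop_time_sec=7.0`, the handler simply waits longer before giving up, which matches the 7-8 second delay reported above.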