Consider using streaming for a faster TTS response
Use the text stream to feed ElevenLabs so TTS can start early, e.g.
```ts
import { openai } from '@ai-sdk/openai'
import { StreamingTextResponse, streamText } from 'ai'

export async function POST(req: Request) {
  const { messages } = await req.json()

  const result = await streamText({
    model: openai('gpt-4-turbo'),
    messages,
  })

  return new StreamingTextResponse(result.toAIStream())
}
```
combined with the ElevenLabs streaming TTS API.
I think the app currently serializes the prompt completion and TTS (i.e. it waits for the full LLM response before synthesizing audio). With streaming, latency should drop significantly.
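One way to glue the two streams together is to buffer the LLM's token stream into sentence-sized chunks, dispatching each completed sentence to the TTS endpoint as soon as it arrives rather than waiting for the whole completion. Below is a minimal sketch of that chunking step; `sentenceChunks` and `fakeTokens` are hypothetical names, not part of the AI SDK or the ElevenLabs SDK, and the actual call to the ElevenLabs streaming endpoint is left as a comment since its exact shape depends on the client library in use.

```typescript
// Buffer streamed LLM tokens and yield complete sentences, so each one
// can be sent to the ElevenLabs streaming TTS endpoint immediately.
// (Hypothetical helper, not part of either SDK.)
async function* sentenceChunks(
  tokens: AsyncIterable<string>
): AsyncGenerator<string> {
  let buffer = ''
  for await (const token of tokens) {
    buffer += token
    // Flush once we see sentence-ending punctuation followed by
    // whitespace (or the end of the buffer).
    const match = buffer.match(/^[\s\S]*?[.!?](\s+|$)/)
    if (match) {
      yield match[0].trim()
      buffer = buffer.slice(match[0].length)
    }
  }
  if (buffer.trim()) yield buffer.trim()
}

// Demo with a stand-in token stream; in the real app the tokens would
// come from streamText's text stream, and each chunk would be POSTed to
// the ElevenLabs streaming TTS endpoint instead of collected.
async function demo(): Promise<string[]> {
  async function* fakeTokens() {
    for (const t of ['Hello ', 'world. ', 'How are ', 'you?']) yield t
  }
  const sentences: string[] = []
  for await (const s of sentenceChunks(fakeTokens())) {
    sentences.push(s) // real app: await sendToTTS(s)
  }
  return sentences
}
```

Chunking on sentence boundaries trades a little latency (waiting for a full sentence) for natural-sounding prosody, since most TTS engines produce awkward audio when fed mid-sentence fragments.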