Why does the TTS interface return all fragments at once instead of streaming when the streaming parameter is set to True?
Self Checks
- [X] This template is only for bug reports. For questions, please visit Discussions.
- [X] I have thoroughly reviewed the project documentation (installation, training, inference) but couldn't find information to solve my problem.
- [X] I have searched for existing issues, including closed ones.
- [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [X] [FOR CHINESE USERS] Please be sure to submit issues in English; otherwise they will be closed. Thank you! :)
- [X] Please do not modify this template and fill in all required fields.
Cloud or Self Hosted
Self Hosted (Source)
Environment Details
Linux
Steps to Reproduce
Call the TTS interface with the streaming parameter set to True and iterate over the returned chunks.
✔️ Expected Behavior
Audio chunks are returned incrementally while synthesis is still in progress, so playback can start almost immediately.
❌ Actual Behavior
All fragments are returned only after the entire audio has been generated, so setting streaming to True does not actually stream.
This "streaming:true" means: generating a file that can be streamed, rather than streaming the file back to you. you can transfer this file to your client in a streamed format. At least that's how I understand it.
In that case, wouldn't it be a fake stream? If the output were streamed while it is being generated, response latency would improve greatly.
I don't know how to access a real stream either. I split an article into several paragraphs by punctuation, start streaming as soon as the first piece of audio is generated, and dynamically append the remaining paragraphs to the first audio as they complete, so there is no long wait even for a very long article.
I did the same thing, splitting sentences and cloning in batches, but I'm working on a low-latency real-time interaction example. If I could do true streaming, the experience would be faster.
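For reference, a minimal sketch of the split-and-append workaround both replies above describe; `synthesize` is a hypothetical stand-in for whatever blocking TTS call is actually used:

```python
import re

def pseudo_stream(article: str, synthesize):
    """Yield audio sentence by sentence as a "fake stream".
    `synthesize` is a hypothetical blocking TTS call (str -> bytes),
    not an API from this repo."""
    # Split on sentence-ending punctuation (ASCII and CJK) so the first
    # chunk is short and playback can start early.
    for sentence in re.split(r"(?<=[.!?。!?])\s*", article.strip()):
        if sentence:
            yield synthesize(sentence)
```

The first yield arrives after one short sentence's worth of synthesis instead of the whole article's, which is why the perceived latency drops.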
Yes, there must be some way. Please let me know if you find it.
```python
from fish_audio_sdk import WebSocketSession, TTSRequest, ReferenceAudio
import time

text_use = "After reading Philippians 4:4-9, you might have noticed that Paul, despite his difficult circumstances, doesn’t ask for escape—he asks for peace. It’s amazing to think that Paul wrote this letter while he was in prison, and yet, his focus wasn’t on his suffering but on encouraging others to rejoice in the Lord and not to be anxious. The key here is in the advice he gives: instead of worrying, we’re told to bring our concerns to God in prayer. It’s not a promise that life will be free of troubles, but a reminder that peace doesn’t come from the absence of problems—it comes from God’s presence in our lives. One verse that really stands out is verse 7, where Paul talks about the peace of God guarding our hearts and minds. Imagine that—God’s peace is like a shield, protecting you from anxiety’s grip. Have you ever noticed how, when you take your worries to God, your problems may not immediately disappear, but your perspective shifts? What if you made it a habit to turn to God with everything, both big and small? Reflect on how this might change your daily life. As we close today's session, remember this: God's peace is available to you right now, in whatever situation you’re facing. Let's carry that peace with us as we move forward."

print(len(text_use))
start = time.time()
print(start)

sync_websocket = WebSocketSession("xxxxxx")  # API key placeholder

def stream():
    # Feed the input text to the websocket word by word.
    text = text_use
    for line in text.split():
        yield line + " "

tts_request = TTSRequest(
    text=text_use,
    reference_id="536b85cafa3542c6860bb8a189d20308",
)

# Or you can use reference audio instead of a reference_id:
# tts_request = TTSRequest(
#     text="",
#     references=[
#         ReferenceAudio(
#             audio=open("lengyue.wav", "rb").read(),
#             text="Text in reference AUDIO",
#         )
#     ],
# )

with open("output_websocket22233.mp3", "wb") as f:
    for chunk in sync_websocket.tts(
        tts_request,
        stream(),  # stream the text to the API
    ):
        print(chunk)  # raw audio bytes as they arrive
        f.write(chunk)

end = time.time()
print(end)
print(int(end - start))  # total elapsed seconds
```
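One way to check whether the websocket call truly streams is to log the time to the first chunk separately from the total time; if the first chunk arrives only slightly before the last, the audio was fully generated before anything was sent. A sketch of that measurement, reusing `sync_websocket`, `tts_request`, and `stream()` from the snippet above:

```python
import time

start = time.time()
first_chunk_at = None  # set when the first audio bytes arrive

with open("output_websocket_timed.mp3", "wb") as f:
    for chunk in sync_websocket.tts(tts_request, stream()):
        if first_chunk_at is None:
            first_chunk_at = time.time()
        f.write(chunk)

# Assumes at least one chunk was received.
print(f"time to first chunk: {first_chunk_at - start:.2f}s")
print(f"total time:          {time.time() - start:.2f}s")
```

For true streaming, the time to first chunk should be a small fraction of the total; in the behavior reported in this thread, the two are nearly equal.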
Sorry, as I understand it, the approach in that snippet is to split the sentence and call TTS step by step, in batches, which creates a "fake stream". What I want is streaming at the API level: when I submit a sentence for TTS, it should start returning an audio stream to me as soon as the first words have been processed.
Exactly! Did you get any further? Do the other parts of the API allow this?
@Rosejacka Did you find any way to scale the model on a single GPU instance using workers? (On an A100 with 80 GB of VRAM, the fine-tuned model uses 2.5-3 GB of VRAM, and with 8 workers the model is loaded onto the GPU separately for each worker, using 24 GB of VRAM.) They do not respond in parallel; instead they behave like a queue and respond one by one.
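Not a fish-speech-specific answer, but the usual fix for the per-worker VRAM blow-up is to load the model once in a dedicated process and feed it through a queue, instead of letting every web worker hold its own copy. A generic sketch with Python's multiprocessing; `load_model` and `Model.tts` are hypothetical stubs, not this project's API:

```python
import multiprocessing as mp

def load_model():
    # Hypothetical stand-in: load the TTS model exactly once.
    class Model:
        def tts(self, text: str) -> bytes:
            return b""  # placeholder inference result
    return Model()

def model_server(requests, responses):
    model = load_model()  # one copy in VRAM, regardless of worker count
    while True:
        req_id, text = requests.get()
        responses.put((req_id, model.tts(text)))

if __name__ == "__main__":
    requests, responses = mp.Queue(), mp.Queue()
    mp.Process(target=model_server, args=(requests, responses), daemon=True).start()
    # Web workers enqueue (req_id, text) and wait for the matching response.
    requests.put((1, "hello"))
    print(responses.get())  # (1, b'') from the single shared model
```

This keeps VRAM flat, but throughput is still serial; actual parallel responses would need batched inference inside that single server process.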
@tarun7r Did you have any success? I'm currently trying to do batch inference as well.
Can anyone deal with this question?
Yeah, I'm also trying to stream, but it's rubbish, since it seemingly only sends data once all the audio has already been generated. That's obviously not a stream!
Is there a good solution?
Hi @PoTaTo-Mika,
In the Fish-Speech 1.5 codebase, the streaming code does not work as intended, as many have reported above. Any suggestions on how to make it stream properly?
Is there any solution? I need that too.
Thanks for all your replies; we'll test whether the function works.
Obviously the stream function returns chunk-like data: the LLM backbone supports streaming, but the vocoder currently does not, since it isn't an autoregressive model.
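If that is the constraint, one workaround (an illustration, not something the repo ships) is to vocode in windows: buffer the codec tokens streamed by the LLM backbone and decode each fixed-size window as it fills, accepting possible artifacts at window boundaries. A sketch with hypothetical `token_stream` and `vocoder_decode` stand-ins:

```python
def windowed_vocode(token_stream, vocoder_decode, window: int = 64):
    """Approximate streaming with a non-autoregressive vocoder by decoding
    fixed-size windows of codec tokens as the LLM backbone emits them.
    Both arguments are hypothetical stand-ins, not this repo's API."""
    buffer = []
    for token in token_stream:
        buffer.append(token)
        if len(buffer) >= window:
            yield vocoder_decode(buffer)  # audio for this window
            buffer = []
    if buffer:
        yield vocoder_decode(buffer)  # flush the final partial window
```

Overlapping adjacent windows and crossfading the decoded audio would reduce boundary clicks, at the cost of some extra compute.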