
Why does the TTS interface return all fragments at once instead of streaming when the `streaming` parameter is set to True?

Open Rosejacka opened this issue 11 months ago • 18 comments

Self Checks

  • [X] This template is only for bug reports. For questions, please visit Discussions.
  • [X] I have thoroughly reviewed the project documentation (installation, training, inference) but couldn't find information to solve my problem.
  • [X] I have searched for existing issues, including closed ones.
  • [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [X] [FOR CHINESE USERS] Please submit issues in English, otherwise they will be closed. Thank you! :)
  • [X] Please do not modify this template and fill in all required fields.

Cloud or Self Hosted

Self Hosted (Source)

Environment Details

linux

Steps to Reproduce

Set the `streaming` parameter of the TTS interface to True; the response still returns all fragments at once instead of streaming them (see attached image).

✔️ Expected Behavior

When `streaming` is set to True, audio fragments should be returned incrementally as they are generated.

❌ Actual Behavior

All fragments are returned together only after the full audio has been generated.

Rosejacka avatar Jan 13 '25 02:01 Rosejacka

My understanding is that `streaming: true` means the server generates a file that is suitable for streaming, rather than streaming the audio back to you as it is generated. You can then transfer that file to your client in a streamed format. At least that's how I understand it.

Haoran1272 avatar Jan 13 '25 10:01 Haoran1272

In that case, wouldn't it be a fake stream? If the output were truly streamed as it is generated, response latency would improve greatly.

Rosejacka avatar Jan 13 '25 10:01 Rosejacka

I don't know how to access a real stream either. My workaround is to split an article into paragraphs at punctuation marks, start streaming as soon as the audio for the first piece of text is generated, and dynamically append the remaining paragraphs as they finish. That way there is no long wait even for a very long article.
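
The workaround described above can be sketched roughly like this, assuming a hypothetical `synthesize(text) -> bytes` TTS call (any blocking TTS API would do):

```python
# Pseudo-streaming sketch: split the text on sentence-ending punctuation,
# synthesize each piece separately, and yield audio as soon as the first
# piece is done instead of waiting for the whole article.
import re
from typing import Callable, Iterator

def split_sentences(text: str) -> list[str]:
    """Split on ., !, ? (and CJK equivalents), keeping the delimiter."""
    parts = re.split(r"(?<=[.!?。！？])\s*", text)
    return [p for p in parts if p]

def pseudo_stream(text: str, synthesize: Callable[[str], bytes]) -> Iterator[bytes]:
    for sentence in split_sentences(text):
        # Each sentence is a separate TTS call; the caller can start
        # playback of the first chunk while later ones are still pending.
        yield synthesize(sentence)
```

Latency then scales with the length of the first sentence rather than the whole article, at the cost of possible prosody seams between chunks.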

Haoran1272 avatar Jan 13 '25 10:01 Haoran1272

I did the same thing, splitting sentences and cloning in batches, but I am working on a low-latency real-time interaction example. If I could do true streaming, the experience would be much more responsive.

Rosejacka avatar Jan 13 '25 10:01 Rosejacka

Yes, there must be some way. Please let me know if you find it.

Haoran1272 avatar Jan 13 '25 10:01 Haoran1272

from fish_audio_sdk import WebSocketSession, TTSRequest, ReferenceAudio
import time

text_use = "After reading Philippians 4:4-9, you might have noticed that Paul, despite his difficult circumstances, doesn’t ask for escape—he asks for peace. It’s amazing to think that Paul wrote this letter while he was in prison, and yet, his focus wasn’t on his suffering but on encouraging others to rejoice in the Lord and not to be anxious. The key here is in the advice he gives: instead of worrying, we’re told to bring our concerns to God in prayer. It’s not a promise that life will be free of troubles, but a reminder that peace doesn’t come from the absence of problems—it comes from God’s presence in our lives. One verse that really stands out is verse 7, where Paul talks about the peace of God guarding our hearts and minds. Imagine that—God’s peace is like a shield, protecting you from anxiety’s grip. Have you ever noticed how, when you take your worries to God, your problems may not immediately disappear, but your perspective shifts? What if you made it a habit to turn to God with everything, both big and small? Reflect on how this might change your daily life. As we close today's session, remember this: God's peace is available to you right now, in whatever situation you’re facing. Let's carry that peace with us as we move forward."
print(len(text_use))
start = time.time()
print(start)
sync_websocket = WebSocketSession("xxxxxx")  # API key placeholder

def stream():
    # Feed the text to the websocket word by word, simulating an
    # upstream text stream (e.g. tokens arriving from an LLM).
    for word in text_use.split():
        yield word + " "

tts_request = TTSRequest(
    text=text_use,
    reference_id="536b85cafa3542c6860bb8a189d20308",
)

# Or you can use reference audio instead of a reference_id:
# tts_request = TTSRequest(
#     text="",
#     references=[
#         ReferenceAudio(
#             audio=open("lengyue.wav", "rb").read(),
#             text="Text in reference AUDIO",
#         )
#     ],
# )

# Audio chunks arrive over the websocket and are appended to the file.
with open("output_websocket22233.mp3", "wb") as f:
    for chunk in sync_websocket.tts(
            tts_request,
            stream(),  # stream the text to the server
    ):
        print(chunk)
        f.write(chunk)

end = time.time()
print(end)
print(int(end - start))

hakusai22 avatar Jan 16 '25 01:01 hakusai22

Sorry, but as I understand it, your approach splits the sentence and gradually calls TTS in batches, creating a "fake stream". What I want is streaming at the API level: when I submit a sentence for TTS, the server should start returning audio chunks as soon as the first words have been processed.
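
One way to tell the two apart empirically (a sketch, not part of any SDK) is to timestamp each chunk arriving from the response iterator: a real stream spreads chunks over the generation time, while a fake stream delivers everything in one burst at the end.

```python
# Probe whether a byte iterator behaves like a true stream: record the
# arrival time and size of every chunk relative to the first request.
import time
from typing import Iterable

def probe_stream(chunks: Iterable[bytes]) -> list[tuple[float, int]]:
    """Return (seconds elapsed, chunk size) for each received chunk."""
    start = time.monotonic()
    return [(time.monotonic() - start, len(chunk)) for chunk in chunks]

def is_probably_buffered(timings: list[tuple[float, int]],
                         threshold: float = 0.05) -> bool:
    """If nearly all the wall time elapses before the first chunk, the
    server most likely generated everything up front ("fake stream")."""
    if not timings:
        return True
    first, last = timings[0][0], timings[-1][0]
    return (last - first) < threshold * max(last, 1e-9)
```

Feeding it the chunk iterator from the websocket example above would show whether chunks arrive incrementally or only after full generation.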

Rosejacka avatar Jan 16 '25 01:01 Rosejacka

Exactly! Did you get any further? Do the other parts allow this?

Don-Chad avatar Jan 24 '25 18:01 Don-Chad

@Rosejacka Did you find any way to scale the model on a single GPU instance using workers? On an A100 (80 GB VRAM), the fine-tuned model uses 2.5-3 GB of VRAM, and with 8 workers a separate copy of the model is loaded for each worker, using 24 GB in total. The workers don't respond in parallel either; they behave like a queue and respond one by one.
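
One generic alternative to per-worker model copies (a sketch under the assumption that the model supports batched inference, not fish-speech's actual serving code) is a single model process with a request queue that batches concurrent requests before each forward pass:

```python
# Dynamic-batching sketch: one model, one worker thread. Concurrent
# requests are collected into a batch, run through a single batched
# inference call, and the results are dispatched back to each caller.
import queue

def batching_worker(requests: "queue.Queue", infer_batch, max_batch=8,
                    timeout=0.01):
    """Each queue item is (text, reply_queue). `infer_batch` is a
    hypothetical function mapping a list of texts to a list of outputs."""
    while True:
        batch = [requests.get()]              # block for the first request
        while len(batch) < max_batch:         # opportunistically add more
            try:
                batch.append(requests.get(timeout=timeout))
            except queue.Empty:
                break
        texts = [text for text, _ in batch]
        for (_, reply), audio in zip(batch, infer_batch(texts)):
            reply.put(audio)                  # hand result back to caller
```

This keeps VRAM usage at one model copy and turns "8 workers waiting in a queue" into one batched forward pass, provided the model's batch dimension is actually exploited.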

tarun7r avatar Mar 20 '25 12:03 tarun7r

@tarun7r Did you have any success? Currently trying to do batch inference as well

mizoru avatar Apr 08 '25 17:04 mizoru

Can anyone address this question?

zbbwss avatar Apr 23 '25 03:04 zbbwss

Yeah, I'm also trying to stream, but it's rubbish, since it seemingly only starts sending once all the audio has already been generated. That's obviously not a stream!

corporate9601 avatar May 10 '25 10:05 corporate9601

Is there a good solution?

suxuanning avatar Jun 16 '25 09:06 suxuanning

Hi @PoTaTo-Mika,

In the Fish-Speech 1.5 model codebase, the streaming code does not work as desired, as many have reported above. Any suggestions on how to make it stream properly?

akashsonowal avatar Aug 17 '25 19:08 akashsonowal

Is there any solution? I need that too.

chaopengio avatar Sep 04 '25 14:09 chaopengio

Thanks for all your replies; we'll test whether the function works.

Whale-Dolphin avatar Sep 21 '25 05:09 Whale-Dolphin

The stream function obviously returns chunk-like data: the LLM backbone supports streaming, but the vocoder currently does not, as it isn't an autoregressive model.
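
One common way around a non-autoregressive vocoder (a sketch, not fish-speech's implementation; `vocode` is a hypothetical function mapping a list of semantic tokens to audio bytes) is to buffer the LLM's streamed tokens into fixed-size windows and vocode each window as it fills:

```python
# Chunked-vocoding sketch: the autoregressive LLM yields semantic tokens
# one at a time; we accumulate them into windows and run the
# non-autoregressive vocoder once per window, emitting audio incrementally.
from typing import Callable, Iterable, Iterator

def chunked_vocode(tokens: Iterable[int],
                   vocode: Callable[[list[int]], bytes],
                   window: int = 32) -> Iterator[bytes]:
    buf: list[int] = []
    for tok in tokens:
        buf.append(tok)
        if len(buf) == window:
            yield vocode(buf)    # emit audio for a full window of tokens
            buf = []
    if buf:                      # flush the final partial window
        yield vocode(buf)
```

This trades one window of latency for incremental output; a real implementation would also need overlap or cross-fading at window boundaries to avoid audible seams.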

PoTaTo-Mika avatar Sep 22 '25 03:09 PoTaTo-Mika