
When the 'compile' option is enabled, starting the Python script with uvicorn causes inference to block.

Open steven8274 opened this issue 11 months ago • 4 comments

Self Checks

  • [x] This template is only for bug reports. For questions, please visit Discussions.
  • [x] I have thoroughly reviewed the project documentation (installation, training, inference) but couldn't find information to solve my problem.
  • [x] I have searched for existing issues, including closed ones.
  • [x] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
  • [x] [FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
  • [x] Please do not modify this template and fill in all required fields.

Cloud or Self Hosted

Self Hosted (Source)

Environment Details

Ubuntu 22.04.4 LTS

Steps to Reproduce

Follow the steps at 'https://speech.fish.audio/zh/#linux' to set up the Python environment. Follow the steps in 'https://github.com/fishaudio/fish-speech/blob/main/inference.ipynb' to download the model files. Then run the Python code below with uvicorn: uvicorn fish_speech_test:app

fish_speech_test.py content:

import os
import torch
from loguru import logger

from fish_speech.inference_engine import TTSInferenceEngine
from fish_speech.models.text2semantic.inference import launch_thread_safe_queue
from fish_speech.models.vqgan.inference import load_model as load_decoder_model
from fish_speech.utils.schema import ServeTTSRequest
from tools.webui.inference import get_inference_wrapper

from fish_speech.inference_engine.utils import normalize_text

import soundfile as sf

os.environ["EINX_FILTER_TRACEBACK"] = "false"
#os.environ["CUDA_VISIBLE_DEVICES"] = "6"

os.environ["TORCH_LOGS"] = "inductor"
os.environ["TORCH_DUMP_GRAPH"] = "1"
os.environ["TORCH_CUDNN_SDPA_ENABLED"]="1"

from fastapi import FastAPI, HTTPException
from fastapi.responses import StreamingResponse, HTMLResponse
from pydantic import BaseModel
import librosa
import numpy as np
import io
import time
import re
import uuid
import logging
import asyncio

device = "cuda"
# Check if MPS or CUDA is available
if torch.backends.mps.is_available():
    device = "mps"
    logger.info("mps is available, running on mps.")
elif not torch.cuda.is_available():
    logger.info("CUDA is not available, running on CPU.")
    device = "cpu"

logger.info("Loading Llama model...")
llama_queue = launch_thread_safe_queue(
    checkpoint_path="checkpoints/fish-speech-1.5",
    device=device,
    precision=torch.bfloat16,
    compile=True,
)

logger.info("Loading VQ-GAN model...")
decoder_model = load_decoder_model(
    config_name="firefly_gan_vq",
    checkpoint_path="checkpoints/fish-speech-1.5/firefly-gan-vq-fsq-8x1024-21hz-generator.pth",
    device=device,
)

logger.info("Decoder model loaded, warming up...")

# Create the inference engine
inference_engine = TTSInferenceEngine(
    llama_queue=llama_queue,
    decoder_model=decoder_model,
    compile=True,
    precision=torch.bfloat16,
)

# Dry run to check if the model is loaded correctly and avoid the first-time latency
list(
    inference_engine.inference(
        ServeTTSRequest(
            text="Hello world.",
            references=[],
            reference_id=None,
            max_new_tokens=1024,
            chunk_length=200,
            top_p=0.7,
            repetition_penalty=1.5,
            temperature=0.7,
            format="wav",
        )
    )
)

app = FastAPI()
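Note that everything above, including the compile dry run, executes at module import time, so uvicorn fish_speech_test:app performs the warm-up while importing the app, before the server starts. A quick way to check whether the warm-up completes at all outside uvicorn (a hypothetical check, not part of the original report) is to import the module from a plain Python process:

check_import.py content (hypothetical):

import time

start = time.time()
# Importing the repro module triggers the model load and the compile dry run;
# if this finishes but the uvicorn-launched version hangs, the problem is in
# the interaction with the ASGI server rather than in the warm-up itself.
import fish_speech_test  # noqa: F401
print(f"import and warm-up completed in {time.time() - start:.1f}s")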

✔️ Expected Behavior

Inference completes successfully.

❌ Actual Behavior

Inference blocks forever.

steven8274 avatar Jan 17 '25 09:01 steven8274

I face the same issue. Any progress?

quang-vo-ds avatar Feb 20 '25 03:02 quang-vo-ds

Same issue. Enabling the compile flag causes very slow inference.

shengzhou1216 avatar Jun 20 '25 11:06 shengzhou1216

It is normal for the first sample to be slow in compile mode; subsequent samples are generated much faster. Can you share the detailed log from the API?

Whale-Dolphin avatar Sep 21 '25 06:09 Whale-Dolphin
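A workaround sometimes used for this class of problem (a blocking torch.compile warm-up running at module import, before uvicorn's event loop starts) is to move the model loading and the dry run into a FastAPI lifespan handler and run them in a worker thread. Below is a minimal, untested sketch assembled from the same fish-speech calls and checkpoint paths as the repro script above; the build_engine helper is hypothetical, and whether this actually avoids the hang on an affected setup needs verification:

import asyncio
from contextlib import asynccontextmanager

import torch
from fastapi import FastAPI

from fish_speech.inference_engine import TTSInferenceEngine
from fish_speech.models.text2semantic.inference import launch_thread_safe_queue
from fish_speech.models.vqgan.inference import load_model as load_decoder_model
from fish_speech.utils.schema import ServeTTSRequest

engine = None  # set during lifespan startup


def build_engine():
    # Same calls and checkpoints as in the report above.
    llama_queue = launch_thread_safe_queue(
        checkpoint_path="checkpoints/fish-speech-1.5",
        device="cuda",
        precision=torch.bfloat16,
        compile=True,
    )
    decoder_model = load_decoder_model(
        config_name="firefly_gan_vq",
        checkpoint_path="checkpoints/fish-speech-1.5/firefly-gan-vq-fsq-8x1024-21hz-generator.pth",
        device="cuda",
    )
    eng = TTSInferenceEngine(
        llama_queue=llama_queue,
        decoder_model=decoder_model,
        compile=True,
        precision=torch.bfloat16,
    )
    # Dry run so the first real request does not pay the compile latency.
    list(
        eng.inference(
            ServeTTSRequest(
                text="Hello world.",
                references=[],
                reference_id=None,
                max_new_tokens=1024,
                chunk_length=200,
                top_p=0.7,
                repetition_penalty=1.5,
                temperature=0.7,
                format="wav",
            )
        )
    )
    return eng


@asynccontextmanager
async def lifespan(app: FastAPI):
    global engine
    # Run the blocking load and warm-up in a worker thread so the
    # event loop is never blocked by torch.compile tracing.
    engine = await asyncio.to_thread(build_engine)
    yield


app = FastAPI(lifespan=lifespan)

With this layout, uvicorn fish_speech_test:app starts its event loop immediately and performs the warm-up afterwards; request handlers should check that engine is not None before using it.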