Inference does not work in webui
Self Checks
- [X] This template is only for bug reports. For questions, please visit Discussions.
- [X] I have thoroughly reviewed the project documentation (installation, training, inference) but couldn't find information to solve my problem.
- [X] I have searched for existing issues, including closed ones.
- [X] I confirm that I am using English to submit this report (I have read and agree to the Language Policy).
- [X] [FOR CHINESE USERS] Please be sure to submit issues in English, otherwise they will be closed. Thank you! :)
- [X] Please do not modify this template and fill in all required fields.
Cloud or Self Hosted
Self Hosted (Source)
Environment Details
Ubuntu 24.04, torch 2.4.1, gradio 5.9.1, Python 3.10
Steps to Reproduce
$ git clone https://github.com/fishaudio/fish-speech.git
$ cd fish-speech
$ huggingface-cli download fishaudio/fish-speech-1.5 --local-dir checkpoints/fish-speech-1.5/
$ RADIO_SERVER_NAME=0.0.0.0 GRADIO_SHARE=True python tools/run_webui.py --llama-checkpoint-path "checkpoints/fish-speech-1.5" --decoder-checkpoint-path "checkpoints/fish-speech-1.5/firefly-gan-vq-fsq-8x1024-21hz-generator.pth" --decoder-config-name firefly-gan_vq
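One thing worth checking in the command above: Gradio reads the environment variable `GRADIO_SERVER_NAME`, so `RADIO_SERVER_NAME` looks like it may be missing its leading "G". If so, the server-name setting is silently ignored. A possible correction (assuming the intent was to bind on all interfaces):

```shell
# Gradio's documented environment variables are GRADIO_SERVER_NAME and
# GRADIO_SHARE; "RADIO_SERVER_NAME" would be silently ignored.
export GRADIO_SERVER_NAME=0.0.0.0
export GRADIO_SHARE=True
# ...then launch run_webui.py with the same arguments as above.
```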
✔️ Expected Behavior
Inference can work in webui
❌ Actual Behavior
Inference does not work in the webui, and no error is reported.
@Picus303 Could you have a look at it?
I'll try to reproduce it. I think it happened once while I was doing tests but never happened again.
You're running without CUDA, so bandwidth and inference speed are too slow.
Thank you for your reply. How can I run the webui with CUDA?
@yuzifu Use --device cuda as a command line argument when starting the webui, but I think it's enabled by default.
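The note that CUDA "is enabled by default" matches the common PyTorch pattern of auto-selecting the device when none is given explicitly. A minimal sketch of that pattern (`pick_device` is a hypothetical helper for illustration, not fish-speech's actual code; it falls back to CPU if torch or CUDA is unavailable):

```python
def pick_device() -> str:
    """Return "cuda" when PyTorch sees a CUDA device, else "cpu"."""
    try:
        import torch  # a fish-speech dependency; guarded for portability
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"

# If this prints "cpu" on a GPU machine, PyTorch was installed without
# CUDA support, and the webui will run inference on the CPU.
print(pick_device())
```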
I can't reproduce this issue for the moment. I guess I'll end up getting it again and then be able to investigate it, but for now I'm not really making progress.
docker run -ti --runtime=nvidia -e NVIDIA_DRIVER_CAPABILITIES=compute,utility -e NVIDIA_VISIBLE_DEVICES=all -p 127.0.0.1:7860:7860 fishaudio/fish-speech
Running in Docker works normally.