BUG ChatTTS Internal Server Error
Describe the bug
ChatTTS is deployed successfully and the model shows up in the web UI.
Log:
2024-07-04 00:42:25,572 xinference.model.utils 97 INFO Use model cache from a different hub.
2024-07-04 00:42:26,514 xinference.thirdparty.ChatTTS.core 16784 INFO Load from local: /root/.xinference/cache/ChatTTS
2024-07-04 00:42:27,649 xinference.thirdparty.ChatTTS.core 16784 INFO use cuda:0
2024-07-04 00:42:28,084 xinference.thirdparty.ChatTTS.core 16784 INFO vocos loaded.
2024-07-04 00:42:28,158 xinference.thirdparty.ChatTTS.core 16784 INFO dvae loaded.
2024-07-04 00:42:31,827 xinference.thirdparty.ChatTTS.core 16784 INFO gpt loaded.
2024-07-04 00:42:32,877 xinference.thirdparty.ChatTTS.core 16784 INFO decoder loaded.
2024-07-04 00:42:32,889 xinference.thirdparty.ChatTTS.core 16784 INFO tokenizer loaded.
2024-07-04 00:42:32,889 xinference.thirdparty.ChatTTS.core 16784 INFO All initialized.
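As a sanity check (not part of the original report), the UID of the running model can be confirmed before calling the speech endpoint; a minimal sketch, assuming the standard model-listing route on the same endpoint:

curl -X 'GET' \
'http://0.0.0.0:9997/v1/models' \
-H 'accept: application/json'

If the UID listed for ChatTTS differs from the plain name "ChatTTS", the speech request below should use that UID in its "model" field.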
However, calling the speech endpoint with curl:
curl -X 'POST' \
'http://0.0.0.0:9997/v1/audio/speech' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"model": "ChatTTS",
"text": "Hello",
"voice": "echo",
}'
the request returns: Internal Server Error
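For diagnosis (a sketch, not part of the original report), the same request can be rerun with curl's verbose flag to capture the HTTP status code and any error detail in the response body, writing the output to a file so a successful run yields the audio directly:

curl -v -X 'POST' \
'http://0.0.0.0:9997/v1/audio/speech' \
-H 'accept: application/json' \
-H 'Content-Type: application/json' \
-d '{
"model": "ChatTTS",
"text": "Hello",
"voice": "echo"
}' \
--output hello.mp3

Note that the OpenAI-style speech API usually names the text field "input" rather than "text"; whether xinference 0.12.3 expects "input" here is an assumption that should be checked against the server's own API docs.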
To Reproduce
To help us reproduce this bug, please provide the information below:
- Python version: 3.10.13
- xinference version: 0.12.3
- Versions of crucial packages: (not provided)
- Full stack of the error: not yet captured; see the log-collection sketch after this list
- Minimized code to reproduce the error: the curl command above
- Docker image: xprobe/xinference:v0.12.3
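Since the deployment runs from the xprobe/xinference:v0.12.3 image, the server-side traceback behind the 500 response can be collected from the container log; a minimal sketch, with <container_id> as a placeholder:

# find the running container started from the image
docker ps --filter 'ancestor=xprobe/xinference:v0.12.3'
# dump the most recent log lines right after the failing speech request
docker logs --tail 200 <container_id>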
Expected behavior
The speech should be returned as an mp3 audio file.