MeloTTS
Inference time on CPU
Synthesizing a sentence of 10 words takes about 12 s on average on CPU. Any ideas for improving inference time on CPU?
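To get a reliable number before optimizing, it helps to average the synthesis time over several runs rather than timing once. Here is a minimal, hedged sketch of such a harness; `synthesize` stands in for whatever model call you are timing (e.g. MeloTTS's `tts_to_file`) and is not part of any real API:

```python
import time
import statistics

def benchmark(fn, n_runs=5):
    """Call `fn` n_runs times and return the mean wall-clock seconds per call."""
    times = []
    for _ in range(n_runs):
        t0 = time.perf_counter()
        fn()  # e.g. lambda: synthesize("a ten word test sentence ...")
        times.append(time.perf_counter() - t0)
    return statistics.mean(times)
```

In practice you would also discard the first call (model warm-up and caching can dominate it) before averaging the rest.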
Hi @FollowingT, did you consider running inference on an integrated GPU? I created this PR to enable it; maybe you can take a look.
It is partly because this is a VITS architecture, but the real bottleneck is the BERT encoder (see the code). Encoding is slow, so try using plain VITS and see if it improves (it does).
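One way to check this claim on your own machine is to time the pipeline stages separately instead of end to end. The sketch below is generic: the stage names and callables are placeholders you would replace with the actual BERT-encoding and VITS-synthesis calls from the code, not functions MeloTTS exposes under these names:

```python
import time

def profile_stages(stages, repeats=3):
    """Time each named stage callable; return {name: mean seconds per call}."""
    results = {}
    for name, fn in stages.items():
        t0 = time.perf_counter()
        for _ in range(repeats):
            fn()  # replace with e.g. the BERT forward pass or the VITS decoder
        results[name] = (time.perf_counter() - t0) / repeats
    return results
```

If the BERT stage dominates the totals, that supports dropping it (plain VITS) or swapping in a lighter/quantized encoder.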