Yi Ren
Hi, we have no plan to support BZNSYP in the short term. It should be very easy to apply our models to other datasets in our framework. PortaSpeech does not...
Sure, you can convert our models to [ONNX](https://pytorch.org/docs/stable/onnx.html) and deploy them with [ONNXRuntime](https://github.com/microsoft/onnxruntime).
1. Add a Polish text processor (g2p, text normalization, ...) like `data_gen/tts/txt_processors/polish.py`.
2. Add a config file `datasets/audio/XXX_Dataset/ps_flow.yaml` with the following content:

```yaml
base_config:
  - egs/egs_bases/tts/ps_flow.yaml
raw_data_dir: data/raw/XXXX
processed_data_dir: data/processed/XXX...
```
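As a hypothetical sketch of the normalization half of step 1 (the function name and rules below are illustrative, not the framework's actual processor interface):

```python
import re

# Strip everything except word characters, whitespace, and apostrophes;
# \w with re.UNICODE keeps Polish diacritics (ą, ć, ę, ł, ń, ó, ś, ź, ż).
PUNCT_RE = re.compile(r"[^\w\s']", flags=re.UNICODE)

def normalize_polish(text: str) -> str:
    text = text.lower()
    text = PUNCT_RE.sub(" ", text)    # drop punctuation
    text = re.sub(r"\s+", " ", text)  # collapse runs of whitespace
    return text.strip()
```

A real processor would also expand numbers and abbreviations and add a g2p step on top of this.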
We didn't compare with VITS in our paper quantitatively. Perceptually, the sound quality and prosody of PortaSpeech and DiffSpeech are similar to those of VITS. The differences between our models and VITS...
> i think PortaSpeech + MB MelGAN can work on cpu very well, VITS need gpu to work. Yes, you can use other vocoders with PortaSpeech.
Not recently. We will inform you if other languages are added someday.
`inference/gradio/infer.py` is used for the [huggingface demo](https://huggingface.co/spaces/NATSpeech/PortaSpeech). If you want to run inference from the terminal, you can run this: `CUDA_VISIBLE_DEVICES=0 python inference/tts/ps_flow.py --exp_name ps_normal_exp`. The input text can be changed in...
Just edit `$sans-serif` in the file `blob/main/_sass/_variables.scss`.
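For example, assuming the variable follows the usual Jekyll-theme convention (the font stack below is illustrative):

```scss
// _sass/_variables.scss
$sans-serif: "Noto Sans SC", "Helvetica Neue", Helvetica, Arial, sans-serif;
```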