jeremy110

82 comments by jeremy110

Maybe you can try this: https://huggingface.co/pyannote/segmentation
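If you go that route, here is a minimal sketch of running voice activity detection with that segmentation model. It assumes pyannote.audio is installed and that you have access to the gated model on Hugging Face; the threshold values and file names are illustrative only.

```python
# Sketch: voice activity detection with the pyannote/segmentation model.
# Requires: pip install pyannote.audio, plus a Hugging Face token for the gated model.
from pyannote.audio import Model
from pyannote.audio.pipelines import VoiceActivityDetection

# "HF_TOKEN" and "audio.wav" are placeholders.
model = Model.from_pretrained("pyannote/segmentation", use_auth_token="HF_TOKEN")
pipeline = VoiceActivityDetection(segmentation=model)
pipeline.instantiate({
    "onset": 0.5, "offset": 0.5,   # activation thresholds (illustrative values)
    "min_duration_on": 0.0,        # drop speech regions shorter than this
    "min_duration_off": 0.0,       # fill non-speech gaps shorter than this
})

speech = pipeline("audio.wav")     # returns an Annotation of detected speech regions
for segment, _, _ in speech.itertracks(yield_label=True):
    print(f"{segment.start:.2f}s - {segment.end:.2f}s")
```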

How many hours did you train on? In my case, it took about 5~8 hours of data to train, but I haven't tried fewer hours. These are my loss images, training...

In my case, I used IPA to train a new language. You need to: 1. change the BERT model for your language, 2. change the g2p code, 3. add tones and...
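For step 2, a minimal, hypothetical sketch of an IPA-based g2p entry point could look like the following. This is not MeloTTS's actual code; it assumes the `phonemizer` package with an espeak backend, and the language code and zero-filled tones are placeholders you would replace for your language.

```python
# Hypothetical g2p sketch for a new language (illustration only).
from phonemizer import phonemize

def new_language_g2p(text: str):
    # Convert raw text to an IPA string; strip stress marks for simplicity.
    ipa = phonemize(text, language="en-us", backend="espeak", strip=True)
    # Naively treat each non-space IPA character as one phone; a real g2p
    # would also produce proper tone labels and a word-to-phone alignment.
    phones = [ch for ch in ipa if not ch.isspace()]
    tones = [0] * len(phones)  # placeholder: tonal languages need real tone values
    return phones, tones

print(new_language_g2p("hello world"))
```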

@jadechip hello~ I used two datasets: 1. one speaker, 10 hours; 2. 14 speakers, 30 minutes each. I trained for 300 epochs, loss 35~50. You can refer to https://github.com/myshell-ai/MeloTTS/issues/67

@jadechip Don't mention it. You can also refer to https://github.com/myshell-ai/MeloTTS/issues/83

@Yongbi9 New discussion in https://github.com/myshell-ai/MeloTTS/issues/120

@walletiger The basic architectures are mostly the same, but MeloTTS supports more languages and uses IPA. The latest BERT-VITS2 has added WavLM and emotion features to the basic architecture. Earlier versions...

First prepare metadata.list, then call preprocess_text.py; it will generate train.list and val.list. data_util.py reads those two files.
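Roughly, that split step can be illustrated like this. It is a simplified sketch, not the actual preprocess_text.py, and it assumes the common `wav_path|speaker|language|text` line format for metadata.list; the validation count is arbitrary.

```python
# Sketch of the metadata -> train.list / val.list split described above.
import random

def split_metadata(metadata_path="metadata.list", n_val=4, seed=0):
    with open(metadata_path, encoding="utf-8") as f:
        lines = [line.rstrip("\n") for line in f if line.strip()]
    random.Random(seed).shuffle(lines)
    val, train = lines[:n_val], lines[n_val:]
    # preprocess_text.py writes these two files; data_util.py loads them for training.
    with open("val.list", "w", encoding="utf-8") as f:
        f.write("\n".join(val) + "\n")
    with open("train.list", "w", encoding="utf-8") as f:
        f.write("\n".join(train) + "\n")

split_metadata()
```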

> May I ask, do you have any voices trained on Taiwanese-accented speech? The current Chinese model doesn't handle a Taiwanese accent well; it doesn't sound right. If you do, could you share the training method?

Sorry, the datasets I have are all private, so I can't share them with you. As for the training procedure, you can basically just follow the tutorial.

@zhjygit Sorry, I can't provide it. Also, there are almost no public datasets from Taiwan, so I can't give you a link either. G_.pth is the checkpoint from your training; if you haven't started training yet, you won't see it.