Arthur Wu
```
Traceback (most recent call last):
  File "/home/wac/fast-bert/single_classifier.py", line 76, in <module>
    optimizer_type="lamb")
  File "/home/wac/fast-bert/fast_bert/learner_cls.py", line 405, in fit
    results = self.validate()
  File "/home/wac/fast-bert/fast_bert/learner_cls.py", line 523, in validate
    all_logits, all_labels
  File...
```
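For context, here is a minimal sketch of the documented fast-bert call pattern that ends in this `fit()` call; the data paths, files, and learner settings below are placeholders, not taken from `single_classifier.py`:

```python
import logging
import torch
from fast_bert.data_cls import BertDataBunch
from fast_bert.learner_cls import BertLearner
from fast_bert.metrics import accuracy

# Placeholder data layout: train.csv / val.csv / labels.csv under data/ and labels/.
databunch = BertDataBunch("data/", "labels/",
                          tokenizer="bert-base-uncased",
                          train_file="train.csv", val_file="val.csv",
                          label_file="labels.csv",
                          text_col="text", label_col="label",
                          batch_size_per_gpu=16, max_seq_length=512,
                          multi_gpu=False, multi_label=False,
                          model_type="bert")

learner = BertLearner.from_pretrained_model(
    databunch,
    pretrained_path="bert-base-uncased",
    metrics=[{"name": "accuracy", "function": accuracy}],
    device=torch.device("cuda" if torch.cuda.is_available() else "cpu"),
    logger=logging.getLogger(),
    output_dir="output/",
    multi_gpu=False, is_fp16=False, multi_label=False)

learner.fit(epochs=4,
            lr=6e-5,
            validate=True,           # runs self.validate() after each epoch
            schedule_type="warmup_cosine",
            optimizer_type="lamb")   # the arg shown in the traceback
```

With `validate=True`, `fit()` calls `self.validate()` after each epoch, which is the frame where the traceback above ends.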
I have the same issue.
But the output in your paper is:

```
from g2pM import G2pM

model = G2pM()
model('今天来的目的是什么?')
# output: ['jin1', 'tian1', 'lai2', 'de5', 'mu4', 'di4', 'shi4', 'shen2', 'me5', '?']
```

(note the **'di4'** for the 的 in 目的)
Using CPM 2.1, I just tried generation, but the result is not good:

```
input: text = "圆圆的月儿天上挂,圆圆的月饼香天涯,圆圆的快乐美如花,圆圆的祝福到你家:美满日子玉润珠圆,幸福生活花好月圆,合家吉祥永团圆。 "
output: ('欢乐节日圆团建,每天健康、平安丽日长日久正,好好价钱苟得长辈羊肉,比肩为知友”,故以番语改编为庆生“。【美女同事】美人肯定是美的了,干脆用“爱司机”儿装个美女,然后自己夜不归宿,想着让家里人趁机感受。\n', True)
```

How can I improve this?
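The input is a round-moon / mooncake festival blessing, and the continuation above drifts off topic and loops on repeated phrases. Tightening the decoding parameters often helps with this failure mode. A minimal sketch, assuming a Hugging Face-style interface (the actual CPM 2.1 loading code may differ, and the model path is a placeholder):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/cpm-2.1"  # placeholder: point at your local CPM 2.1 checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

text = "圆圆的月儿天上挂,圆圆的月饼香天涯,圆圆的快乐美如花,圆圆的祝福到你家:美满日子玉润珠圆,幸福生活花好月圆,合家吉祥永团圆。"
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,         # lower temperature reduces random drift
    top_p=0.9,               # nucleus sampling trims the low-probability tail
    repetition_penalty=1.2,  # penalizes the repeated-phrase loops seen above
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```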
Yes, you have to add this arg to the config file. I provide a Chinese example model in this repo: https://github.com/wac81/vits_chinese
Should I retrain PWG starting from the base model, or train a new PWG model from scratch with the 50 sentences?
Is Multi-band MelGAN more suitable than PWG for small datasets like 50 sentences?
OK, thank you. Do I have to train a base model with MB-MelGAN + the PWG discriminator first, and then fine-tune it? I already have an MB-MelGAN base model.
I tried PWG fine-tuning from the base PWG model with 70 sentences, going from the default 400000 steps to 405000, but it still sounds like the original model. Any idea?
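One possible explanation: if the fine-tune was launched with `--resume checkpoint-400000steps.pkl`, the optimizer state and step counter are restored too, so stopping at 405000 means only 5000 effective steps at an already-decayed learning rate. (If your copy of kan-bayashi/ParallelWaveGAN has the `--pretrain` option, that loads only the weights and starts the counter fresh.) A quick sanity check on what a checkpoint carries, assuming that repo's usual checkpoint layout (the keys may differ in a fork):

```python
import torch

# Inspect a ParallelWaveGAN checkpoint saved with torch.save().
# Key names assume kan-bayashi/ParallelWaveGAN; adjust if your fork differs.
ckpt = torch.load("checkpoint-400000steps.pkl", map_location="cpu")
print(ckpt["steps"])  # a resumed run continues this counter, so training
                      # to 405000 adds only 5000 new steps
print(list(ckpt))     # typically: steps, epochs, model, optimizer, scheduler
```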
Maybe I still need to fine-tune the TTS model as well?