WaveRNN-TF
Training is quite slow?
Hi @MlWoo, I tried to train the model with the default configuration (batch size 2, max time steps 16000). It takes about 38 s per step. Is that normal? Could you please provide some audio samples? Thanks.
Hi, when synthesizing, loading the model always fails with `Key Encoder/Affine/bias_Affine not found in checkpoint`. How did you get restoring to work when you synthesize?
@QueenKeys I am sorry to say that this repo is just a practice exercise in writing a custom RNN cell in TF. I have not maintained it for a long time.
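A `Key ... not found in checkpoint` error usually means the synthesis graph creates its variables under different scope names than the training graph did, so the default restorer cannot match them. One common workaround is to restore with an explicit `var_list` that maps checkpoint names to graph names. Below is a minimal, pure-Python sketch of matching names by their unscoped suffix; the scope names here are illustrative assumptions, not the repo's actual variable names:

```python
def build_restore_map(graph_names, ckpt_names):
    """Match graph variables to checkpoint variables by their suffix
    after the outermost scope, e.g. 'Affine/bias_Affine'. The result
    can be passed (with real tf.Variable objects as values) to
    tf.train.Saver(var_list=...) to restore across a scope rename.
    Pure-Python sketch; assumes one unique match per suffix."""
    mapping = {}
    for g in graph_names:
        suffix = g.split("/", 1)[-1]  # drop the outermost scope
        for c in ckpt_names:
            if c.endswith(suffix):
                mapping[c] = g
                break
    return mapping

# Hypothetical example: training saved under 'model/', synthesis
# graph builds under 'Encoder/'.
restore_map = build_restore_map(
    ["Encoder/Affine/bias_Affine"],
    ["model/Affine/bias_Affine"],
)
```

To see what names the checkpoint actually contains, `tf.train.list_variables(checkpoint_path)` prints every stored key, which makes the mismatch easy to spot.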
@HallidayReadyOne With `training_batch_size: 48` and `max_time_steps: 13000`, it takes about 14 s/step on a 1080 Ti.
Actually, this is just a practice exercise with custom RNN cells in TF. I have no samples of it.
When synthesizing, do you run test.py directly? I saw that some parameters are stored during training — where in the code does that happen? I rewrote the synthesis code following the Tacotron approach, but it still cannot synthesize. If it is convenient, could you share your synthesis code? Thank you!
@QueenKeys I am ready to resume work on the repo. Clearly, we should implement the cell at the C level to speed up training; that will be nontrivial and hard. If you have any questions or make any progress, please tell me. Thanks a lot.
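For context on why a C-level kernel matters: a custom RNN cell in TF (or anywhere) is just a per-step recurrence unrolled in a loop, and the per-step Python/graph overhead dominates when the cell itself is small. A minimal NumPy sketch of a GRU-style recurrence, the kind of cell such a repo practices (gate equations shown here are the standard GRU, an assumption about the exact cell used):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, p):
    """One GRU step. p holds input weights W_*, recurrent weights U_*,
    and biases b_* for the update gate z, reset gate r, and candidate."""
    z = sigmoid(x @ p["W_z"] + h @ p["U_z"] + p["b_z"])          # update gate
    r = sigmoid(x @ p["W_r"] + h @ p["U_r"] + p["b_r"])          # reset gate
    h_tilde = np.tanh(x @ p["W_h"] + (r * h) @ p["U_h"] + p["b_h"])
    return (1.0 - z) * h + z * h_tilde                           # blend old/new state

def run_gru(xs, h0, p):
    """Unroll the cell over time. This sequential loop is exactly the
    part a fused C/CUDA kernel would replace to remove per-step overhead."""
    h = h0
    for x in xs:
        h = gru_step(x, h, p)
    return h
```

With all-zero weights and biases the gates sit at 0.5 and the candidate at 0, so each step simply halves the hidden state, which makes the recurrence easy to sanity-check.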