Dyongh613
Hi @keonlee9420, after implementing the linguistic encoder, the text is fed into the character embedding layer, and the output contains NaN. How can I solve this problem?
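A common first debugging step (a hedged sketch, not the repository's actual code; the layer sizes and token range here are illustrative) is to check the embedding output for NaNs right after the forward pass and to verify that all input indices are in range:

```python
import torch

# Illustrative embedding layer; num_embeddings/embedding_dim are made up.
embedding = torch.nn.Embedding(num_embeddings=100, embedding_dim=16, padding_idx=0)

tokens = torch.randint(0, 100, (2, 8))  # indices must be < num_embeddings
assert tokens.max().item() < embedding.num_embeddings, "out-of-range token id"

out = embedding(tokens)
# NaNs at this point usually come from corrupted weights (e.g. a diverged
# optimizer step) rather than from the lookup itself.
assert not torch.isnan(out).any(), "NaN in embedding output"
```

If the forward pass is clean, `torch.autograd.set_detect_anomaly(True)` can help locate the backward op that first produces NaN; also worth checking are the learning rate and any loss terms that can divide by zero.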
Hi @keonlee9420, I cannot understand the meaning of `inputs[11:]` in `model/loss.py`: `def forward(self, inputs, predictions, step): (mel_targets, *_,) = inputs[11:]`. Thank you very much!
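The line is plain Python slicing plus starred unpacking: `inputs[11:]` keeps every element of the tuple from index 11 onward, and `(mel_targets, *_,) = ...` binds the first of those to `mel_targets` while discarding the rest. A minimal stand-in (the tuple contents are illustrative, not the model's real inputs):

```python
# Stand-in for the model's input tuple; the real one holds ids, texts,
# mel targets, durations, etc.
inputs = tuple(f"field_{i}" for i in range(14))

# Same pattern as in model/loss.py: take everything from index 11 on,
# bind the first element, and discard the remainder via the starred `_`.
(mel_targets, *_) = inputs[11:]
print(mel_targets)  # field_11
```

So `mel_targets` is simply `inputs[11]`; the slice-then-unpack style just makes it easy to grab one field out of a long positional tuple.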
Could anyone share a pre-trained model trained on AISHELL3?
Hi @keonlee9420, I have some questions about the mel-spectrogram. In the picture, the alignment of the upper mel-spectrogram has been generated, but the horizontal details have not appeared...
PS D:\项目\WaveVAE-master> python train.py --model_name wavevae_1 --batch_size 4 --num_gpu 2
Traceback (most recent call last):
  File "train.py", line 10, in <module>
    import librosa
  File "C:\Users\12604\Anaconda3\envs\pytorch\lib\site-packages\librosa\__init__.py", line 209, in <module>
    from . import...
  File "train.py", line 122, in main
    model_update(model, step, G_loss, optG_fs2)
  File "train.py", line 77, in model_update
    loss = (loss / grad_acc_step).backward()
  File "C:\Users\12604\Anaconda3\envs\pytorch\lib\site-packages\torch\tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph,...
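One thing that stands out in the traceback is `loss = (loss / grad_acc_step).backward()`: `.backward()` returns `None`, so reassigning its result to `loss` destroys the tensor for any later use. A hedged sketch of the usual gradient-accumulation pattern (model, optimizer, and `grad_acc_step` value are illustrative, not the repository's actual training loop):

```python
import torch

model = torch.nn.Linear(4, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
grad_acc_step = 2  # number of micro-batches to accumulate per update

for step, x in enumerate(torch.randn(4, 4)):
    loss = model(x).pow(2).mean()
    # Scale the loss, then accumulate gradients; do NOT reassign the
    # return value of backward() (it is None).
    (loss / grad_acc_step).backward()
    if (step + 1) % grad_acc_step == 0:
        optimizer.step()
        optimizer.zero_grad()
```

If the actual error is about backward being called twice on the same graph, the fix is usually to recompute the forward pass per micro-batch (as above) rather than to pass `retain_graph=True`.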
Hi @keonlee9420, I encountered some problems during the training stage. The loss occasionally fluctuates wildly during training, jumping from around 3 up to tens or hundreds. After...
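Sudden loss spikes of this kind are often caused by exploding gradients on hard batches. A common mitigation, independent of this repository's code, is to clip the global gradient norm before each optimizer step; a minimal sketch (model, data, and `max_norm` are illustrative):

```python
import torch

model = torch.nn.Linear(8, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x, y = torch.randn(16, 8), torch.randn(16, 1)
loss = torch.nn.functional.mse_loss(model(x), y)
loss.backward()

# Rescale all gradients so their combined L2 norm is at most max_norm;
# returns the pre-clipping norm, which is useful to log for spike hunting.
grad_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
optimizer.zero_grad()
```

Logging `grad_norm` each step makes it easy to see whether the loss spikes coincide with gradient blow-ups; lowering the learning rate or warming it up are the other usual levers.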