Dyongh613

7 issues by Dyongh613

Hi @keonlee9420, after implementing the linguistic encoder, the text is passed through the character embedding layer, and the output contains NaN values. How can I solve this problem? ![image](https://user-images.githubusercontent.com/94910118/183089807-105bf080-6310-4489-baf7-e685016c6b61.png)
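A common way to narrow down where NaNs first appear is to register forward hooks on every submodule and flag the first output that contains a NaN. This is a generic debugging sketch, not code from this repository; the model and input names at the bottom are hypothetical.

```python
import torch

def install_nan_hooks(model):
    """Register forward hooks that report the first module whose output contains NaN."""
    def make_hook(name):
        def hook(module, inputs, output):
            if isinstance(output, torch.Tensor) and torch.isnan(output).any():
                print(f"NaN detected in output of module: {name}")
        return hook

    for name, module in model.named_modules():
        module.register_forward_hook(make_hook(name))

# Usage (names are hypothetical placeholders):
# install_nan_hooks(linguistic_encoder)
# _ = linguistic_encoder(text_batch)
```

If the embedding layer itself is the first module flagged, the usual suspects are out-of-range token indices or a learning rate large enough to blow up the embedding weights.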

Hi @keonlee9420, I cannot understand the meaning of `inputs[11:]` in `model.loss.py`:

```python
def forward(self, inputs, predictions, step):
    (
        mel_targets,
        *_,
    ) = inputs[11:]
```

Thank you very much!
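For readers unfamiliar with this pattern: `inputs[11:]` slices off everything from index 11 onward, and the starred target `*_` absorbs every element after the first, so `mel_targets` ends up holding `inputs[11]` and the rest is discarded. A minimal illustration with a made-up tuple:

```python
# Hypothetical stand-in for the real `inputs` tuple: 14 numbered items.
inputs = tuple(f"item{i}" for i in range(14))

# Equivalent to: ( mel_targets, *_, ) = inputs[11:]
mel_targets, *_ = inputs[11:]

print(mel_targets)  # "item11" -- the element at index 11
# The remaining elements ("item12", "item13") are collected into `_` and ignored.
```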

Can anyone share a pre-trained model for the AISHELL3 dataset?

Hi @keonlee9420, I have some questions about the mel-spectrogram. In the picture, ![image](https://user-images.githubusercontent.com/94910118/176336644-a71a4bae-117b-4557-9dfb-ec8b32ebe3f1.png) the alignment in the mel-spectrogram above has formed, but the horizontal details have not yet emerged...

```
PS D:\项目\WaveVAE-master> python train.py --model_name wavevae_1 --batch_size 4 --num_gpu 2
Traceback (most recent call last):
  File "train.py", line 10, in <module>
    import librosa
  File "C:\Users\12604\Anaconda3\envs\pytorch\lib\site-packages\librosa\__init__.py", line 209, in <module>
    from . import...
```

File "train.py", line 122, in main model_update(model, step, G_loss, optG_fs2) File "train.py", line 77, in model_update loss = (loss / grad_acc_step).backward() File "C:\Users\12604\Anaconda3\envs\pytorch\lib\site-packages\torch\tensor.py", line 221, in backward torch.autograd.backward(self, gradient, retain_graph,...

Hi @keonlee9420, I encountered some problems during the training stage. The loss occasionally fluctuates a lot during training, even jumping from around 3 to tens or hundreds. After...
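Loss spikes of this magnitude are often caused by exploding gradients, and a common mitigation is to clip the gradient norm before each optimizer step. This is a generic technique, not a confirmed fix for this repository; a minimal PyTorch sketch, where the clipping threshold is a hypothetical value to tune:

```python
import torch

def clipped_step(model, loss, optimizer, max_grad_norm=1.0):
    """One optimizer step with gradient clipping to suppress loss spikes."""
    optimizer.zero_grad()
    loss.backward()
    # Rescale gradients whose global norm exceeds max_grad_norm (hypothetical threshold).
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    optimizer.step()
```

Lowering the learning rate or warming it up more slowly are other common remedies when spikes persist.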