hifigan-denoiser

postnet parameters

Open ghost opened this issue 4 years ago • 8 comments

I noticed that the postnet filter size is 32, which makes the output a different shape than the input. Also, the dropout rate is so high that the postnet isn't learning anything meaningful. Is this intentional?
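(For illustration, not from the repo: with an even kernel size the usual padding = (kernel_size - 1) // 2 rule rounds down, so each convolution drops one frame.)

```python
# Minimal sketch of the mismatch; the (1, 80, 400) shapes are illustrative, not the repo's.
import torch
import torch.nn as nn

x = torch.randn(1, 80, 400)  # (batch, mel channels, frames)

# Even kernel: (32 - 1) // 2 = 15, so the output is one frame shorter than the input.
even = nn.Conv1d(80, 80, kernel_size=32, padding=(32 - 1) // 2)
print(even(x).shape)   # torch.Size([1, 80, 399])

# Odd kernel (e.g. 5, as in the Tacotron 2 postnet): the same formula preserves the length.
odd = nn.Conv1d(80, 80, kernel_size=5, padding=(5 - 1) // 2)
print(odd(x).shape)    # torch.Size([1, 80, 400])
```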

ghost avatar Apr 09 '21 03:04 ghost

@capavrulus I strictly followed the paper's details here, although you can change the dropout, as it isn't mentioned explicitly in the paper.

rishikksh20 avatar Apr 12 '21 05:04 rishikksh20

I have the same issue: after each Postnet layer the sequence length decreases, so y_g_hat ends up a different size than the ground truth and the two no longer match.


To handle this problem, I changed the padding to 'same' (supported in torch 1.9).
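(For reference, a minimal sketch, not taken from the repo: with PyTorch >= 1.9, padding='same' keeps the time dimension fixed for stride-1 convolutions, whatever the kernel size.)

```python
import torch
import torch.nn as nn

x = torch.randn(1, 80, 400)  # (batch, channels, frames); illustrative shapes

# 'same' padding (PyTorch >= 1.9, stride must be 1) preserves the sequence length
# even with an even kernel size like 32, so y_g_hat matches the ground-truth length.
conv = nn.Conv1d(80, 80, kernel_size=32, padding='same')
print(conv(x).shape)  # torch.Size([1, 80, 400])
```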

v-nhandt21 avatar Jan 04 '22 03:01 v-nhandt21

Hello @v-nhandt21, did you change the padding in every layer of the postnet model from padding=(n_filts - 1) // 2 to padding='same'? Thank you.

SupreethRao99 avatar Aug 16 '22 02:08 SupreethRao99

> Hello @v-nhandt21, did you change the padding in every layer of the postnet model from padding=(n_filts - 1) // 2 to padding='same'? Thank you.

Hi, have you tried this successfully? I ran into the same problem; the dimensions don't match.

velonica0 avatar Oct 04 '22 14:10 velonica0

@SupreethRao99 @velonica0 I can't remember exactly what I did in my code; I've since cleaned it up.

But you can check the padding in this ConvNorm class: https://www.tutorialexample.com/keeping-the-shape-of-input-and-output-same-in-pytorch-conv1d-pytorch-tutorial/

We could also try padding='same' to keep the same shape: https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html
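(If you're stuck on PyTorch < 1.9, where padding='same' isn't available, one workaround is to pad the time axis asymmetrically before an unpadded convolution. SameLengthConv1d below is a hypothetical helper sketched for illustration, not code from this repo.)

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SameLengthConv1d(nn.Module):
    """Hypothetical wrapper: emulates padding='same' for a stride-1 Conv1d on older PyTorch."""
    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        self.conv = nn.Conv1d(in_channels, out_channels, kernel_size)
        total = kernel_size - 1
        self.pad = (total // 2, total - total // 2)  # (left, right) frames of zero padding

    def forward(self, x):
        return self.conv(F.pad(x, self.pad))

x = torch.randn(1, 80, 400)
print(SameLengthConv1d(80, 80, 32)(x).shape)  # torch.Size([1, 80, 400])
```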

v-nhandt21 avatar Oct 05 '22 08:10 v-nhandt21

> Hello @v-nhandt21, did you change the padding in every layer of the postnet model from padding=(n_filts - 1) // 2 to padding='same'? Thank you.
>
> Hi, have you tried this successfully? I ran into the same problem; the dimensions don't match.

Yes, I think I was able to get past the issue, but the model's performance was poor to say the least. Even after training on the full dataset across multiple GPUs for the full 1 million training steps, performance didn't improve, which is why I moved on.

SupreethRao99 avatar Oct 05 '22 10:10 SupreethRao99

> @SupreethRao99 @velonica0 I can't remember exactly what I did in my code; I've since cleaned it up.
>
> But you can check the padding in this ConvNorm class: https://www.tutorialexample.com/keeping-the-shape-of-input-and-output-same-in-pytorch-conv1d-pytorch-tutorial/
>
> We could also try padding='same' to keep the same shape: https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html

Hi, thanks. I resorted to using padding='same' in PyTorch 1.12 to get past the issue.

SupreethRao99 avatar Oct 05 '22 10:10 SupreethRao99

@v-nhandt21 @SupreethRao99 Thank you for your help. I'm now training on my own Raman spectrum data for one million steps, and the Gen Loss Total is 4.7. How can I reduce the loss? Also, are there any papers or code that use GANs for denoising or restoring one-dimensional data? I'd like to learn from them, thank you very much.

velonica0 avatar Oct 06 '22 06:10 velonica0