a897456
The loss is produced in training_step and validation_step, but there is no mention of the loss in main(). How does backpropagation happen?
How do I validate the model?
The batch in your test_step() should differ from the one in training_step(), right? How do I test the trained model?
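For context on the backward question: in PyTorch Lightning, returning the loss from training_step is enough, because fit() calls backward() and the optimizer step for you. A minimal sketch of that loop, using a toy model (not the actual Lightning internals):

```python
import torch
from torch import nn

# Toy stand-ins for a LightningModule's model and configure_optimizers().
model = nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def training_step(batch):
    # What your LightningModule's training_step returns: just the loss tensor.
    x, y = batch
    return nn.functional.mse_loss(model(x), y)

x = torch.randn(8, 4)
y = torch.randn(8, 1)
before = model.weight.clone()

loss = training_step((x, y))
opt.zero_grad()
loss.backward()   # Lightning does this step for you inside fit()
opt.step()        # ...and this one

changed = not torch.equal(before, model.weight)  # weights were updated
```

So main() never needs to reference the loss by name; the Trainer takes whatever training_step returns and drives the backward pass with it.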
d_loss has not changed from epoch 1 to 35. What is the problem?
What's going on here? How can I reduce the memory usage?
How does this DiscriminatorP(2)/P(3)/.../P(11) module convert the one-dimensional data into 2-D data with period 2/3/.../11?
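A sketch of the reshape involved, in numpy for clarity: DiscriminatorP pads the 1-D waveform so its length is a multiple of the period p, then reshapes it to a 2-D (T/p, p) map that 2-D convolutions run over (the real HiFi-GAN code uses reflect padding and keeps batch/channel dims; this is the core idea only):

```python
import numpy as np

def periodize(wav, p):
    # Pad the time axis up to a multiple of the period p
    # (HiFi-GAN uses reflect padding; zero padding here for simplicity).
    pad = (-len(wav)) % p
    wav = np.pad(wav, (0, pad))
    # Fold the 1-D signal into a 2-D map of shape (T / p, p).
    return wav.reshape(-1, p)

x = np.arange(10, dtype=np.float32)
m2 = periodize(x, 2)  # shape (5, 2)
m3 = periodize(x, 3)  # padded to length 12 -> shape (4, 3)
```

So each DiscriminatorP(p) sees the same waveform folded with a different period, which is why the periods 2, 3, 5, 7, 11 are chosen to be coprime.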
(conv_pre): Conv1d(80, 512, kernel_size=(7,), stride=(1,), padding=(3,))
(0): ConvTranspose1d(512, 256, kernel_size=(16,), stride=(8,), padding=(4,))
(1): ConvTranspose1d(256, 128, kernel_size=(16,), stride=(8,), padding=(4,))
(2): ConvTranspose1d(128, 64, kernel_size=(4,), stride=(2,), padding=(1,))
(3): ConvTranspose1d(64, 32, kernel_size=(4,), stride=(2,), padding=(1,))
...
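One way to read this stack: each ConvTranspose1d above is configured so it multiplies the sequence length exactly by its stride, which you can verify with PyTorch's output-length formula. The strides 8 × 8 × 2 × 2 multiply to 256 samples per mel frame (the hop size). A small check, assuming 32 input frames as an example:

```python
# PyTorch ConvTranspose1d output length (output_padding = 0, dilation = 1):
#   L_out = (L_in - 1) * stride - 2 * padding + kernel_size
def convtranspose1d_len(L, kernel, stride, padding):
    return (L - 1) * stride - 2 * padding + kernel

L = 32  # e.g. 32 mel frames into conv_pre (Conv1d keeps length unchanged)
for kernel, stride, padding in [(16, 8, 4), (16, 8, 4), (4, 2, 1), (4, 2, 1)]:
    L = convtranspose1d_len(L, kernel, stride, padding)

# With kernel = 2 * stride and padding = stride / 2, each layer gives
# L_out = stride * L_in exactly, so the total factor is 8 * 8 * 2 * 2 = 256.
```

That is why kernel_size is twice the stride and padding is half the stride in each layer: the upsampling is an exact integer factor with no leftover samples.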
in models.py:

    self.resblocks = nn.ModuleList()
    for i in range(len(self.ups)):
        ch = h.upsample_initial_channel // (2 ** (i + 1))
        for j, (k, d) in enumerate(zip(h.resblock_kernel_sizes, h.resblock_dilation_sizes)):
            self.resblocks.append(resblock(h, ch, k, d))

    class ResBlock1(torch.nn.Module):
        def __init__(self, h, channels, kernel_size=3, ...
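The channel arithmetic in that loop is easy to check by hand: with upsample_initial_channel = 512 and four upsampling stages (as in the module printout above), the channel count halves after every ConvTranspose1d, and each ResBlock is built with the channel count of the stage it follows:

```python
# Channel count after each upsampling stage:
#   ch = upsample_initial_channel // 2 ** (i + 1)
upsample_initial_channel = 512
num_ups = 4  # matches the four ConvTranspose1d layers in the printout

chs = [upsample_initial_channel // (2 ** (i + 1)) for i in range(num_ups)]
# Each stage then gets one ResBlock per (kernel_size, dilation) pair,
# all operating at that stage's channel count.
```

These match the output channels of the ConvTranspose1d layers (256, 128, 64, 32), so the ResBlocks after each upsampling stage operate at the right width.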
How to generate mel-spectrograms in numpy format using Tacotron2 with teacher forcing?
Tacotron2 is too old and runs very slowly. Each epoch takes 2 hours, so 5000 epochs would take years.
What is this problem? I can't find a similar solution anywhere online.