same here!
yes! Maybe just G, D and GP. PS: I don't see PL in my logs; am I doing something wrong?
any updates here?
https://github.com/dunbar12138/Audiovisual-Synthesis/blob/master/model_vc.py#L631
```
codes, code_unsample = self.encoder(mel, speaker, return_unsample=True)
```
This removes the speaker embedding argument from all the `self.encoder` function calls. Apparently the encoder definition states it is defined without...
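For concreteness, here is a minimal sketch of what the call would look like with that argument dropped (this is my assumption based on the truncated encoder definition, not the repo's confirmed signature):

```python
# Original call at model_vc.py#L631:
# codes, code_unsample = self.encoder(mel, speaker, return_unsample=True)

# Assumed fix: drop the speaker embedding so the call matches an encoder
# that is defined without a speaker argument.
codes, code_unsample = self.encoder(mel, return_unsample=True)
```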
Have you changed the optimizer? What data are you training on?
If decreasing the dropout makes it better, that means it's working as expected... so no worries, it's all about hyperparameter tuning :) How many epochs have you trained the network for...
Hi, did you try decreasing the learning rate? That might just solve the issue, as I had said... before the curve that I showed you, my training curve was like this...
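In case it helps, a minimal sketch of what I mean by decreasing the learning rate (the model and optimizer below are stand-ins, not the repo's actual training script):

```python
import torch

model = torch.nn.Linear(80, 80)  # stand-in for the real network
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Scale the learning rate down (e.g. 10x) on the existing optimizer,
# keep everything else fixed, and watch whether the curve smooths out.
for param_group in optimizer.param_groups:
    param_group["lr"] *= 0.1
```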
If your dropout rate is high, essentially you are asking the network to suddenly unlearn stuff and relearn it using other examples. Decreasing the dropout makes sure not...
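To make that concrete, a rough sketch of what lowering the dropout looks like (the block below is a hypothetical stand-in module, not the actual model definition):

```python
import torch.nn as nn

# With a high p (e.g. 0.5), half the activations are zeroed every step,
# forcing the network to constantly "relearn" from the remaining units.
# Lowering p keeps more of what it has already learned intact.
block = nn.Sequential(
    nn.Linear(256, 256),
    nn.ReLU(),
    nn.Dropout(p=0.1),  # e.g. reduced from p=0.5
)
```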