sgan
D_data_loss and G_discriminator_loss don't change
As in the title, the adversarial losses don't change at all from 1.398 and 0.693 respectively after roughly epoch 2 until the end, though G_l2_loss does change. Any ideas what's wrong? I've tried changing the hyperparameters to those given for the pretrained models, as suggested in a previous thread.
I met this problem as well. Have you figured out what is wrong?
Have not :(
You could change the parameter 'l2_loss_weight'. Then the loss would change.
You mean reduce the weight of l2_loss? Would that encourage the adversarial loss to decrease?
I mean that you could change the default value of 'args.l2_loss_weight'.
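For example, assuming train.py exposes it as a command-line flag (the code refers to args.l2_loss_weight), you could pass something like `python train.py --l2_loss_weight 1` along with your usual arguments instead of editing the default in the source.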
However, D_data_loss and G_discriminator_loss still stop changing after several epochs, staying at 1.386 and 0.693, while the other losses keep changing.
Same question here. My loss doesn't change.
I found out this could be because the discriminator's activation function is ReLU: with this weight initialization the output can be 0 at the start, and since ReLU outputs 0 for all negative values, the gradient is 0 as well. Simply changing the discriminator's real_classifier activation function to LeakyReLU could help.
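For reference, here is a minimal, self-contained sketch (not the repo's exact code) of the kind of MLP builder used for the discriminator head, with the activation swapped from ReLU to LeakyReLU; the layer sizes are made up for illustration.

```python
import torch
import torch.nn as nn

def make_mlp(dim_list, activation="leakyrelu", dropout=0.0):
    """Simplified MLP builder in the spirit of the repo's make_mlp helper."""
    layers = []
    for dim_in, dim_out in zip(dim_list[:-1], dim_list[1:]):
        layers.append(nn.Linear(dim_in, dim_out))
        if activation == "relu":
            layers.append(nn.ReLU())
        elif activation == "leakyrelu":
            # A small negative slope keeps a nonzero gradient for negative
            # pre-activations, avoiding the all-zero-output / zero-gradient
            # situation described above.
            layers.append(nn.LeakyReLU(0.2))
        if dropout > 0:
            layers.append(nn.Dropout(p=dropout))
    return nn.Sequential(*layers)

# Hypothetical dimensions for the discriminator's real/fake classifier head.
real_classifier = make_mlp([64, 1024, 1], activation="leakyrelu")

scores = real_classifier(torch.randn(8, 64))  # -> (8, 1) real/fake scores
print(scores.shape)
```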
Change 'l2_loss_weight' to what?
Even if I replace ReLU with LeakyReLU, the losses basically do not change.
You can change the l2_loss_weight. It could help.
I have met the same problem. Even when I set the l2_loss_weight to 1, the adversarial losses still didn't change; they stayed at 1.386 and 0.693.
If you are running on the Windows operating system, all parameters should be based on run_traj.sh; you cannot run the train.py program directly.
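Since .sh scripts do not run natively on Windows, one workaround is to open run_traj.sh, copy the arguments it passes to train.py, and pass them to Python yourself, for example something along the lines of `python train.py --dataset_name zara1 --l2_loss_weight 1` (the flag names and values here are only illustrative; use whatever your copy of the script actually sets).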
> I found out this could be because the discriminator's activation function is ReLU: with this weight initialization the output can be 0 at the start, and since ReLU outputs 0 for all negative values, the gradient is 0 as well. Simply changing the discriminator's real_classifier activation function to LeakyReLU could help.
Yes, using LeakyReLU does help make the D_loss and G_loss change. But the evaluation results with ReLU and LeakyReLU seem to show no difference; both give reasonable results.
I changed ReLU to LeakyReLU in both the generator and the discriminator, and the loss did change, but it looks very strange. Is this normal?