D_data_loss and G_discriminator_loss don't change

agoodge opened this issue on Jul 01 '19 · 15 comments

As in the title, the adversarial losses don't change at all from 1.398 and 0.693 respectively after roughly epoch 2 until the end, though G_l2_loss does change. Any ideas what's wrong? I've tried changing the hyperparameters to those given in the pretrained models, as suggested in a previous thread.

agoodge · Jul 01 '19

I met this problem as well. Have you figured out what is wrong?

PhyllisH · Jul 02 '19

Have not :(

agoodge · Jul 03 '19

You could change the parameter 'l2_loss_weight'. Then the loss would change.

PhyllisH · Jul 03 '19

You mean reduce the weight of l2_loss? Would that encourage the adversarial loss to decrease?

agoodge · Jul 03 '19

I mean that you could change the default value of 'args.l2_loss_weight'.

PhyllisH · Jul 03 '19
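For context, l2_loss_weight scales the trajectory L2 term relative to the adversarial term in the generator's objective, so raising it makes the L2 term dominate. A runnable toy sketch of that weighting (the variable names here are illustrative, not sgan's exact ones):

```python
import torch
import torch.nn.functional as F

# Toy stand-ins for sgan's quantities; names are illustrative.
fake_scores = torch.zeros(8, 1)   # discriminator logits on generated trajectories
pred = torch.randn(8, 2)          # predicted relative displacements
target = torch.randn(8, 2)        # ground-truth relative displacements

# Adversarial term: the generator wants fakes classified as real.
g_adv_loss = F.binary_cross_entropy_with_logits(
    fake_scores, torch.ones_like(fake_scores))

# L2 term, scaled by the --l2_loss_weight flag.
l2_loss_weight = 1.0
g_total_loss = g_adv_loss + l2_loss_weight * F.mse_loss(pred, target)
print(g_total_loss.item())
```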

However, the D_data_loss and G_discriminator_loss do not change from 1.386 and 0.693 after several epochs, while the other losses keep changing.

PhyllisH · Jul 03 '19
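Those two numbers are not arbitrary: with binary cross-entropy, a discriminator that outputs 0.5 for every sample, i.e. one that has stopped learning, produces exactly these values. A quick check:

```python
import math

# BCE of a constant 0.5 prediction is -log(0.5) per term.
print(-2 * math.log(0.5))  # 1.3862... = real + fake terms (D_data_loss)
print(-math.log(0.5))      # 0.6931... = G_discriminator_loss
```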

Same question here. My loss doesn't change.

cuihenggang · Aug 20 '19

I found out this could be because the discriminator's activation function is ReLU: with the given weight initialization, the output can be 0 at the beginning, and since ReLU outputs 0 for all negative values, the gradient is 0 as well, so it never recovers. Simply changing the discriminator's real_classifier activation function to LeakyReLU could help.

JackFram · Nov 05 '19
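For reference, a minimal sketch of the suggested change, assuming a classifier head shaped like sgan's real_classifier (the dims are placeholders and the 0.2 slope is a common default, not necessarily sgan's):

```python
import torch.nn as nn

# Stand-in for the discriminator's scoring head. With plain ReLU, a bad
# initialization can leave every pre-activation negative, so the head
# outputs a constant and receives zero gradient (a "dead ReLU").
h_dim, mlp_dim = 64, 1024
real_classifier = nn.Sequential(
    nn.Linear(h_dim, mlp_dim),
    nn.LeakyReLU(0.2),  # was nn.ReLU(); leaks gradient for negative inputs
    nn.Linear(mlp_dim, 1),
)
```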

Change l2_loss_weight to what?

ZhoubinXM · Nov 12 '19

Even if I replace ReLU with LeakyReLU, the losses still basically do not change.

Yuliang-Zou · Nov 21 '19

You can change the l2_loss_weight. It could help.

ZhoubinXM · Nov 21 '19

I have met the same problem. Even if I set the l2_loss_weight to 1, the adversarial losses still didn't change; they stayed at 1.386 and 0.693.

zpp960807 · Jun 11 '20

If you are running on Windows, all parameters should be taken from run_traj.sh; you cannot run the train.py program directly.

buzhanpeng · Nov 22 '22

Yes, using LeakyReLU could help change the D_loss and G_loss. But the evaluation results with ReLU and LeakyReLU seem to show no difference; both of them give reasonable results.

neugzy · Mar 03 '24

I changed ReLU to LeakyReLU in both the generator and the discriminator. The losses did change, but the curves look very strange. Is this normal? [screenshot of loss curves]

sjx3906 · May 19 '24