
A question about the weight of the loss

Open syan1992 opened this issue 7 years ago • 8 comments

g_gan_loss = 1e-3 * tl.cost.sigmoid_cross_entropy(logits_fake, tf.ones_like(logits_fake), name='g')
mse_loss = tl.cost.mean_squared_error(net_g.outputs, t_target_image, is_mean=True)
vgg_loss = 2e-6 * tl.cost.mean_squared_error(vgg_predict_emb.outputs, vgg_target_emb.outputs, is_mean=True)

Could you please tell me why you set 1e-3 and 2e-6 as the weights of these losses? Thanks.

syan1992 · Jan 03 '18 01:01

It is for balancing the loss terms; you can find the answer in the paper.

zsdonghao · Jan 08 '18 17:01
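For readers who want to see that balance concretely, here is a minimal TensorFlow 2 sketch of how the three weighted terms could be assembled into the generator objective. It is not the repo's TensorLayer code; the argument names (logits_fake, sr, hr, sr_feat, hr_feat) are placeholders for the corresponding graph values.

import tensorflow as tf

def generator_loss(logits_fake, sr, hr, sr_feat, hr_feat):
    # Adversarial term: push discriminator logits toward the "real" label,
    # down-weighted by 1e-3
    bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
    g_gan_loss = 1e-3 * bce(tf.ones_like(logits_fake), logits_fake)
    # Pixel-wise MSE term (weight 1)
    mse_loss = tf.reduce_mean(tf.square(sr - hr))
    # Perceptual (VGG feature) MSE term, down-weighted by 2e-6
    vgg_loss = 2e-6 * tf.reduce_mean(tf.square(sr_feat - hr_feat))
    # Total generator objective, the same combination quoted later in this thread
    return mse_loss + vgg_loss + g_gan_loss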

g_loss = mse_loss + vgg_loss + g_gan_loss

Sorry for bothering. I had not noticed that the author combines the mse_loss with the vgg_loss. Does combining them give better performance?

normalworld · Jan 23 '18 13:01

Yes, it gives better performance.

wagamamaz · Mar 12 '18 14:03

The paper states that the VGG feature maps were rescaled by 1/12.75 which is equivalent to multiplying the VGG loss by approx 0.006. The value in the code is 2e-6 or 0.000002. Is there a reason for this?

cianohagan · May 08 '18 14:05
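Spelling out the arithmetic behind that 0.006 figure: because the error is squared, rescaling the feature maps by a factor k rescales the MSE between them by k**2, so a 1/12.75 rescaling of the features is equivalent to roughly a 0.006 weight on the VGG loss.

k = 1 / 12.75
print(k ** 2)   # ~0.00615, i.e. the ~0.006 equivalent weight implied by the paper
print(2e-6)     # the much smaller weight used in this repository's code

The gap between that 0.006 and the 2e-6 in the code is exactly what this comment asks about.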

Hello, is the regularization loss not included in this code?

XiaotianM · Jul 02 '18 06:07

Excuse me, I am confused by the weights of the losses. Have you ever tried other weights, and how much influence do the weights have on the results? Thank you in advance.

sdlpkxd · Aug 01 '18 09:08

Sorry for bothering. Why do the VGG feature maps have to be rescaled by 1/12.75? I am confused by this.

yuyuziliu · Aug 24 '19 03:08

@yuyuziliu the VGG is pre-trained on images scaled to the range 0 ~ 1.

zsdonghao · Aug 24 '19 06:08
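A minimal sketch of what that implies in practice, assuming (not confirmed in this thread) that the generator output lies in [-1, 1] and therefore needs remapping to [0, 1] before being fed to the pre-trained VGG:

def to_vgg_range(img):
    # Assumption: img is in [-1, 1]; map it to [0, 1] so it matches the
    # range the pre-trained VGG saw during training.
    return (img + 1.0) / 2.0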