SRGAN
A question about the weight of the loss
g_gan_loss = 1e-3 * tl.cost.sigmoid_cross_entropy(logits_fake, tf.ones_like(logits_fake), name='g')
mse_loss = tl.cost.mean_squared_error(net_g.outputs, t_target_image, is_mean=True)
vgg_loss = 2e-6 * tl.cost.mean_squared_error(vgg_predict_emb.outputs, vgg_target_emb.outputs, is_mean=True)
Could you please tell me why you set 1e-3 and 2e-6 as the weights of the losses? Thanks.
It is for balancing the loss terms; you can find the explanation in the paper.
g_loss = mse_loss + vgg_loss + g_gan_loss
Sorry to bother you. I did not notice that the author combines mse_loss with vgg_loss; does that give better performance?
Yes, it gives better performance.
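For anyone reading later, here is a minimal sketch of how the three weighted terms combine into the generator loss, written with plain TensorFlow 1.x ops instead of the repo's TensorLayer wrappers. The tensor names (logits_fake, sr_images, hr_images, vgg_sr_features, vgg_hr_features) are illustrative placeholders, not the repo's exact variables:

import tensorflow as tf

def generator_loss(logits_fake, sr_images, hr_images,
                   vgg_sr_features, vgg_hr_features):
    # Adversarial term: push the discriminator's logits on the SR
    # output toward "real" (all ones), down-weighted by 1e-3.
    g_gan_loss = 1e-3 * tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(
            logits=logits_fake, labels=tf.ones_like(logits_fake)))
    # Pixel-wise MSE between the SR output and the HR target.
    mse_loss = tf.reduce_mean(tf.squared_difference(sr_images, hr_images))
    # Perceptual term: MSE in VGG feature space, down-weighted by 2e-6.
    vgg_loss = 2e-6 * tf.reduce_mean(
        tf.squared_difference(vgg_sr_features, vgg_hr_features))
    return mse_loss + vgg_loss + g_gan_loss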
The paper states that the VGG feature maps were rescaled by 1/12.75, which, since the loss is a squared error, is equivalent to multiplying the VGG loss by approximately 0.006 (1/12.75² ≈ 0.00615). The value in the code is 2e-6, i.e. 0.000002. Is there a reason for this?
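For reference, here is the arithmetic behind this question (a quick check, assuming the paper's rescaling is applied to the feature maps before the squared error):

# Scaling the feature maps by 1/12.75 before a squared-error loss is
# the same as scaling the loss itself by (1/12.75)**2.
scale = 1.0 / 12.75
print(scale ** 2)          # ~0.00615, i.e. the paper's ~0.006
print(2e-6)                # the weight used in this repo's code
print(scale ** 2 / 2e-6)   # the two differ by a factor of ~3000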
Hello, is the regularization loss not included in this code?
Excuse me, I am confused by the weights of the losses. Have you ever tried other weights for the losses, and how much influence do the weights have on the results? Thank you in advance.
Sorry to bother you. Why do the VGG feature maps have to be rescaled by 1/12.75? I am confused by this.
@yuyuziliu the VGG is pre-trained on images scaled to the range 0 ~ 1.
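As an illustration of that answer (a sketch, assuming the generator uses a tanh output in [-1, 1]): the SR and HR images have to be mapped into the [0, 1] range the pre-trained VGG expects before feature extraction, for example:

def to_vgg_range(images):
    # Map tanh output in [-1, 1] to the [0, 1] range VGG was trained on.
    return (images + 1.0) / 2.0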