Adversarial loss
Hi, thank you for uploading the code. I asked you about another issue before, but I would like to ask one more question now that I have time to come back to cGAN. Regarding the adversarial loss: I read a couple of papers on related GANs, and they suggest minimizing the adversarial loss -log D(G(input)) (https://arxiv.org/pdf/1609.04802.pdf), which is the same as maximizing log D(G(input)), where D(G(input)) is the probability D assigns to the output being real rather than a counterfeit. In your implementation:
```python
L_adv = objectives.binary_crossentropy(y_true_flat, y_pred_flat)

# A to B loss
b_flat = K.batch_flatten(b)
bp_flat = K.batch_flatten(bp)
if is_b_binary:
    L_atob = objectives.binary_crossentropy(b_flat, bp_flat)
else:
    L_atob = K.mean(K.abs(b_flat - bp_flat))

return L_adv + alpha * L_atob
```
Shouldn't that be `-L_adv + alpha * L_atob`? I believe I am misunderstanding something here.
Best,
Hi,
Notice that, in train.py's pix2pix_generator function, I am feeding the real class to the model as y_true. We want the discriminator to output the real class for generated samples, so minimizing L_adv (the cross-entropy against the real label) is the same as minimizing -log D(G(input)). If you were instead feeding the fake class to the model as y_true, then yes, you would need to negate the term.
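To make the equivalence concrete, here is a minimal numerical sketch in plain NumPy (the `binary_crossentropy` helper and the sample values are hypothetical, just mirroring the elementwise definition Keras uses): with y_true fixed to the real label (all ones), the binary cross-entropy collapses to -log D(G(input)), so minimizing L_adv is exactly minimizing -log D(G(input)).

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Elementwise BCE: -[y*log(p) + (1-y)*log(1-p)]
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

# D(G(input)): the discriminator's probability that each generated
# sample is real (illustrative values, not from the actual model)
d_of_g = np.array([0.3, 0.7, 0.9])

# Feeding the REAL class (y_true = 1), as the generator training does:
real_label = np.ones_like(d_of_g)
L_adv = binary_crossentropy(real_label, d_of_g)

# With y_true = 1 the BCE reduces to -log D(G(input)):
assert np.allclose(L_adv, -np.log(d_of_g))
```

So no sign flip is needed: driving L_adv down already pushes D(G(input)) toward 1.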
Best regards,