Variations of Discriminative Loss and Generative Adversarial Loss
Hi, I have two questions:
- What is the general trend of the discriminative loss and the generative adversarial loss in super-resolution training?
- Is there any correlation specific to these two losses? Both use the same loss function (binary cross-entropy), but since the arguments passed to them are different, there could be some correlating factor between them.
While training my model, both of these functions return the same value. Does this happen in general? If not, please suggest a solution.
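For reference, the formulation I am using, written out from the implementation below (here $D(\cdot)$ is the discriminator output probability, $x_r$ a real HR image and $x_f$ a generated SR image), is:

$$
\mathcal{L}_D = -\log D(x_r) - \log\bigl(1 - D(x_f)\bigr), \qquad
\mathcal{L}_G = -\log D(x_f)
$$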
```python
def gan_loss_disc(self, out_disc_fake, out_disc_label):
    # Discriminator loss: fake outputs are pushed towards label 0, real outputs towards label 1.
    prob_fake = out_disc_fake
    prob_label = out_disc_label
    fake_label = self._get_label_var(prob_fake, is_real=False)
    loss_fake = F.binary_cross_entropy(prob_fake, fake_label)
    real_label = self._get_label_var(prob_label, is_real=True)
    loss_real = F.binary_cross_entropy(prob_label, real_label)
    # print(loss_fake, loss_real)
    return loss_fake + loss_real

def gan_loss_gen(self, out_disc_fake):
    # Generator adversarial loss: fake outputs are pushed towards label 1 ("real").
    prob_fake = out_disc_fake
    real_label = self._get_label_var(prob_fake, is_real=True)
    loss_fake = F.binary_cross_entropy(prob_fake, real_label)
    # print(loss_fake)
    return loss_fake
```
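To illustrate the question numerically, here is a minimal standalone sketch (plain PyTorch, with made-up discriminator output probabilities rather than values from my actual training run) that evaluates both losses on the same discriminator outputs:

```python
import torch
import torch.nn.functional as F

# Made-up example discriminator outputs: probability that the input is real.
prob_fake = torch.tensor([0.3, 0.4, 0.5])  # D(G(x)) for generated SR images
prob_real = torch.tensor([0.7, 0.8, 0.6])  # D(x)    for real HR images

# Discriminator loss: fake outputs towards 0, real outputs towards 1.
loss_disc = (F.binary_cross_entropy(prob_fake, torch.zeros_like(prob_fake))
             + F.binary_cross_entropy(prob_real, torch.ones_like(prob_real)))

# Generator adversarial loss: fake outputs towards 1.
loss_gen = F.binary_cross_entropy(prob_fake, torch.ones_like(prob_fake))

print(loss_disc.item(), loss_gen.item())
# In general the two values differ; e.g. if D outputs 0.5 everywhere,
# loss_disc = 2*log(2) ≈ 1.386 while loss_gen = log(2) ≈ 0.693.
```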