generative_adversarial_networks_101
Why do we always use `img_real` to train the generator weights in CCGAN?
In CCGAN, we feed `img_real` and the generated fake images to train the discriminator weights. After that, we train the generator weights. I think we should use `[masked_imgs, real]` as the input/target pair for the generator step, but you use `img_real` instead. When the input is `img_real`, how can the generator learn anything about generating a new picture? Could you explain the reason? Thanks.
```python
for e in range(epochs + 1):
    for i in range(len(X_train) // batch_size):
        # Train Discriminator weights
        discriminator.trainable = True

        # Real samples
        img_real = X_train[i*batch_size:(i+1)*batch_size]
        real_labels = y_train[i*batch_size:(i+1)*batch_size]
        d_loss_real = discriminator.train_on_batch(x=img_real, y=[real, real_labels])

        # Fake samples
        masked_imgs = mask_randomly(img_real)
        gen_imgs = generator.predict(masked_imgs)
        d_loss_fake = discriminator.train_on_batch(x=gen_imgs, y=[fake, fake_labels])

        # Discriminator loss
        d_loss_batch = 0.5 * (d_loss_real[0] + d_loss_fake[0])

        # Train Generator weights
        discriminator.trainable = False
        d_g_loss_batch = d_g.train_on_batch(x=img_real, y=real)
        # =============> d_g_loss_batch = d_g.train_on_batch(x=masked_imgs, y=real)
```
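To make the change I am proposing concrete, here is a minimal, self-contained sketch of the generator step with masked inputs. The stub models and the `mask_randomly` helper below are toy stand-ins I wrote for illustration, not the notebook's actual architecture (for example, the real discriminator also predicts class labels, which I omit here); only the last two lines show the change I am suggesting.

```python
import numpy as np
from tensorflow.keras.layers import Input, Conv2D, Flatten, Dense
from tensorflow.keras.models import Model, Sequential

# Toy shapes for illustration only (the notebook's actual models differ).
img_shape = (32, 32, 3)
batch_size = 4

# Stub generator: maps a masked image to a completed image.
generator = Sequential([
    Conv2D(8, 3, padding='same', activation='relu', input_shape=img_shape),
    Conv2D(3, 3, padding='same', activation='sigmoid'),
])

# Stub discriminator with a single validity output (the notebook's
# discriminator also outputs class labels, omitted to keep this short).
discriminator = Sequential([
    Flatten(input_shape=img_shape),
    Dense(1, activation='sigmoid'),
])
discriminator.compile(loss='binary_crossentropy', optimizer='adam')

# Combined model: masked image -> generator -> frozen discriminator.
discriminator.trainable = False
masked_input = Input(shape=img_shape)
validity = discriminator(generator(masked_input))
d_g = Model(masked_input, validity)
d_g.compile(loss='binary_crossentropy', optimizer='adam')

def mask_randomly(imgs, size=8):
    """Zero out a random size x size patch in each image."""
    out = imgs.copy()
    for img in out:
        y = np.random.randint(0, img.shape[0] - size)
        x = np.random.randint(0, img.shape[1] - size)
        img[y:y+size, x:x+size, :] = 0.0
    return out

# Generator step with *masked* inputs, as proposed above: the generator
# must inpaint the missing patch well enough to fool the discriminator.
img_real = np.random.rand(batch_size, *img_shape).astype('float32')
real = np.ones((batch_size, 1))
masked_imgs = mask_randomly(img_real)
d_g_loss = d_g.train_on_batch(x=masked_imgs, y=real)
```

With `x=masked_imgs`, the generator sees an image with a hole and is rewarded for filling it in convincingly; with `x=img_real`, it would only learn to reproduce an already-complete input, which is why the current line confuses me.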