
dcgan loss

Open eyaler opened this issue 7 years ago • 3 comments

In https://github.com/roatienza/Deep-Learning-Experiments/blob/master/Experiments/Tensorflow/GAN/dcgan_mnist.py you compute the generator loss as `a_loss = self.adversarial.train_on_batch(noise, y)`, but this also trains the discriminator, using only the fake samples. Shouldn't you freeze the discriminator weights for this step?

eyaler avatar Oct 28 '17 17:10 eyaler

@eyaler exactly my doubt

harshtikuu avatar Jun 16 '18 21:06 harshtikuu

Yeah... you can change `self.AM.add(self.discriminator())` in `adversarial_model()` to this:

        dc = self.discriminator()
        # Freeze every discriminator layer so the adversarial
        # (generator) training step cannot update its weights.
        for layer in dc.layers:
            layer.trainable = False
        self.AM.add(dc)

You'll get a warning, but the discriminator will be frozen for `a_loss = self.adversarial.train_on_batch(noise, y)`.

I verified the change with this instrumentation code:

    print("before adversarial.train " + str(keras.backend.eval(self.adversarial.layers[1].layers[0].weights[0][0][0][0][0])))
    a_loss = self.adversarial.train_on_batch(noise, y)
    print("after  adversarial.train " + str(keras.backend.eval(self.adversarial.layers[1].layers[0].weights[0][0][0][0][0])))
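The freezing-and-verifying idea above can be sketched framework-free with a toy NumPy example (the two-parameter "models", the toy loss, and all names here are illustrative stand-ins, not code from the repo). The `trainable` flag plays the role of Keras's `layer.trainable`: gradients still flow *through* the frozen discriminator to the generator, but its own weights are not updated.

```python
import numpy as np

# Toy stand-ins for generator and discriminator: one weight each,
# plus a trainable flag mimicking Keras's layer.trainable switch.
gen = {"w": np.array([0.5]), "trainable": True}
disc = {"w": np.array([1.5]), "trainable": True}

def adversarial_step(gen, disc, lr=0.1):
    """One combined (generator) training step on a fake sample.
    Gradients flow through the discriminator, but only parameters
    whose trainable flag is set actually get updated."""
    z = 1.0                                   # toy latent input
    out = disc["w"] * (gen["w"] * z)          # d(g(z))
    grad_out = 2.0 * (out - 1.0)              # dL/dout for loss (out - 1)^2
    grad_gen = grad_out * disc["w"] * z       # dL/dw_g (chain rule)
    grad_disc = grad_out * gen["w"] * z       # dL/dw_d (chain rule)
    if gen["trainable"]:
        gen["w"] = gen["w"] - lr * grad_gen
    if disc["trainable"]:
        disc["w"] = disc["w"] - lr * grad_disc

# Freeze the discriminator before the adversarial step, as suggested above.
disc["trainable"] = False
w_before = disc["w"].copy()
adversarial_step(gen, disc)

# The generator moved, the frozen discriminator did not.
assert np.allclose(disc["w"], w_before)
assert not np.allclose(gen["w"], 0.5)
```

Without the freeze, the same step would also pull `disc["w"]` toward labeling fakes as real, which is exactly the leakage the instrumentation prints above are checking for.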

hmaon avatar Oct 21 '18 00:10 hmaon

> in https://github.com/roatienza/Deep-Learning-Experiments/blob/master/Experiments/Tensorflow/GAN/dcgan_mnist.py you compute the generator loss as: a_loss = self.adversarial.train_on_batch(noise, y) but this also trains the discriminator using only the fake samples. shouldn't you freeze the discriminator weights for this part?

You're right, we should freeze the discriminator.

elk-cloner avatar Apr 09 '19 08:04 elk-cloner