bigan
Query regarding objective function
We are trying to get a latent representation of real data by passing it through an encoder. After passing the real data x through the encoder, we pass both x and its latent representation E(x) to the discriminator, which classifies the example as real or fake. The output of the discriminator can be thought of as the probability that an example is real. So the parameters of the encoder should be updated in a way that maximizes the discriminator output; that way the encoder would learn a good latent representation of the real data. But the paper says we need to minimize the discriminator output with respect to the parameters of the encoder. Why is that?
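For reference, the objective in the BiGAN paper (Donahue et al., 2016) is the minimax game

$$\min_{G,E} \max_{D} V(D, E, G) = \mathbb{E}_{x \sim p_X}\!\left[\log D(x, E(x))\right] + \mathbb{E}_{z \sim p_Z}\!\left[\log\left(1 - D(G(z), z)\right)\right],$$

so the encoder E appears under the min together with the generator G, not opposite it.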
Please forgive me if my understanding is wrong.
I am confused about the same thing.
Did you find out why the paper uses a minimum for the encoder loss?
Thanks.
I just got a hint and figured out why.
The key objective of the encoder is to learn a mapping from real data x back to the noise distribution, which is the opposite of the generator's mapping. Since the encoder sits on the same side of the minimax game as the generator, it minimizes the value function: driving D(x, E(x)) down pushes the joint distribution of (x, E(x)) pairs toward that of (G(z), z) pairs, which is what forces E to invert G.
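Here is a minimal sketch of one BiGAN update step, assuming PyTorch; the network shapes, learning rates, and random batch are illustrative placeholders, not taken from this repo.

```python
# A minimal BiGAN update sketch (assumes PyTorch; all names/shapes are illustrative).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784

E = nn.Sequential(nn.Linear(data_dim, latent_dim))   # encoder: x -> z
G = nn.Sequential(nn.Linear(latent_dim, data_dim))   # generator: z -> x
D = nn.Sequential(                                   # joint discriminator on (x, z) pairs
    nn.Linear(data_dim + latent_dim, 1), nn.Sigmoid()
)

opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
opt_ge = torch.optim.Adam(list(G.parameters()) + list(E.parameters()), lr=2e-4)
bce = nn.BCELoss()

x = torch.randn(32, data_dim)      # stand-in for a real data batch
z = torch.randn(32, latent_dim)    # noise drawn from the prior

# Discriminator step: label (x, E(x)) pairs as 1 and (G(z), z) pairs as 0.
d_real = D(torch.cat([x, E(x).detach()], dim=1))
d_fake = D(torch.cat([G(z).detach(), z], dim=1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator/encoder step: flip the labels. The encoder term pushes
# D(x, E(x)) toward 0 -- i.e., it MINIMIZES the discriminator output
# on its own pairs, while the generator pushes D(G(z), z) toward 1.
# Together this drives the two joint distributions to match.
d_real = D(torch.cat([x, E(x)], dim=1))
d_fake = D(torch.cat([G(z), z], dim=1))
loss_ge = bce(d_real, torch.zeros_like(d_real)) + bce(d_fake, torch.ones_like(d_fake))
opt_ge.zero_grad(); loss_ge.backward(); opt_ge.step()
```

Written with flipped labels like this, the encoder term `bce(d_real, zeros)` is exactly "minimize the discriminator's output on (x, E(x))", which is the minimization the paper describes.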