BGAN
Discrete CelebA Experiment
Hi, I'm trying to port this library from Theano to TensorFlow. Is there a way to know the parameters used for the CelebA experiment?
And second, why is disconnected_grad applied after sampling from the multinomial distribution in multi_loss?
Thank you!
It's been a while since I've looked at this code, but disconnected_grad is a trick for specifying a surrogate loss whose gradient matches the importance-sampled version.
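For anyone hitting the same question while porting: below is a minimal, illustrative sketch (not the repository's actual multi_loss) of the surrogate-loss pattern that disconnected_grad enables in Theano; tf.stop_gradient plays the same role in TensorFlow. The tensor names (logits, samples, d_scores) and the batch-wise weight normalization are assumptions for the example, not the library's exact choices.

```python
# Sketch of the stop-gradient surrogate-loss trick (assumed names/shapes).
import tensorflow as tf

def importance_sampled_surrogate(logits, samples, d_scores):
    """Surrogate loss whose gradient matches the importance-sampled estimator.

    logits:   generator outputs over the discrete values, shape [batch, n_classes]
    samples:  integer samples drawn from the multinomial, shape [batch]
    d_scores: discriminator outputs for those samples, shape [batch]
    """
    # Importance weights derived from the discriminator (normalization here is
    # illustrative; the actual BGAN loss may normalize differently).
    w = tf.nn.softmax(d_scores)
    # Block gradients through the weights, as disconnected_grad does in Theano,
    # so only the log-probability term is differentiated.
    w = tf.stop_gradient(w)
    # log p(sample) under the generator's multinomial distribution.
    log_p = -tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=samples, logits=logits)
    # d(loss)/d(theta) = -E[ w * d(log p)/d(theta) ], the importance-sampled gradient.
    return -tf.reduce_sum(w * log_p)
```

Because the weights are detached, differentiating the surrogate only touches the log-probability term, which is why the resulting gradient looks like the importance-sampled estimator.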
OK, thanks. In any case, we have successfully made a working TensorFlow version.
Hey, I am also interested in an example for discretized CelebA. Any way you could make the TensorFlow version available, @PierfrancescoArdino?