ganhacks
Need help understanding #1 and #16
Hi, I have a question regarding some of the points in the list:
- Regarding point 1, is there a reason for normalizing the inputs to the (-1, 1) range? Is it because tanh is used as the generator's output activation? (A small sketch of this follows the list.)
- Regarding point 16, I see that many GANs still use concatenation-style conditioning, e.g. CGAN uses plain concatenation while StarGAN extends it to depth-wise concatenation. What makes an Embedding layer preferable to these other conditioning methods? (See the second sketch below.)
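For point 1, here is a minimal PyTorch sketch of what I understand the trick to mean, assuming a torchvision MNIST pipeline (the dataset, transform values, and `to_unit_range` helper are mine for illustration): real images are rescaled to [-1, 1] so they live in the same range as the tanh output of the generator.

```python
import torch
from torchvision import datasets, transforms

# Scale images from [0, 1] to [-1, 1] so real samples share the
# tanh output range of the generator.
transform = transforms.Compose([
    transforms.ToTensor(),                 # uint8 [0, 255] -> float [0.0, 1.0]
    transforms.Normalize((0.5,), (0.5,)),  # [0.0, 1.0] -> [-1.0, 1.0]
])

dataset = datasets.MNIST(root="data", train=True, download=True,
                         transform=transform)

# The generator's last activation is tanh, so fake samples also lie
# in [-1, 1]; to visualize either real or fake images, map back:
def to_unit_range(x: torch.Tensor) -> torch.Tensor:
    return (x + 1.0) / 2.0
```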
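And for point 16, here is a rough sketch of what I think embedding-based conditioning looks like, under my own assumptions (the class names, dimensions, and architecture are hypothetical, not from the repo): the label index goes through a learned `nn.Embedding` before being concatenated with the noise vector, rather than being concatenated as a fixed one-hot code or a tiled label map.

```python
import torch
import torch.nn as nn

class ConditionedGenerator(nn.Module):
    """Generator conditioned via a learned label embedding.

    Instead of concatenating a one-hot vector (plain concat, as in CGAN)
    or tiling a label map along channels (depth-concat, as in StarGAN),
    the label index is first mapped to a dense, learned vector.
    """
    def __init__(self, n_classes: int = 10, z_dim: int = 100,
                 embed_dim: int = 50):
        super().__init__()
        self.label_embed = nn.Embedding(n_classes, embed_dim)
        self.net = nn.Sequential(
            nn.Linear(z_dim + embed_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 784),
            nn.Tanh(),  # output in [-1, 1], matching the normalized inputs
        )

    def forward(self, z: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # The embedding is still concatenated with z, but it is a learned,
        # dense representation rather than a fixed sparse one-hot code.
        cond = self.label_embed(labels)
        return self.net(torch.cat([z, cond], dim=1))

# Usage sketch
g = ConditionedGenerator()
z = torch.randn(4, 100)
y = torch.randint(0, 10, (4,))
fake = g(z, y)  # shape: (4, 784)
```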
At NIPS 2016, while discussing trick #16, Soumith mentions someone whose name sounds like "Michel Matthew". I am trying to find out what he was referring to. Can anyone help?
That's Michael Mathieu, from DeepMind.