autoencoders
KL Divergence
When I look at your function at ae.py#L138 and compare it to https://stat.duke.edu/courses/Spring09/sta205/lec/hoef.pdf (in the middle of page 2), it seems the bracketing should be different:
return p * T.log(p) - T.log(p_hat) + (1 - p) * T.log(1 - p) - (1 - p) * T.log(1 - p_hat)
should be
return p * (T.log(p) - T.log(p_hat)) + (1 - p) * (T.log(1 - p) - T.log(1 - p_hat))
Please be gentle if I've missed something...
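For reference, here is a minimal standalone sketch of the corrected penalty in Theano; the function name kl_divergence and the reading of p as the target sparsity and p_hat as the mean hidden activation are my assumptions, not taken from ae.py:

import theano.tensor as T

# Elementwise KL(p || p_hat) between two Bernoulli distributions.
# p: target sparsity, p_hat: mean hidden activation; both assumed strictly in (0, 1).
def kl_divergence(p, p_hat):
    return p * (T.log(p) - T.log(p_hat)) + (1 - p) * (T.log(1 - p) - T.log(1 - p_hat))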
Yes, I agree with you. To be honest, I found that the file has more than one bug. Maybe these are caglar's challenges for us, haha!
Hi, I'm also running this code. Can you show me the format of your test dataset mnist_all.pickle? Or where can I find it?
Great, thanks!