tensorflow_stacked_denoising_autoencoder
About the sparse regularization term
```python
self.cost = 0.5 * tf.reduce_sum(tf.pow(tf.subtract(self.reconstruction, self.x), 2.0)) + \
    self.sparse_reg * self.kl_divergence(self.sparsity_level, self.hidden_encode[-1])

def kl_divergence(self, p, p_hat):
    return tf.reduce_mean(
        p * tf.log(tf.clip_by_value(p, 1e-8, tf.reduce_max(p)))
        - p * tf.log(tf.clip_by_value(p_hat, 1e-8, tf.reduce_max(p_hat)))
        + (1 - p) * tf.log(tf.clip_by_value(1 - p, 1e-8, tf.reduce_max(1 - p)))
        - (1 - p) * tf.log(tf.clip_by_value(1 - p_hat, 1e-8, tf.reduce_max(1 - p_hat))))
```
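For reference, the standard sparse-autoencoder penalty (e.g. in Andrew Ng's sparse autoencoder lecture notes) is

$$\sum_{j} \mathrm{KL}\!\left(\rho \,\middle\|\, \hat\rho_j\right) = \sum_{j}\left[\rho \log\frac{\rho}{\hat\rho_j} + (1-\rho)\log\frac{1-\rho}{1-\hat\rho_j}\right],$$

where $\rho$ is the target sparsity level (`p`, i.e. `self.sparsity_level`) and $\hat\rho_j$ is the mean activation of hidden unit $j$ over the training batch. The code above instead applies the formula elementwise to the raw per-example activations and averages the result, which is the point raised below.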
I think `hidden_encode[-1]` should be averaged along the batch axis first, and the result then used to compute the KL divergence with `p`. Otherwise every individual element of `hidden_encode[-1]` is pushed toward `p` (e.g. 0.1), rather than just the mean activation of each hidden unit.
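A minimal sketch of the suggested change, kept in the same TF1-style API as the snippet above (written as a free function for readability; in the repo it would stay a method, and the batch-axis average is the proposed behaviour, not what the code currently does):

```python
import tensorflow as tf  # TF1-style API (tf.log), matching the repo

def kl_divergence(p, p_hat):
    # p_hat: hidden activations, shape [batch_size, n_hidden].
    # Average over the batch axis first, so rho_hat_j is the *mean*
    # activation of hidden unit j rather than a per-example value.
    p_hat = tf.reduce_mean(p_hat, axis=0)  # shape: [n_hidden]
    # Standard penalty KL(p || rho_hat_j), summed over hidden units;
    # clipping guards against log(0) for sigmoid outputs in (0, 1).
    return tf.reduce_sum(
        p * tf.log(tf.clip_by_value(p, 1e-8, 1.0))
        - p * tf.log(tf.clip_by_value(p_hat, 1e-8, 1.0))
        + (1 - p) * tf.log(tf.clip_by_value(1 - p, 1e-8, 1.0))
        - (1 - p) * tf.log(tf.clip_by_value(1 - p_hat, 1e-8, 1.0)))
```

With this version the penalty only drives the average activation of each unit toward `p`, so individual activations can still vary strongly across examples, which is what makes the learned code sparse rather than uniformly small.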