Autoencoders-Variants
Why does sparse encoding using KL divergence give the same 0 neurons for two different labels?
Hi @syorami, all,
I hope to better understand sparse encoding conceptually, and an answer to this question would help. Why do two different classes have the same neurons contributing to sparsity (the neurons with 0 you see below, i.e. in positions 1, 8, 12, 21, 25, 26, 27)? I would assume the goal of the sparsity constraint is that only the neurons most sensitive to a specific class show high activation while the rest are zeroed out; thus different neurons should be pushed to 0 for different classes, not the same ones. If that is not the case, then what is the benefit of applying a sparsity constraint to the neurons?
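For context, by "sparsity constraint" I mean the usual KL penalty between a target activation rate rho and each hidden unit's mean activation over the batch. A minimal PyTorch sketch (the name `kl_sparsity_penalty` and the default `rho=0.05` are mine, not from the repo; it assumes sigmoid hidden activations in (0, 1)):

```python
import torch

def kl_sparsity_penalty(hidden, rho=0.05, eps=1e-8):
    """Sum over hidden units j of KL(rho || rho_hat_j).

    hidden: (batch, n_hidden) activations, assumed to lie in (0, 1),
    e.g. the output of a sigmoid layer.
    """
    # Mean activation of each hidden unit over the batch, clamped away
    # from 0 and 1 so the logs below stay finite.
    rho_hat = hidden.mean(dim=0).clamp(eps, 1 - eps)
    kl = rho * torch.log(rho / rho_hat) \
        + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))
    return kl.sum()
```

The total training loss would then be the reconstruction loss plus some weight beta times this penalty.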
For class 5 of MNIST:
[[ 0.0000, 11.8318, 16.0555, 3.6289, 15.2786, 7.6066, 7.1188, 0.0000,
8.3991, 13.2328, 7.3709, 0.0000, 13.2765, 14.9766, 13.9301, 12.1690,
8.8316, 10.5639, 15.9207, 11.9792, 0.0000, 7.9138, 6.3603, 8.8305,
0.0000, 0.0000, 0.0000, 10.5949, 11.8915, 16.3092, 10.0973, 9.6339]]
For class 1 of MNIST:
[[ 0.0000, 13.2143, 1.6391, 10.2586, 3.8103, 6.0026, 3.5844, 0.0000,
3.9776, 9.9163, 2.1478, 0.0000, 8.2941, 3.7168, 1.9532, 3.7498,
3.7483, 5.6161, 9.1870, 5.3832, 0.0000, 5.4140, 0.8535, 6.2867,
0.0000, 0.0000, 0.0000, 5.9366, 5.6120, 6.0602, 9.4863, 8.5068]]
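For reference, vectors like the two above can be collected by aggregating the hidden activations over all test images of a class. A minimal sketch, assuming a trained `encoder` that returns hidden activations and a `loader` yielding (images, labels) batches; the function name `class_mean_hidden` is mine, and I average here whereas the numbers above may be sums:

```python
import torch

@torch.no_grad()
def class_mean_hidden(encoder, loader, target_class, device="cpu"):
    # Mean hidden-activation vector over all images of one class.
    total, count = None, 0
    for images, labels in loader:
        mask = labels == target_class
        if mask.any():
            h = encoder(images[mask].to(device))  # (n, n_hidden)
            s = h.sum(dim=0)
            total = s if total is None else total + s
            count += int(mask.sum())
    # Assumes the loader contains at least one sample of target_class.
    return total / count
```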