HierarchicalTemporalMemory.jl
Image encoder: CNN or sparse autoencoder
Numenta's latest paper, "Grid Cell Path Integration For Movement-Based Visual Object Recognition", implements an encoder for MNIST. In the paper they use a CNN with a k-winners layer. I wonder if a sparse autoencoder could do the same job even better.
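For reference, here is a rough Flux.jl sketch of what a CNN encoder ending in a k-winners layer could look like for MNIST. The layer sizes, the choice of `k`, and the `kwinners` helper are my own placeholders, not the paper's exact architecture, and the sketch only covers the forward pass:

```julia
using Flux

# k-winners-take-all: keep the k largest activations per sample (column),
# zero everything else. Simplified; the paper's k-winners also uses boosting.
function kwinners(x::AbstractMatrix, k::Int)
    out = zero(x)
    for j in axes(x, 2)                                   # one column per sample
        idx = partialsortperm(view(x, :, j), 1:k; rev=true)
        out[idx, j] .= view(x, idx, j)
    end
    return out
end

# Rough CNN encoder for 28×28×1 MNIST digits producing a sparse, SDR-like code.
encoder = Chain(
    Conv((3, 3), 1 => 16, relu),
    MaxPool((2, 2)),
    Conv((3, 3), 16 => 32, relu),
    MaxPool((2, 2)),
    Flux.flatten,
    Dense(32 * 5 * 5, 256),
    x -> kwinners(x, 20),                                 # ~8% of 256 units active
)

code = encoder(rand(Float32, 28, 28, 1, 1))               # 256×1, at most 20 nonzeros
```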
Would a sparse autoencoder work with a k-winners activation for the latent space, instead of regularization in the objective?
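A minimal sketch of that idea, reusing the `kwinners` helper above: sparsity is enforced structurally by the k-winners activation on the latent layer, so the objective can stay a plain reconstruction loss with no L1/KL penalty. Sizes and `k` are arbitrary, and training through the hard top-k selection would need some care (e.g. a straight-through-style gradient rule), which this sketch omits:

```julia
using Flux

encode = Dense(28 * 28, 256)              # 784 pixels -> 256 latent units
decode = Dense(256, 28 * 28, sigmoid)     # reconstruct pixel intensities

# At most k latent units are nonzero by construction.
sparse_ae(x; k = 20) = decode(kwinners(encode(x), k))

# Plain reconstruction objective; no sparsity regularizer in the loss.
loss(x) = Flux.Losses.mse(sparse_ae(x), x)
```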