tensorflow-fcwta
TensorFlow implementation of a fully-connected winner-take-all (FC-WTA) autoencoder, as described in "Winner-Take-All Autoencoders" (2015) by Alireza Makhzani and Brendan Frey at the University of Toronto.
See train_digits.py and train_mnist.py for example code.
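The core operation in an FC-WTA autoencoder is the lifetime sparsity constraint described in the paper: for each hidden unit, only its top k activations across the minibatch (k set by the sparsity level) are kept, and the rest are zeroed before decoding. Below is a minimal TensorFlow sketch of that step; it illustrates the idea and is not a copy of this repository's implementation.

```python
import tensorflow as tf

def lifetime_sparsity(h, sparsity=0.05):
    """Keep, for each hidden unit, only its top-k activations across the
    minibatch (k = sparsity * batch_size) and zero out the rest.

    h: float tensor of shape [batch_size, num_hidden].
    """
    batch_size = tf.shape(h)[0]
    k = tf.maximum(1, tf.cast(tf.cast(batch_size, tf.float32) * sparsity, tf.int32))
    # top_k operates on the last axis, so transpose to [num_hidden, batch_size].
    values, _ = tf.math.top_k(tf.transpose(h), k=k)
    # The k-th largest activation of each unit becomes that unit's threshold.
    thresholds = values[:, -1]                # shape [num_hidden]
    mask = tf.cast(h >= thresholds, h.dtype)  # broadcasts over the batch axis
    return h * mask
```

Masking the activations, rather than gathering indices, keeps the operation differentiable with respect to the surviving activations, so gradients flow only through the winning units during training.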
Example images
The following images are created by train_mnist.py, which trains an FC-WTA autoencoder on the MNIST digits dataset with 5% sparsity and 2000 hidden units.
This plot compares the original images (top row) to the autoencoder's reconstructions (bottom row):

This one shows the autoencoder's learned code dictionary:

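A dictionary plot like this one can be produced by reshaping each hidden unit's decoder weights back into a 28x28 image. The sketch below assumes a `decoder_weights` array of shape [num_hidden, 784] extracted from the trained model; the name is illustrative, not part of the repository's API.

```python
import matplotlib.pyplot as plt

def plot_dictionary(decoder_weights, rows=8, cols=16):
    """Show the first rows*cols dictionary atoms as 28x28 grayscale images."""
    fig, axes = plt.subplots(rows, cols, figsize=(cols, rows))
    for i, ax in enumerate(axes.flat):
        ax.imshow(decoder_weights[i].reshape(28, 28), cmap="gray")
        ax.axis("off")
    plt.show()
```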
Finally, here are t-SNE plots of the original data (left) and the featurized data (right):

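Plots like these can be reproduced with scikit-learn's t-SNE. A rough sketch, assuming `images` holds the flattened pixels, `codes` holds the corresponding encoder outputs, and `labels` holds the digit classes:

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_tsne(images, codes, labels):
    """Plot 2-D t-SNE embeddings of raw pixels (left) and learned codes (right)."""
    fig, axes = plt.subplots(1, 2, figsize=(12, 6))
    for ax, data, title in zip(axes, (images, codes), ("original", "featurized")):
        emb = TSNE(n_components=2).fit_transform(data)
        ax.scatter(emb[:, 0], emb[:, 1], c=labels, s=3, cmap="tab10")
        ax.set_title(title)
    plt.show()
```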
A linear SVM trained on the featurized data achieves roughly 98.6% classification accuracy, close to the 98.8% reported in the original paper by Makhzani and Frey.
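A minimal sketch of that evaluation, assuming a hypothetical `encode` function that maps raw images to the autoencoder's hidden codes (the actual featurization lives in the example scripts):

```python
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

def evaluate_features(encode, x_train, y_train, x_test, y_test):
    """Train a linear SVM on encoded features and report test accuracy."""
    train_codes = encode(x_train)  # shape [n_train, num_hidden]
    test_codes = encode(x_test)    # shape [n_test, num_hidden]
    clf = LinearSVC(C=1.0)
    clf.fit(train_codes, y_train)
    return accuracy_score(y_test, clf.predict(test_codes))
```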