All-Convnet-Autoencoder-Example
deconvolving and autoencoders
I'm looking at the so-called "deconvolution" from http://www.matthewzeiler.com/pubs/arxive2013/eccv2014.pdf, and at the code you have for autoencoders, which convolves, pools, and then reverses the process. What is the difference between this autoencoder and the Zeiler technique?
One thing to note is that my pooling is nothing more than 1 by 1 conv layers, so it is an all-conv network (no max pooling or anything). To answer your question though, conv transpose is actually the same thing as the deconvolution from Zeiler's papers. There is a nice discussion I found explaining why they call it conv transpose in TensorFlow: https://github.com/tensorflow/tensorflow/issues/256. Hope this helps!
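The naming makes sense once you write convolution as a matrix multiply. Here's a minimal NumPy sketch (the 1-D kernel and sizes are arbitrary illustrative choices, not from the repo's code): a valid convolution is `y = C @ x` for a banded matrix `C`, and "conv transpose" is just multiplying by `C.T`, which maps the small output shape back up to the large input shape — exactly the upsampling role Zeiler's deconvolution plays.

```python
import numpy as np

def conv_matrix(kernel, input_len):
    """Build the (output_len, input_len) matrix of a valid 1-D convolution
    (stride 1, no padding), so that conv(x) == C @ x."""
    k = len(kernel)
    output_len = input_len - k + 1
    C = np.zeros((output_len, input_len))
    for i in range(output_len):
        C[i, i:i + k] = kernel  # slide the kernel one step per output row
    return C

kernel = np.array([1.0, 2.0, 3.0])
x = np.arange(6, dtype=float)   # input signal of length 6

C = conv_matrix(kernel, len(x))
y = C @ x                        # forward conv: length 6 -> length 4
x_up = C.T @ y                   # "conv transpose": length 4 -> back to length 6

print(y.shape)      # (4,)
print(x_up.shape)   # (6,)
```

Note that `C.T @ y` restores the *shape* of `x`, not its values — conv transpose is a learnable upsampling operation, not a true inverse, which is why "deconvolution" is considered a misleading name in the TensorFlow thread linked above.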