
sparse_autoencoder_l1: does the L1 constraint really make the representation sparse?

Open menglin0320 opened this issue 5 years ago • 4 comments

Did you find a paper, or run any empirical experiment, showing that simply adding an L1 loss on the hidden representation actually encourages sparsity in that representation?
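For concreteness, the penalty being asked about puts the L1 norm on the hidden activations themselves (the representation), not on the weights. A minimal NumPy sketch of such a loss; the function name and toy values are hypothetical, not from the repo:

```python
import numpy as np

def l1_sparse_loss(x, x_recon, h, lam=0.1):
    """Reconstruction error plus an L1 penalty on the hidden
    activations h (the representation), not on the weights."""
    recon = np.mean((x - x_recon) ** 2)   # mean squared reconstruction error
    sparsity = lam * np.mean(np.abs(h))   # L1 norm of the representation
    return recon + sparsity

# Toy example: perfect reconstruction, non-zero hidden code.
x = np.array([1.0, 0.0])
h = np.array([0.5, 0.0, 0.5])
loss = l1_sparse_loss(x, x, h)
```

The open question in this thread is whether minimizing this term during training actually drives most entries of `h` toward zero.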

menglin0320 avatar Mar 27 '19 17:03 menglin0320

Sorry for the late reply; I missed the notifications. I couldn't find the original paper where this kind of analysis was introduced, but I hope this one ("Why Regularized Auto-Encoders Learn Sparse Representation?") can help you.

syorami avatar Apr 17 '19 02:04 syorami

There is a difference between sparsity on the parameters and sparsity on the representation. The sparse autoencoder proposed by Andrew Ng learns a sparse representation, whereas it is well known that L1 regularization encourages sparsity on the parameters. After posting the question I looked at your graph again and realized it shows sparsity on the parameters; then everything makes sense. But please don't cite that paper in your repo: it talks about sparsity on the representation, which would only confuse people more. My guess is that the L1 loss doesn't encourage sparsity on the representation, but you can print out the hidden representation to check whether that's true.
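One quick way to run that check is to measure the fraction of (near-)zero entries in the hidden code after training. A small sketch, with hypothetical names and toy values:

```python
import numpy as np

def activation_sparsity(h, tol=1e-3):
    """Fraction of hidden activations that are (near) zero: a direct
    check of sparsity on the representation, as opposed to sparsity
    on the parameters (weights)."""
    return float(np.mean(np.abs(h) < tol))

# A dense code scores near 0, a sparse one scores near 1.
dense = np.array([0.4, -0.3, 0.7, 0.2])   # no near-zero units
sparse = np.array([0.0, 0.0, 0.9, 0.0])   # three of four units are zero
```

Running the same function on the weight matrices versus the activations would make the parameter-vs-representation distinction above concrete.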

menglin0320 avatar Apr 17 '19 13:04 menglin0320

Interesting. I thought sparsity on the representation meant the same thing as sparsity on the parameters. I'll try to figure it out; sorry, I'm quite busy these days.

syorami avatar Apr 25 '19 10:04 syorami

What about the Sparse Autoencoder (KL divergence) here? Is there a paper for that one?

meshiguge avatar May 02 '19 21:05 meshiguge
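For reference, the KL-divergence variant comes from Andrew Ng's CS294A lecture notes ("Sparse Autoencoder"): each hidden unit j is pushed so that its mean activation over the batch, rho_hat_j, matches a small target rho. A minimal NumPy sketch of that penalty (function name and defaults are illustrative):

```python
import numpy as np

def kl_sparsity_penalty(h_batch, rho=0.05, eps=1e-8):
    """Sum over hidden units j of KL(rho || rho_hat_j), where
    rho_hat_j is unit j's mean activation over the batch.
    Assumes sigmoid-style activations in [0, 1]; eps avoids log(0)."""
    rho_hat = np.clip(h_batch.mean(axis=0), eps, 1 - eps)
    return float(np.sum(rho * np.log(rho / rho_hat)
                        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))))
```

The penalty is zero exactly when every unit's mean activation equals rho, and grows as the mean activations drift toward dense firing, which is what directly targets sparsity on the representation.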