vae-pytorch
AE and VAE Playground in PyTorch
AE and VAE Playground
Disclaimer: VAE coming soon...
Remarks
The last activation of the decoder layer, the loss function, and the normalization scheme used on the training data are crucial for obtaining good reconstructions and preventing exploding negative losses.
- If the data range is [-1, 1], then a `tanh` activation with an MSE loss does a good reconstruction job.
- If the data range is [0, 1], then a `sigmoid` activation with a binary cross entropy loss does a good reconstruction job.
I assume that by matching the output activation's range to the data's normalization range, we make it easier for the autoencoder's output to reproduce the input distribution.
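The pairings above can be sketched as follows. This is a minimal illustration, not code from the repo: the latent size, layer widths, and batch shapes are assumptions.

```python
import torch
import torch.nn as nn

# Data normalized to [-1, 1]: tanh output paired with MSE loss.
decoder_tanh = nn.Sequential(nn.Linear(32, 784), nn.Tanh())
mse_loss = nn.MSELoss()

# Data normalized to [0, 1]: sigmoid output paired with binary cross entropy.
decoder_sigmoid = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid())
bce_loss = nn.BCELoss()

code = torch.randn(4, 32)                  # a batch of latent codes
target_pm1 = torch.rand(4, 784) * 2 - 1    # fake targets in [-1, 1]
target_01 = torch.rand(4, 784)             # fake targets in [0, 1]

# Both losses stay finite and non-negative because the output range
# matches what the loss expects (BCE in particular needs inputs in (0, 1)).
loss_a = mse_loss(decoder_tanh(code), target_pm1)
loss_b = bce_loss(decoder_sigmoid(code), target_01)
```

Mixing these up (e.g. BCE on a `tanh` output, which can be negative) is one way to get the exploding negative losses mentioned above.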
Simple fully-connected autoencoder (MSE)
Simple fully-connected autoencoder with tanh (MSE)
Simple fully-connected autoencoder (BCE)
Simple fully-connected autoencoder with tanh and L1 regularization (MSE)
Stacked 6 layer autoencoder (MSE)
Stacked 6 layer autoencoder with tanh (MSE)
Stacked 6 layer autoencoder (BCE)
Convolutional autoencoder with tanh (MSE)
Convolutional autoencoder (BCE)
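As a concrete reference for the simplest variant above (fully-connected autoencoder with `tanh` and MSE), a minimal training step might look like this. The layer sizes, optimizer, and learning rate are illustrative assumptions, not the repo's actual configuration.

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    # Hypothetical sizes for 28x28 images flattened to 784 features.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(784, 128), nn.ReLU(),
            nn.Linear(128, 32),
        )
        self.decoder = nn.Sequential(
            nn.Linear(32, 128), nn.ReLU(),
            nn.Linear(128, 784), nn.Tanh(),  # outputs in [-1, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = Autoencoder()
criterion = nn.MSELoss()                # matches the tanh output range
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.rand(16, 784) * 2 - 1         # fake batch normalized to [-1, 1]
recon = model(x)
loss = criterion(recon, x)              # reconstruct the input itself
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The other experiments swap in deeper stacks, convolutional layers, a `sigmoid`/BCE pairing, or an L1 penalty on the activations, but follow the same loop.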
References
`to_img(x)` function taken from pytorch-beginner.
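For context, a helper of this kind typically undoes the [-1, 1] normalization and reshapes flat decoder outputs back into image tensors. The sketch below shows that common pattern; it is an assumption about the helper's behavior, not the exact pytorch-beginner source.

```python
import torch

def to_img(x):
    # Assumed behavior: map [-1, 1] outputs back to [0, 1] pixel values
    # and reshape a (N, 784) batch into (N, 1, 28, 28) grayscale images.
    x = 0.5 * (x + 1)
    x = x.clamp(0, 1)
    x = x.view(x.size(0), 1, 28, 28)
    return x
```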