
AE and VAE Playground in PyTorch

Disclaimer: the VAE implementation is coming soon...

Remarks

The final activation of the decoder, the loss function, and the normalization scheme applied to the training data are crucial for obtaining good reconstructions and for keeping the loss from exploding or going negative (as binary cross entropy does when its inputs fall outside [0, 1]).

  • If the data is normalized to [-1, 1], a tanh output activation paired with an MSE loss reconstructs well.
  • If the data is normalized to [0, 1], a sigmoid output activation paired with a binary cross entropy loss reconstructs well.
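A minimal sketch of the two pairings above (the random data and the 784-dimensional size are illustrative, not taken from the repo):

```python
import torch
import torch.nn as nn

# Toy pre-activation decoder outputs for a batch of flattened 28x28 images.
logits = torch.randn(8, 784)

# Data in [-1, 1]: tanh output + MSE loss.
target_tanh = torch.rand(8, 784) * 2 - 1   # targets in [-1, 1]
recon_tanh = torch.tanh(logits)            # outputs also land in [-1, 1]
mse = nn.MSELoss()(recon_tanh, target_tanh)

# Data in [0, 1]: sigmoid output + binary cross entropy loss.
target_bce = torch.rand(8, 784)            # targets in [0, 1]
recon_bce = torch.sigmoid(logits)          # outputs also land in [0, 1]
bce = nn.BCELoss()(recon_bce, target_bce)
```

Because the activation's range matches the targets' range, both losses stay finite and well-behaved; feeding out-of-range values to `BCELoss` is what produces the degenerate losses mentioned above.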

I assume that by matching the activation function's output range to the range of the normalized inputs, we make it easier for the autoencoder's output to reproduce the input distribution.
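For concreteness, the two normalization schemes can be produced from raw pixel values like so (the [0, 255] pixel range is the usual image convention, assumed here rather than taken from the repo):

```python
import torch

# Fake batch of raw images with integer pixel values in [0, 255].
pixels = torch.randint(0, 256, (4, 784)).float()

x01 = pixels / 255.0        # [0, 1]  -> pair with sigmoid + BCE
x11 = (x01 - 0.5) / 0.5     # [-1, 1] -> pair with tanh + MSE
```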

Simple fully-connected autoencoder (MSE)

*(figure: reconstruction results)*
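A sketch of what this variant might look like; the 784 → 32 → 784 layer sizes are assumptions, not the repo's exact architecture. With no final activation, the output is unbounded and the plain MSE loss is used directly:

```python
import torch
import torch.nn as nn

class SimpleAutoencoder(nn.Module):
    """Single-hidden-layer autoencoder; sizes are illustrative assumptions."""

    def __init__(self, in_dim=784, hidden=32):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden)
        self.decoder = nn.Linear(hidden, in_dim)

    def forward(self, x):
        z = torch.relu(self.encoder(x))
        return self.decoder(z)  # no output activation in this MSE variant

model = SimpleAutoencoder()
x = torch.randn(16, 784)
recon = model(x)
loss = nn.MSELoss()(recon, x)
```

The tanh and sigmoid variants below differ only in the activation appended after the decoder (and, for BCE, in the loss).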

Simple fully-connected autoencoder with tanh (MSE)

*(figure: reconstruction results)*

Simple fully-connected autoencoder (BCE)

*(figure: reconstruction results)*

Simple fully-connected autoencoder with tanh and L1 regularization (MSE)

*(figure: reconstruction results)*

Stacked 6-layer autoencoder (MSE)

*(figure: reconstruction results)*
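Reading "6 layer" as three linear layers per half, the stacked variant might look like the following; the widths 256/64/16 are assumptions, not the repo's exact values:

```python
import torch
import torch.nn as nn

stacked = nn.Sequential(
    # encoder: 784 -> 256 -> 64 -> 16
    nn.Linear(784, 256), nn.ReLU(),
    nn.Linear(256, 64), nn.ReLU(),
    nn.Linear(64, 16), nn.ReLU(),
    # decoder: 16 -> 64 -> 256 -> 784
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 256), nn.ReLU(),
    nn.Linear(256, 784),  # append Tanh()/Sigmoid() per variant
)

x = torch.rand(8, 784)
out = stacked(x)
```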

Stacked 6-layer autoencoder with tanh (MSE)

*(figure: reconstruction results)*

Stacked 6-layer autoencoder (BCE)

*(figure: reconstruction results)*

Convolutional autoencoder with tanh (MSE)

*(figure: reconstruction results)*
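A possible sketch of the convolutional variant for 28x28 single-channel images; the channel counts and kernel sizes are assumptions, not the repo's exact values. The final tanh keeps outputs in [-1, 1] for the MSE pairing:

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Convolutional autoencoder sketch; shapes assume 1x28x28 inputs."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 28x28 -> 14x14
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 14x14 -> 7x7
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2,
                               padding=1, output_padding=1),  # 7x7 -> 14x14
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 3, stride=2,
                               padding=1, output_padding=1),  # 14x14 -> 28x28
            nn.Tanh(),  # outputs in [-1, 1], matching the tanh/MSE pairing
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ConvAutoencoder()
x = torch.randn(4, 1, 28, 28)
recon = model(x)
```

The BCE variant below would swap the final `Tanh` for a `Sigmoid` and use `BCELoss` on data normalized to [0, 1].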

Convolutional autoencoder (BCE)

*(figure: reconstruction results)*
