
Paper - Variational Autoencoder with Learned Latent Structure

NicolaBernini opened this issue 4 years ago · 0 comments

Overview

Variational Autoencoder with Learned Latent Structure

Arxiv: https://arxiv.org/abs/2006.10597


The manifold hypothesis states that high-dimensional data can be modeled as lying on or near a low-dimensional, nonlinear manifold.

  • Manifold Hypothesis: the data points in a dataset lie on (or near) a specific low-dimensional manifold embedded in the input space

Variational Autoencoders (VAEs) approximate this manifold by learning mappings from low-dimensional latent vectors to high-dimensional data while encouraging a global structure in the latent space through the use of a specified prior distribution.

  • A VAE tries to learn this manifold by building an (approximately invertible) mapping between this low-dimensional subspace and the input space
  • The latent space is also shaped by a specified prior PDF
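The two ingredients above can be sketched concretely: an encoder produces a diagonal Gaussian posterior, samples are drawn via the reparameterization trick, and a KL term pulls that posterior toward the fixed prior. The encoder below is a toy stand-in (not the paper's network); only the KL and reparameterization formulas are standard.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x):
    # Hypothetical encoder: maps an input to the mean and log-variance of
    # a diagonal Gaussian over a 2-D latent space (a stand-in for a network).
    mu = x[:2]
    log_var = np.full(2, -1.0)
    return mu, log_var

def reparameterize(mu, log_var):
    # z = mu + sigma * eps, so gradients can flow through mu and sigma.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ): the term that shapes the
    # latent space toward the specified prior.
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

x = rng.standard_normal(4)
mu, log_var = encode(x)
z = reparameterize(mu, log_var)
```

The KL term is exactly where a fixed prior exerts its influence, which is why a mismatched prior distorts the learned manifold.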

When this prior does not match the structure of the true data manifold, it can lead to a less accurate model of the data.

  • Observed, for example, in disentangled representation learning with VAEs: using a factorizing prior introduces a trade-off between reconstruction quality and disentanglement
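The trade-off mentioned in the bullet above is visible in the β-VAE-style objective, where a weight β on the KL term trades reconstruction against conformity to the factorized prior. A minimal sketch (the numeric values are illustrative):

```python
def beta_vae_loss(recon_error, kl, beta):
    # beta > 1 upweights the KL term, encouraging a factorized
    # (disentangled) posterior at the cost of reconstruction quality.
    return recon_error + beta * kl

recon, kl = 10.0, 2.0
loss_plain = beta_vae_loss(recon, kl, beta=1.0)  # standard VAE objective
loss_beta = beta_vae_loss(recon, kl, beta=4.0)   # stronger prior pressure
```

With a fixed prior, the only lever is β; VAELLS instead changes the prior itself so the pressure points toward the true manifold.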

To resolve this mismatch, we introduce the Variational Autoencoder with Learned Latent Structure (VAELLS) which incorporates a learnable manifold model into the latent space of a VAE.

  • Key idea: learn the prior from the data instead of fixing it in advance

This enables us to learn the nonlinear manifold structure from the data and use that structure to define a prior in the latent space. The integration of a latent manifold model not only ensures that our prior is well-matched to the data, but also allows us to define generative transformation paths in the latent space and describe class manifolds by transformations stemming from examples of each class. We validate our model on examples with known latent structure and also demonstrate its capabilities on a real-world dataset.
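The "generative transformation paths" mentioned in the abstract can be sketched with transport-operator-style dynamics: a path through the latent space is traced by the matrix exponential of a learned operator applied to a starting point. Everything below (the `mat_exp` helper, the operator `Psi`, the 2-D latent) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def mat_exp(A, terms=25):
    # Truncated power series for the matrix exponential expm(A);
    # adequate here because ||A|| is small.
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

# Hypothetical learned operator Psi playing the role of the manifold
# model that VAELLS attaches to the latent space.
Psi = 0.1 * rng.standard_normal((2, 2))
z0 = np.array([1.0, 0.0])

# A generative transformation path in latent space: z(c) = expm(c * Psi) @ z0.
path = [mat_exp(c * Psi) @ z0 for c in np.linspace(0.0, 1.0, 5)]
```

Because paths stem from example points, class manifolds can be described as the set of transformations reachable from each class's anchors, which matches the abstract's description.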

NicolaBernini · Jun 22 '20 06:06