image loss function?

Open dribnet opened this issue 8 years ago • 8 comments

VAE and VAEGAN code is currently using mean squared error as the reconstruction loss function. In most papers / implementations, I'm more used to seeing binary cross entropy with numbers reported in nats.

Curious what we think would be best here. I did a quick look in the Chainer docs but didn't see binary cross entropy listed as one of the built-in loss functions.

dribnet avatar Jan 25 '16 07:01 dribnet

Any chance you might point me to sources? I have seen BCE used to more accurately reflect the distribution of the data when it is binary (for instance when training on the MNIST set), but I am not sure I see the benefit of using it for continuous pixel values as in most images. I am definitely willing to change this if there is compelling evidence that it would be a good idea, so please post the papers/implementations and I will take a look.

tjtorres avatar Jan 25 '16 08:01 tjtorres

I'm most familiar with DRAW, which says (section 4):

[image: the binary cross-entropy reconstruction loss from the DRAW paper, section 4]

Will try to track down something more recent to see if this is best practice more broadly.

dribnet avatar Jan 25 '16 09:01 dribnet

There's a sigmoid cross entropy available, which might be of use here.

cemoody avatar Jan 25 '16 21:01 cemoody

This is from Kingma and Welling (2013):

We let pθ(x|z) be a multivariate Gaussian (in case of real-valued data) or Bernoulli (in case of binary data) whose distribution parameters are computed from z with a MLP (a fully-connected neural network with a single hidden layer, see appendix C).


Here is a more recent paper in which a similar formulation is used.

Chainer has gaussian_nll and bernoulli_nll loss functions for VAE.
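
For reference, here is a minimal sketch (not fauxtograph's actual code; the tensors are dummy stand-ins) of how those functions could form a VAE objective. Note that `bernoulli_nll` expects the decoder output *before* the sigmoid:

```python
import numpy as np
import chainer.functions as F
from chainer import Variable

# Dummy stand-ins for a batch of flattened images and the encoder/decoder outputs.
batch_size, n_pixels, n_latent = 16, 784, 20
x = Variable(np.random.rand(batch_size, n_pixels).astype(np.float32))   # targets in [0, 1]
y = Variable(np.random.randn(batch_size, n_pixels).astype(np.float32))  # decoder output (pre-sigmoid)
mu = Variable(np.random.randn(batch_size, n_latent).astype(np.float32))
ln_var = Variable(np.random.randn(batch_size, n_latent).astype(np.float32))

# Bernoulli reconstruction term (binary cross entropy, in nats);
# bernoulli_nll applies the sigmoid to y internally.
rec_loss = F.bernoulli_nll(x, y) / batch_size

# KL divergence between q(z|x) = N(mu, exp(ln_var)) and the unit Gaussian prior.
kl_loss = F.gaussian_kl_divergence(mu, ln_var) / batch_size

loss = rec_loss + kl_loss
```

For continuous pixels, `F.gaussian_nll(x, y_mean, y_ln_var)` could replace the Bernoulli term in the same way.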

umguec avatar Jan 29 '16 01:01 umguec

It definitely makes sense to add the Bernoulli negative log likelihood for Bernoulli-distributed data, as in, say, MNIST, though I hadn't envisioned that being a big use case initially. However, after recently trying to use the package to train on a font dataset, and realizing performance was somewhat hindered unless I artificially induced continuity with a slight Gaussian filtering, I think it's probably a good idea to include this as a loss option. The Gaussian NLL is quite similar to MSE assuming unit covariance, but they do differ somewhat, and I'd be willing to adopt that as an additional option too, since implementing both is rather easy (as you point out, they both already exist in Chainer). I will assign myself to this unless there are volunteers.
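
As a rough illustration of that point (a hypothetical check, not code from the package): with unit variance, i.e. `ln_var = 0`, Chainer's `gaussian_nll` reduces to half the summed squared error plus a constant, while `mean_squared_error` averages over elements.

```python
import numpy as np
import chainer.functions as F
from chainer import Variable

x = Variable(np.random.rand(4, 10).astype(np.float32))
mean = Variable(np.random.rand(4, 10).astype(np.float32))
ln_var = Variable(np.zeros((4, 10), dtype=np.float32))  # unit covariance

nll = F.gaussian_nll(x, mean, ln_var)  # 0.5 * sum((x - mean)**2) + 0.5 * n * log(2*pi)
mse = F.mean_squared_error(x, mean)    # mean((x - mean)**2)

n = x.data.size
print(float(nll.data) - 0.5 * n * np.log(2 * np.pi))  # half the summed squared error
print(0.5 * n * float(mse.data))                      # same quantity recovered from MSE
```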

tjtorres avatar Feb 01 '16 08:02 tjtorres

I'm hoping to use binarized MNIST (with validation data) as a sanity check to compare the NLL test score fauxtograph can achieve against other generative implementations.

dribnet avatar Feb 01 '16 10:02 dribnet

Sounds great! It should be quite fast to validate over MNIST, though I think the MNIST images will be too small for the convolution architecture currently available: MNIST images are 28x28 and fauxtograph supports 32x32 at the smallest. A simple workaround would be to preprocess the set and add a 2-pixel black border on all sides. I have also been thinking of adding a conditional semi-supervised option or an adversarial autoencoder class at some point as well. It would be good to benchmark all of them.
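
A quick sketch of that workaround (a hypothetical helper, assuming the MNIST images arrive as an `(N, 28, 28)` array with values in `[0, 1]`):

```python
import numpy as np

def pad_to_32(images):
    # Add a 2-pixel black border on all sides: (N, 28, 28) -> (N, 32, 32).
    return np.pad(images, ((0, 0), (2, 2), (2, 2)), mode='constant', constant_values=0)

mnist_batch = np.random.rand(5, 28, 28).astype(np.float32)  # stand-in for real MNIST data
print(pad_to_32(mnist_batch).shape)  # (5, 32, 32)
```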

tjtorres avatar Feb 02 '16 01:02 tjtorres

I've tried both BCELoss and MSELoss for CIFAR10 reconstructions with an autoencoder. MSELoss gives better-looking reconstructed images than BCELoss.
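
For context, a minimal sketch of that kind of comparison (hypothetical tensors, not the actual experiment; `BCELoss` expects the reconstruction to already be squashed into `[0, 1]` by a sigmoid):

```python
import torch
import torch.nn as nn

# Dummy CIFAR10-sized batch: recon stands in for an autoencoder's sigmoid
# output, target for the original images scaled to [0, 1].
recon = torch.rand(8, 3, 32, 32)
target = torch.rand(8, 3, 32, 32)

bce = nn.BCELoss()(recon, target)  # binary cross entropy, averaged over elements
mse = nn.MSELoss()(recon, target)  # mean squared error, averaged over elements
print(bce.item(), mse.item())
```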

abhinav3 avatar Sep 05 '19 11:09 abhinav3