autoencoding_beyond_pixels

Monitoring train/val losses and hyper-parameter optimisation

Open sahasuman opened this issue 8 years ago • 0 comments

Hi,

How do I monitor whether training is going well? I am new to VAE+GAN training; so far I have only trained CNNs, where the training loss usually decreases steadily. For VAE+GAN, how do you monitor the training and validation losses? Do you treat the training loss as the combined loss from the three terms of the loss function (Eq. 8 in the paper)? My training loss increases at first; is that usual with VAE+GAN? And roughly how many epochs are needed for the model to converge?
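
To make the question concrete, here is the kind of per-epoch monitoring I have in mind: a minimal, framework-agnostic sketch that logs each loss component separately rather than only the combined total. The component names (`recon`, `kl`, `gan_d`, `gan_g`) are my own placeholders for the terms of Eq. 8, not identifiers from model/aegan.py.

```python
# Minimal sketch: accumulate per-batch loss components and report per-epoch means.
from collections import defaultdict

class LossMonitor:
    """Accumulate per-batch loss components and report per-epoch means."""
    def __init__(self):
        self.sums = defaultdict(float)
        self.counts = defaultdict(int)

    def update(self, **losses):
        # Each keyword argument is one loss component for the current batch.
        for name, value in losses.items():
            self.sums[name] += float(value)
            self.counts[name] += 1

    def epoch_means(self):
        # Return the mean of each component over the epoch, then reset.
        means = {name: self.sums[name] / self.counts[name] for name in self.sums}
        self.sums.clear()
        self.counts.clear()
        return means

# Usage inside a (hypothetical) training loop; the numbers below are stand-ins
# for real per-batch values.
monitor = LossMonitor()
for batch_losses in [
    {"recon": 0.52, "kl": 0.10, "gan_d": 0.69, "gan_g": 0.71},
    {"recon": 0.48, "kl": 0.11, "gan_d": 0.67, "gan_g": 0.73},
]:
    monitor.update(**batch_losses)
print(monitor.epoch_means())
```

Is tracking the components separately like this (plus something like the discriminator's real/fake accuracy) the right way to judge whether training is healthy, even when the combined loss goes up?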

What approach do you take to optimize the following hyper-parameters: recon_vs_gan_weight, real_vs_gen_weight, self.equilibrium, and self.margin (in model/aegan.py)? Could you please give some hints on weighting the three loss terms carefully so that training converges on my dataset?
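
For the hyper-parameters, would a coarse grid search like the one sketched below be a reasonable starting point? The idea is to compare settings on a validation metric that keeps its meaning when the weights change (e.g. held-out reconstruction error), since the combined training loss itself is not comparable across different weightings. `train_and_evaluate` and the grid values are hypothetical placeholders, not values taken from the repository.

```python
# Hedged sketch of a coarse grid search over two of the weights.
import itertools

def train_and_evaluate(recon_vs_gan_weight, margin):
    # Placeholder: run a short training job with these settings and return a
    # validation metric that is comparable across settings, e.g. mean
    # pixel-wise reconstruction error on held-out data (smaller is better).
    raise NotImplementedError

# Illustrative grid values only; not defaults from model/aegan.py.
grid = {
    "recon_vs_gan_weight": [1e-6, 1e-5, 1e-4],
    "margin": [0.25, 0.35, 0.45],
}

results = []
for rw, m in itertools.product(grid["recon_vs_gan_weight"], grid["margin"]):
    try:
        score = train_and_evaluate(recon_vs_gan_weight=rw, margin=m)
    except NotImplementedError:
        continue  # remove once train_and_evaluate is filled in
    results.append(((rw, m), score))

# Report settings from best to worst validation metric.
for (rw, m), score in sorted(results, key=lambda r: r[1]):
    print(f"recon_vs_gan_weight={rw:g} margin={m:g} -> val metric {score:.4f}")
```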

Thanks!

sahasuman · May 19 '17, 10:05