GDL_code
Magnitude of loss values when training the variational autoencoder
Hey there - thanks for writing such a great book and releasing the code! I'm looking forward to your next book on reinforcement learning too!
I trained your variational autoencoder (VAE) and was a bit surprised at the magnitude of the losses, which were much higher than for the standard autoencoder. Could you please post your loss data so I can compare it to mine?
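
Side note on why I'd half expect the numbers to differ: if the reconstruction term is a pixel-wise MSE scaled up by a large constant before the KL term is added, the total would naturally sit far above a plain autoencoder's unscaled MSE. Here's a minimal NumPy sketch of the two terms as I understand them - the name r_loss_factor and the value 1000 are my guesses, not necessarily what your code uses:

import numpy as np

def vae_r_loss(y_true, y_pred, r_loss_factor=1000.0):
    # Pixel-wise MSE scaled by a large constant; this scaling alone
    # would push the total loss far above an unscaled autoencoder's.
    return r_loss_factor * np.mean(np.square(y_true - y_pred))

def vae_kl_loss(mu, log_var):
    # Closed-form KL( N(mu, exp(log_var)) || N(0, I) ), summed over
    # the latent dimensions.
    return -0.5 * np.sum(1.0 + log_var - np.square(mu) - np.exp(log_var))

def vae_loss(y_true, y_pred, mu, log_var):
    # Total = scaled reconstruction term + KL term, matching the
    # loss / vae_r_loss / vae_kl_loss split in the log below.
    return vae_r_loss(y_true, y_pred) + vae_kl_loss(mu, log_var)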
This is from my run:
Train on 48000 samples, validate on 12000 samples
Epoch 1/500
32/48000 [..............................] - ETA: 50:31 - loss: 231.1299 - vae_r_loss: 231.1293 - vae_kl_loss: 5.8350e-04
WARNING: Logging before flag parsing goes to stderr.
W0902 23:57:59.011679 139854072612672 callbacks.py:243] Method (on_train_batch_end) is slow compared to the batch update (0.172191). Check your callbacks.
47840/48000 [============================>.] - ETA: 0s - loss: 59.9847 - vae_r_loss: 56.8435 - vae_kl_loss: 3.1413
Epoch 00001: val_loss improved from inf to 53.52105, saving model to /models/VariationalAutoencoder/model_checkpoint.h5
48000/48000 [==============================] - 17s 364us/sample - loss: 59.9680 - vae_r_loss: 56.8250 - vae_kl_loss: 3.1430 - val_loss: 53.5210 - val_vae_r_loss: 49.5327 - val_vae_kl_loss: 3.9883
Epoch 2/500
47872/48000 [============================>.] - ETA: 0s - loss: 52.4285 - vae_r_loss: 48.5862 - vae_kl_loss: 3.8424
Epoch 00002: val_loss improved from 53.52105 to 50.94505, saving model to /models/VariationalAutoencoder/model_checkpoint.h5
48000/48000 [==============================] - 15s 321us/sample - loss: 52.4225 - vae_r_loss: 48.5799 - vae_kl_loss: 3.8426 - val_loss: 50.9451 - val_vae_r_loss: 47.0550 - val_vae_kl_loss: 3.8900
Epoch 3/500
47936/48000 [============================>.] - ETA: 0s - loss: 50.9089 - vae_r_loss: 46.7227 - vae_kl_loss: 4.1862
Epoch 00003: val_loss improved from 50.94505 to 49.68459, saving model to /models/VariationalAutoencoder/model_checkpoint.h5
48000/48000 [==============================] - 15s 310us/sample - loss: 50.9075 - vae_r_loss: 46.7216 - vae_kl_loss: 4.1860 - val_loss: 49.6846 - val_vae_r_loss: 45.5978 - val_vae_kl_loss: 4.0868
Epoch 4/500
47872/48000 [============================>.] - ETA: 0s - loss: 49.8313 - vae_r_loss: 45.4165 - vae_kl_loss: 4.4148
Epoch 00004: val_loss improved from 49.68459 to 48.86022, saving model to /models/VariationalAutoencoder/model_checkpoint.h5
48000/48000 [==============================] - 15s 317us/sample - loss: 49.8250 - vae_r_loss: 45.4101 - vae_kl_loss: 4.4149 - val_loss: 48.8602 - val_vae_r_loss: 44.5305 - val_vae_kl_loss: 4.3297
Epoch 5/500
47936/48000 [============================>.] - ETA: 0s - loss: 49.1510 - vae_r_loss: 44.6246 - vae_kl_loss: 4.5264
Epoch 00005: val_loss improved from 48.86022 to 48.15245, saving model to /models/VariationalAutoencoder/model_checkpoint.h5
48000/48000 [==============================] - 15s 322us/sample - loss: 49.1478 - vae_r_loss: 44.6210 - vae_kl_loss: 4.5268 - val_loss: 48.1525 - val_vae_r_loss: 43.4562 - val_vae_kl_loss: 4.6962
Epoch 6/500
47872/48000 [============================>.] - ETA: 0s - loss: 48.6135 - vae_r_loss: 44.0106 - vae_kl_loss: 4.6029
Epoch 00006: val_loss improved from 48.15245 to 47.97484, saving model to /models/VariationalAutoencoder/model_checkpoint.h5
48000/48000 [==============================] - 15s 317us/sample - loss: 48.6163 - vae_r_loss: 44.0133 - vae_kl_loss: 4.6030 - val_loss: 47.9748 - val_vae_r_loss: 43.2041 - val_vae_kl_loss: 4.7707
Epoch 7/500
I was surprised to see reconstruction loss values in the mid-40s, and the KL loss seems to be increasing epoch over epoch...
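
For what it's worth, my guess at the rising KL term: at initialization the encoder presumably outputs mu ≈ 0 and log_var ≈ 0, where the closed-form KL against a unit Gaussian is exactly zero, so it can only grow as the encoder starts spreading codes out in the latent space. A quick sanity check by plugging those cases into the KL formula (pure NumPy, a 2-dimensional latent space assumed for illustration):

import numpy as np

# At initialization (mu = 0, log_var = 0) the KL term is exactly zero:
mu, log_var = np.zeros(2), np.zeros(2)
print(-0.5 * np.sum(1.0 + log_var - np.square(mu) - np.exp(log_var)))  # 0.0

# Once the encoder pushes the means away from the origin, KL grows:
mu = np.ones(2)
print(-0.5 * np.sum(1.0 + log_var - np.square(mu) - np.exp(log_var)))  # 1.0

So maybe an increasing KL loss early in training is expected? I'd appreciate confirmation.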
Thanks!