TET-GAN
Is the loss normal?
Hello, I am fine-tuning with your tetgan-aaai.ckpt model. Can you tell me if this loss is normal? And what loss values did you get when you trained your model?
Thanks a lot!
Here is an example of my fine-tuning loss:
```
--- load options ---
batchsize: 8
datasize: 80
epoch: 1
gpu: 1
load_model_name: ../save/tetgan-aaai.ckpt
outer_iter: 20
save_model_name: ../save/tetgan-oneshot.ckpt
style_name: ../data/oneshotstyle/3-train.png
supervise: 1
--- load parameter ---
--- create model ---
--- training ---
Iter[1/20], Epoch [1/1] Lrec: 0.209, Ldadv: 0.997, Ldesty: -10.377, Lsadv: 8.586, Lsty: 1068.036
Iter[2/20], Epoch [1/1] Lrec: 0.213, Ldadv: 0.881, Ldesty: -15.418, Lsadv: 4.431, Lsty: 1069.656
Iter[3/20], Epoch [1/1] Lrec: 0.222, Ldadv: 0.815, Ldesty: -17.240, Lsadv: 4.615, Lsty: 1062.376
Iter[4/20], Epoch [1/1] Lrec: 0.230, Ldadv: 0.649, Ldesty: -18.795, Lsadv: 3.471, Lsty: 1058.074
Iter[5/20], Epoch [1/1] Lrec: 0.240, Ldadv: 0.542, Ldesty: -19.693, Lsadv: 2.516, Lsty: 1057.255
Iter[6/20], Epoch [1/1] Lrec: 0.249, Ldadv: 0.499, Ldesty: -20.990, Lsadv: 3.907, Lsty: 1056.811
Iter[7/20], Epoch [1/1] Lrec: 0.250, Ldadv: 0.388, Ldesty: -21.648, Lsadv: 3.259, Lsty: 1054.748
Iter[8/20], Epoch [1/1] Lrec: 0.260, Ldadv: 0.317, Ldesty: -22.082, Lsadv: 2.883, Lsty: 1053.869
Iter[9/20], Epoch [1/1] Lrec: 0.272, Ldadv: 0.327, Ldesty: -22.403, Lsadv: 2.937, Lsty: 1053.074
Iter[10/20], Epoch [1/1] Lrec: 0.242, Ldadv: 0.311, Ldesty: -23.262, Lsadv: 3.071, Lsty: 1051.467
Iter[11/20], Epoch [1/1] Lrec: 0.244, Ldadv: 0.215, Ldesty: -23.650, Lsadv: 2.432, Lsty: 1051.484
Iter[12/20], Epoch [1/1] Lrec: 0.251, Ldadv: 0.230, Ldesty: -24.328, Lsadv: 2.854, Lsty: 1052.214
Iter[13/20], Epoch [1/1] Lrec: 0.241, Ldadv: 0.211, Ldesty: -24.727, Lsadv: 2.757, Lsty: 1051.540
Iter[14/20], Epoch [1/1] Lrec: 0.238, Ldadv: 0.184, Ldesty: -25.021, Lsadv: 2.736, Lsty: 1051.355
Iter[15/20], Epoch [1/1] Lrec: 0.234, Ldadv: 0.153, Ldesty: -25.337, Lsadv: 2.677, Lsty: 1050.950
Iter[16/20], Epoch [1/1] Lrec: 0.234, Ldadv: 0.147, Ldesty: -25.462, Lsadv: 2.703, Lsty: 1051.589
Iter[17/20], Epoch [1/1] Lrec: 0.238, Ldadv: 0.132, Ldesty: -25.753, Lsadv: 2.615, Lsty: 1051.917
Iter[18/20], Epoch [1/1] Lrec: 0.234, Ldadv: 0.133, Ldesty: -26.059, Lsadv: 2.763, Lsty: 1051.726
Iter[19/20], Epoch [1/1] Lrec: 0.234, Ldadv: 0.121, Ldesty: -26.300, Lsadv: 2.610, Lsty: 1052.178
Iter[20/20], Epoch [1/1] Lrec: 0.226, Ldadv: 0.109, Ldesty: -26.630, Lsadv: 2.825, Lsty: 1051.560
--- save ---
```
What could be the reasons that one or two of my loss terms are nan? Lrec: 10.591, Ldadv: 11.214, Ldesty: nan, Lsadv: 19.919, Lsty: 67.168
I don't know. I haven't encountered this problem in TETGAN.
https://github.com/williamyang1991/TET-GAN/blob/bdfca141fc14c5917fd9be8d2bc23870f9ad3288/src/models.py#L402-L405
Maybe you should check which one gives nan: x_feature, x_fake, or fake_output.
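A minimal sketch of how one might do that check, assuming standard PyTorch tensors; the helper name report_nan and the commented call sites are hypothetical and not part of the TET-GAN code:

```python
import torch

def report_nan(name, tensor):
    # Count NaN entries in a tensor and print a warning if any are found.
    n_nan = torch.isnan(tensor).sum().item()
    if n_nan > 0:
        print(f"{name}: {n_nan} NaN values out of {tensor.numel()} elements")
    return n_nan > 0

# Hypothetical usage around the linked section of models.py:
# report_nan("x_feature", x_feature)
# report_nan("x_fake", x_fake)
# report_nan("fake_output", fake_output)
```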
This helped, thank you! :) I had a typo in that section and wouldn't have thought to look there.