CycleGAN

G loss increases during training

Open 363325971 opened this issue 5 years ago • 2 comments

[Epoch 19/100] [Batch 1000/1215] [D loss: 0.293148, acc: 50%] [G loss: 9.813908, adv: 0.785160, recon: 0.088106, id: 0.103994] time: 2:27:21.693716
[Epoch 20/100] [Batch 500/1215] [D loss: 0.236037, acc: 63%] [G loss: 10.489177, adv: 0.827715, recon: 0.096563, id: 0.175220] time: 2:31:55.181358
[Epoch 20/100] [Batch 1000/1215] [D loss: 0.170726, acc: 72%] [G loss: 10.403699, adv: 0.820442, recon: 0.099613, id: 0.099185] time: 2:35:07.704370
[Epoch 21/100] [Batch 500/1215] [D loss: 0.155145, acc: 76%] [G loss: 9.991513, adv: 0.813189, recon: 0.085293, id: 0.099245] time: 2:39:58.446000
[Epoch 21/100] [Batch 1000/1215] [D loss: 0.117968, acc: 87%] [G loss: 11.388851, adv: 0.934335, recon: 0.090816, id: 0.119621] time: 2:43:07.832832
[Epoch 22/100] [Batch 500/1215] [D loss: 0.124899, acc: 85%] [G loss: 13.695749, adv: 1.178091, recon: 0.085578, id: 0.108616] time: 2:47:39.818389
[Epoch 22/100] [Batch 1000/1215] [D loss: 0.121017, acc: 85%] [G loss: 13.179180, adv: 1.118578, recon: 0.089717, id: 0.120310] time: 2:50:47.048098
[Epoch 23/100] [Batch 500/1215] [D loss: 0.124811, acc: 83%] [G loss: 18.162169, adv: 1.592002, recon: 0.099232, id: 0.131193] time: 2:55:14.145375
[Epoch 23/100] [Batch 1000/1215] [D loss: 0.214606, acc: 61%] [G loss: 17.539886, adv: 1.552671, recon: 0.091593, id: 0.099486] time: 2:58:23.053180
[Epoch 24/100] [Batch 500/1215] [D loss: 0.107143, acc: 92%] [G loss: 12.565107, adv: 1.052918, recon: 0.093799, id: 0.101797] time: 3:02:49.946445
[Epoch 24/100] [Batch 1000/1215] [D loss: 0.199086, acc: 69%] [G loss: 11.984987, adv: 0.984740, recon: 0.095547, id: 0.109572] time: 3:05:56.283103
[Epoch 25/100] [Batch 500/1215] [D loss: 0.126565, acc: 80%] [G loss: 13.556191, adv: 1.135739, recon: 0.099125, id: 0.116991] time: 3:10:24.057419
[Epoch 25/100] [Batch 1000/1215] [D loss: 0.132241, acc: 78%] [G loss: 14.296340, adv: 1.188811, recon: 0.108903, id: 0.112400] time: 3:13:35.121347
[Epoch 26/100] [Batch 500/1215] [D loss: 0.058607, acc: 96%] [G loss: 16.074234, adv: 1.334735, recon: 0.126156, id: 0.127780] time: 3:18:01.855603

During my training, the D loss looks fine, but the G loss keeps increasing and the generator's results get worse over time. Do you have any idea why this is? I use 'mse' for the D loss, 'mse' for 'adv', and 'mae' for both 'recon' and 'id'.

363325971 avatar Mar 01 '19 07:03 363325971
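
For reference, here is a minimal sketch of how a loss setup like the one described above is typically wired in a Keras-style CycleGAN. The model names (d_A, d_B, combined) and the loss weights are assumptions for illustration, not necessarily this repository's exact code, and the models are assumed to have already been built.

```python
# Sketch of the loss configuration described in the comment above.
# d_A, d_B: discriminators; combined: joint generator model with outputs
# [valid_A, valid_B, recon_A, recon_B, id_A, id_B] (assumed, pre-built).
from tensorflow.keras.optimizers import Adam

lambda_cycle = 10.0              # weight on the cycle-consistency (recon) losses
lambda_id = 0.1 * lambda_cycle   # weight on the identity losses

# Discriminators: 'mse' gives the least-squares GAN (LSGAN) objective.
d_A.compile(loss='mse', optimizer=Adam(2e-4, 0.5), metrics=['accuracy'])
d_B.compile(loss='mse', optimizer=Adam(2e-4, 0.5), metrics=['accuracy'])

# Generators: 'mse' on the adversarial outputs, 'mae' (L1) on the
# reconstruction and identity outputs.
combined.compile(
    loss=['mse', 'mse', 'mae', 'mae', 'mae', 'mae'],
    loss_weights=[1, 1, lambda_cycle, lambda_cycle, lambda_id, lambda_id],
    optimizer=Adam(2e-4, 0.5),
)
```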

I guess one reason could be that D was trained better than G: since D works well and G is not trained as well, the G loss goes up.

363325971 avatar Mar 01 '19 08:03 363325971

Yes. In your case, the classification task for D might be too easy. I would try reducing the learning rate of D.

junyanz avatar Mar 25 '19 03:03 junyanz
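
As a rough illustration of that suggestion (names and values are assumptions, not the repository's code), the discriminators can simply be compiled with a smaller learning rate than the generators:

```python
# Give the discriminators a smaller learning rate than the generators so D
# does not overpower G. d_A, d_B and combined are assumed pre-built models.
from tensorflow.keras.optimizers import Adam

# Discriminators: ~4x smaller learning rate than the generators.
d_A.compile(loss='mse', optimizer=Adam(5e-5, 0.5), metrics=['accuracy'])
d_B.compile(loss='mse', optimizer=Adam(5e-5, 0.5), metrics=['accuracy'])

# Combined generator model keeps the usual rate.
combined.compile(loss=['mse', 'mse', 'mae', 'mae', 'mae', 'mae'],
                 loss_weights=[1, 1, 10.0, 10.0, 1.0, 1.0],
                 optimizer=Adam(2e-4, 0.5))
```

A common alternative with a similar balancing effect is to update the discriminators only every other batch while updating the generators every batch.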