
g_loss consistently close to zero while training with my own data

Open ianni67 opened this issue 8 years ago • 4 comments

I understand that this is probably a problem with my data, but I need help understanding how to fix it. I'm completely stuck at this point.

Below is a short slice of the output I get while training:

Epoch: [14] [   0/   4] time: 226.4298, d_loss: 14.95790005, g_loss: 0.00002277
Epoch: [14] [   1/   4] time: 230.2907, d_loss: 14.68263435, g_loss: 0.00022513
Epoch: [14] [   2/   4] time: 234.2050, d_loss: 9.30468655, g_loss: 0.00297309
Epoch: [14] [   3/   4] time: 238.1187, d_loss: 6.54414463, g_loss: 0.24060325
Epoch: [15] [   0/   4] time: 242.0421, d_loss: 14.79878426, g_loss: 0.00003257
Epoch: [15] [   1/   4] time: 245.9673, d_loss: 15.27751350, g_loss: 0.00005784 

While d_loss slowly decreases, bouncing around as it goes, g_loss stays consistently very close to zero (often exactly 0.0000). Moreover, at the end I get very noisy (almost entirely noise) train_* images. I wonder whether this is an issue with my input, or whether the error hides elsewhere in my toolchain.
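For context on what this loss pattern means: assuming the repo's standard sigmoid cross-entropy formulation (g_loss penalizes fakes scored as fake, d_loss penalizes misclassifying either side), g_loss near zero with a huge d_loss says the discriminator is scoring fakes as real and reals as fake. A small numpy sketch of that situation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(p, label):
    # Binary cross-entropy for a single probability.
    eps = 1e-12
    return -(label * np.log(p + eps) + (1 - label) * np.log(1 - p + eps))

# Suppose the discriminator outputs these probabilities:
d_fake = sigmoid(10.0)   # ~0.99995: fakes scored as "real"
d_real = sigmoid(-10.0)  # ~0.00005: real images scored as "fake"

# Generator wants fakes labeled real -> its loss collapses to ~0.
g_loss = bce(d_fake, 1.0)
# Discriminator is wrong on both halves -> its loss is huge.
d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)
print(g_loss, d_loss)  # tiny g_loss (~4.5e-05), large d_loss (~20)
```

So a pinned-at-zero g_loss here is a symptom of the discriminator failing, not of the generator succeeding.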

I should add that, for my purposes, the "B" channel of the RGB input images is always 0 (in other words, my images have only two meaningful channels; the third is blanked).

This is my command line: python main.py --dataset=images-256-256-full --input_height=256 --input_width=256 --output_height=256 --output_width=256 --input_fname_pattern="*.png" --is_train --is_crop --c_dim 3 --epoch=1000 --gf_dim=256 --df_dim=256

ianni67 avatar Feb 11 '17 08:02 ianni67

Update: after increasing the number of epochs to 1000, I started seeing higher values for g_loss. It looks like the number of epochs was too small for the kind of data I'm feeding in. The output also appears slightly more structured, even though it is still mostly noise.

ianni67 avatar Feb 11 '17 14:02 ianni67

Did you only use 4 batches of pictures? That's not enough data to train a model.
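With a dataset that small, cheap label-preserving augmentation at least multiplies the number of training images. A rough numpy sketch (shapes are hypothetical stand-ins for your data):

```python
import numpy as np

def augment(images):
    """Multiply a tiny dataset with simple label-preserving transforms.
    images: array of shape (N, H, W, C) with square H == W."""
    flips_lr = images[:, :, ::-1, :]           # horizontal flip
    flips_ud = images[:, ::-1, :, :]           # vertical flip (only if orientation doesn't matter)
    rots = np.rot90(images, k=1, axes=(1, 2))  # 90-degree rotation
    return np.concatenate([images, flips_lr, flips_ud, rots], axis=0)

imgs = np.random.rand(20, 64, 64, 3)  # stand-in for a 20-image dataset
aug = augment(imgs)
print(aug.shape)  # (80, 64, 64, 3)
```

This won't add real diversity, but it gives the discriminator more than a handful of batches per epoch to look at.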

zhanary avatar May 31 '17 02:05 zhanary

I have the same problem in training:

Epoch: [ 2/200] [ 0/ 4] time: 101.3380, d_loss: 66.68623352, g_loss: 0.00000000
Epoch: [ 2/200] [ 1/ 4] time: 111.8188, d_loss: 76.59110260, g_loss: 0.00000000
Epoch: [ 2/200] [ 2/ 4] time: 122.4962, d_loss: 61.84632874, g_loss: 0.00000000
Epoch: [ 2/200] [ 3/ 4] time: 133.3623, d_loss: 92.15061951, g_loss: 0.00000000
Epoch: [ 3/200] [ 0/ 4] time: 143.7579, d_loss: 77.85248566, g_loss: 0.00000000
Epoch: [ 3/200] [ 1/ 4] time: 155.1860, d_loss: 82.39799500, g_loss: 0.00000000
Epoch: [ 3/200] [ 2/ 4] time: 166.3799, d_loss: 77.72499084, g_loss: 0.00000000
Epoch: [ 3/200] [ 3/ 4] time: 177.2598, d_loss: 77.97490692, g_loss: 0.00000000
Epoch: [ 4/200] [ 0/ 4] time: 190.2882, d_loss: 91.62058258, g_loss: 0.00000000
Epoch: [ 4/200] [ 1/ 4] time: 201.1383, d_loss: 77.87282562, g_loss: 0.00000000
Epoch: [ 4/200] [ 2/ 4] time: 211.9442, d_loss: 62.41357040, g_loss: 0.00000000
Epoch: [ 4/200] [ 3/ 4] time: 224.1279, d_loss: 62.39629745, g_loss: 0.00000000
Epoch: [ 5/200] [ 0/ 4] time: 235.7465, d_loss: 84.88256836, g_loss: 0.00000000
Epoch: [ 5/200] [ 1/ 4] time: 246.7435, d_loss: 81.24859619, g_loss: 0.00000000
Epoch: [ 5/200] [ 2/ 4] time: 257.4673, d_loss: 56.56097412, g_loss: 0.00000000
Epoch: [ 5/200] [ 3/ 4] time: 268.2158, d_loss: 72.46702576, g_loss: 0.00000000
Epoch: [ 6/200] [ 0/ 4] time: 279.1540, d_loss: 95.52645874, g_loss: 0.00000000
Epoch: [ 6/200] [ 1/ 4] time: 289.9103, d_loss: 52.89161682, g_loss: 0.00000000
Epoch: [ 6/200] [ 2/ 4] time: 300.4298, d_loss: 33.64301682, g_loss: 0.00000002
Epoch: [ 6/200] [ 3/ 4] time: 311.3520, d_loss: 62.36664581, g_loss: 0.00000000
Epoch: [ 7/200] [ 0/ 4] time: 321.7633, d_loss: 86.98524475, g_loss: 0.00000000
Epoch: [ 7/200] [ 1/ 4] time: 332.4142, d_loss: 65.05583954, g_loss: 0.00000000
Epoch: [ 7/200] [ 2/ 4] time: 342.8735, d_loss: 50.12738419, g_loss: 0.00000000
Epoch: [ 7/200] [ 3/ 4] time: 353.6615, d_loss: 63.79507828, g_loss: 0.00000000
Epoch: [ 8/200] [ 0/ 4] time: 364.9562, d_loss: 57.13796997, g_loss: 0.00000000
Epoch: [ 8/200] [ 1/ 4] time: 375.4762, d_loss: 62.95877457, g_loss: 0.00000000
Epoch: [ 8/200] [ 2/ 4] time: 386.5996, d_loss: 50.77989197, g_loss: 0.00000000
Epoch: [ 8/200] [ 3/ 4] time: 397.0660, d_loss: 51.62030029, g_loss: 0.00000002
Epoch: [ 9/200] [ 0/ 4] time: 407.7714, d_loss: 56.64958572, g_loss: 0.00000000
Epoch: [ 9/200] [ 1/ 4] time: 418.5039, d_loss: 38.02365494, g_loss: 0.00000000
Epoch: [ 9/200] [ 2/ 4] time: 429.8555, d_loss: 45.29552078, g_loss: 0.00000000
Epoch: [ 9/200] [ 3/ 4] time: 440.9431, d_loss: 44.95145416, g_loss: 0.00000000
Epoch: [10/200] [ 0/ 4] time: 451.5838, d_loss: 49.63116074, g_loss: 0.00000000
Epoch: [10/200] [ 1/ 4] time: 461.9910, d_loss: 39.51537704, g_loss: 0.00000000
Epoch: [10/200] [ 2/ 4] time: 472.3361, d_loss: 61.76707077, g_loss: 0.00000000
Epoch: [10/200] [ 3/ 4] time: 482.7479, d_loss: 52.49583817, g_loss: 0.00000000
Epoch: [11/200] [ 0/ 4] time: 493.0749, d_loss: 69.66619110, g_loss: 0.00000000
Epoch: [11/200] [ 1/ 4] time: 503.6349, d_loss: 57.94055557, g_loss: 0.00000000
Epoch: [11/200] [ 2/ 4] time: 515.4273, d_loss: 31.25936508, g_loss: 0.00000000
Epoch: [11/200] [ 3/ 4] time: 526.4531, d_loss: 55.65859222, g_loss: 0.00000000
Epoch: [12/200] [ 0/ 4] time: 537.4629, d_loss: 51.19610596, g_loss: 0.00000000
Epoch: [12/200] [ 1/ 4] time: 548.2230, d_loss: 40.03497314, g_loss: 0.00000000
Epoch: [12/200] [ 2/ 4] time: 558.9367, d_loss: 56.13888550, g_loss: 0.00000000
Epoch: [12/200] [ 3/ 4] time: 570.7811, d_loss: 60.09444809, g_loss: 0.00000000
Epoch: [13/200] [ 0/ 4] time: 581.6425, d_loss: 62.98931885, g_loss: 0.00000000

I have just 20 images, with a batch size of 5.

What should I do??

SHANKARMB avatar Apr 03 '18 16:04 SHANKARMB

I am using a dataset of around 6,000 images, but my g_loss still consistently reads zero.
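One thing I am checking is the input scaling: DCGAN-tensorflow normalizes images to [-1, 1] to match the generator's tanh output, and a batch that sneaks in as [0, 255] can break the discriminator. A quick sanity check on a loaded batch (assuming it comes out as a numpy array) looks like:

```python
import numpy as np

def check_batch_range(batch):
    """Sanity-check that a batch is scaled the way a tanh-output DCGAN expects."""
    lo, hi = float(batch.min()), float(batch.max())
    assert -1.0 <= lo and hi <= 1.0, f"batch outside [-1, 1]: min={lo}, max={hi}"
    # A batch that never dips below 0 usually means [0, 1] scaling slipped through.
    if lo >= 0.0:
        print("warning: batch may be [0, 1]-scaled; rescale with batch * 2 - 1")
    return lo, hi

# Stand-in for one loaded batch of five 256x256 RGB images:
batch = np.random.uniform(-1, 1, size=(5, 256, 256, 3)).astype(np.float32)
check_batch_range(batch)
```

If this assertion fires on a real batch, the fix is in the loading/preprocessing step rather than in the model.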

vaibhavminde avatar Nov 23 '19 05:11 vaibhavminde