AutoPortraitMatting

TensorFlow training stops at iteration 6147

csf0429 opened this issue · 3 comments

TensorFlow training always stops when the iteration count reaches 6147. The loss still seems very high at that point, so what should I do to keep the model training until convergence?

csf0429 · Oct 22 '17

I think the loss it reports is just the loss over one batch, not the whole dataset, so it looks high. Computing the loss over the whole training set may be too costly; the author only trains the model for a single pass over the training set. I suggest randomly sampling part of the training data into a fixed subset and computing the loss on that instead.
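A minimal sketch of that idea, assuming a typical TensorFlow 1.x FCN setup; the names `images`, `annotations`, `image`, `annotation`, `keep_probability`, `loss`, and `sess` are illustrative placeholders, not necessarily the exact identifiers in this repo's FCN.py:

```python
import numpy as np

# Fix a random subset of the training data once, so the reported loss is
# comparable across steps (assumes at least 256 training examples).
rng = np.random.RandomState(0)
subset_idx = rng.choice(len(images), size=256, replace=False)

def subset_loss(sess, batch_size=16):
    """Average the loss over the fixed subset in small chunks."""
    total, count = 0.0, 0
    for start in range(0, len(subset_idx), batch_size):
        idx = subset_idx[start:start + batch_size]
        feed = {image: images[idx],
                annotation: annotations[idx],
                keep_probability: 1.0}  # disable dropout when evaluating
        total += sess.run(loss, feed_dict=feed) * len(idx)
        count += len(idx)
    return total / count
```

Calling `subset_loss(sess)` every few hundred steps gives a much smoother loss curve than the per-batch value, without paying for a full pass over the data.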

GateStainer · Oct 24 '17

@csf0429 I am facing the same problem. While running FCN.py, training stops at step 6100. Did you solve it? (screenshot of the training log attached)

engrchrishenry · Oct 30 '17

@engrchrishenry The author only trains the model for a single pass over the whole training set; the batches are drawn from the training list. You can change the stopping condition in the training loop and set a number of epochs to keep it training, for example as in the sketch below.
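A minimal sketch of such a multi-epoch loop, assuming the usual FCN-style TensorFlow 1.x training code; the identifiers (`train_dataset_reader`, `next_batch`, `train_op`, `keep_probability`, `num_train_examples`, `batch_size`) are assumptions and may differ from the exact names in this repo's FCN.py:

```python
# Run several passes (epochs) over the data instead of stopping after one.
MAX_EPOCHS = 10
steps_per_epoch = num_train_examples // batch_size

step = 0
for epoch in range(MAX_EPOCHS):
    for _ in range(steps_per_epoch):
        train_images, train_annotations = train_dataset_reader.next_batch(batch_size)
        feed_dict = {image: train_images,
                     annotation: train_annotations,
                     keep_probability: 0.85}
        sess.run(train_op, feed_dict=feed_dict)
        if step % 100 == 0:
            train_loss = sess.run(loss, feed_dict=feed_dict)
            print("epoch %d, step %d, train_loss: %g" % (epoch, step, train_loss))
        step += 1
```

The key change is the outer `for epoch in range(MAX_EPOCHS)` loop: training no longer stops when the single pass over the training list is exhausted.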

csf0429 · Oct 30 '17