AutoPortraitMatting
TensorFlow training stops at iteration 6147
TensorFlow training always stops when the iteration count reaches 6147. The loss still seems very high, so what should I do to keep the model training until it converges?
I think the loss it reports is just the loss over one batch, not the whole data set, so it looks high. Computing the loss over the whole training set may be too costly, and the author only trains the model for a single pass over it. I suggest randomly sampling part of the training data into a subset and computing the loss on that; see the sketch below.
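A rough sketch of that idea, assuming the loss tensor, placeholders, and training arrays are NumPy arrays/tensors like those built in FCN.py (the names here are illustrative, not the repository's actual identifiers):

```python
import numpy as np

def estimate_loss(sess, loss, image_ph, annotation_ph,
                  train_images, train_labels,
                  sample_size=200, batch_size=2):
    """Estimate training loss on a random subset instead of a single batch."""
    idx = np.random.choice(len(train_images), sample_size, replace=False)
    total, count = 0.0, 0
    for start in range(0, sample_size, batch_size):
        batch_idx = idx[start:start + batch_size]
        feed = {image_ph: train_images[batch_idx],
                annotation_ph: train_labels[batch_idx]}
        total += sess.run(loss, feed_dict=feed) * len(batch_idx)
        count += len(batch_idx)
    return total / count
```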
@csf0429 I am facing the same problem. While running FCN.py the training stops at 6100 step. Did you solve it?
@engrchrishenry The author only trains the model for a single pass over the whole training set; the batch data is drawn from the training list. You can change the condition in the training loop and add an epoch count to keep it training, along the lines of the sketch below.
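A minimal sketch of such an epoch-based loop, assuming a reader object with a `next_batch` method and the usual placeholders/ops; the names (`dataset_reader`, `image_ph`, `annotation_ph`, `train_op`, `loss`) are stand-ins, not the repository's real identifiers:

```python
NUM_EPOCHS = 20            # extra passes over the data instead of a single one
BATCH_SIZE = 2
BATCHES_PER_EPOCH = 6147   # roughly one full pass over the training list

def train(sess, dataset_reader, image_ph, annotation_ph, train_op, loss):
    step = 0
    for epoch in range(NUM_EPOCHS):            # outer loop keeps training going
        for _ in range(BATCHES_PER_EPOCH):     # one pass over the training list
            images, labels = dataset_reader.next_batch(BATCH_SIZE)
            feed = {image_ph: images, annotation_ph: labels}
            sess.run(train_op, feed_dict=feed)
            step += 1
            if step % 100 == 0:
                print("epoch %d, step %d, batch loss %.4f"
                      % (epoch, step, sess.run(loss, feed_dict=feed)))
```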