pytorch-deeplab-xception

validation loss always lower than train loss

Open ycc66104116 opened this issue 3 years ago • 0 comments

Hi, I recently used this code to train on my own dataset, and I noticed something strange: on TensorBoard, my validation loss is always lower than my training loss, even though the training accuracy is higher than the validation accuracy. I don't understand why the validation loss stays lower. The gap between the two curves remains roughly constant, as in the image below, and I can't get them to converge.

[image: TensorBoard plot of training and validation loss curves]
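For context, one common way to check whether the gap is just a measurement artifact is to re-evaluate the training set at the end of each epoch under the same conditions as validation (model in eval mode, no gradient tracking, no mid-epoch averaging while the weights are still changing). A minimal sketch; `model`, `train_loader`, `val_loader`, `criterion`, and `device` are hypothetical names standing in for the objects in the training script, not identifiers from this repo:

```python
import torch

@torch.no_grad()
def evaluate_loss(model, loader, criterion, device):
    """Average loss over a loader with the model in eval mode,
    so train and val losses are measured under identical conditions."""
    model.eval()
    total, count = 0.0, 0
    for images, targets in loader:
        images, targets = images.to(device), targets.to(device)
        outputs = model(images)
        # criterion is assumed to return a batch-mean loss
        total += criterion(outputs, targets).item() * images.size(0)
        count += images.size(0)
    return total / count

# After each epoch, compare like with like:
# train_loss_eval = evaluate_loss(model, train_loader, criterion, device)
# val_loss        = evaluate_loss(model, val_loader, criterion, device)
```

If `train_loss_eval` drops below the validation loss once both are measured this way, the original gap came from how the losses were logged rather than from the data.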

I know model.train() and model.eval() behave differently (for example, dropout and batch normalization), but I expected the curves to converge as the epochs increase. Has anyone encountered this issue before? Could it happen because the training dataset is too small? A small demonstration of the train/eval difference follows below.
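To make the train/eval difference concrete: dropout is active in training mode and disabled in eval mode, which by itself can produce a constant offset between the two loss curves. A minimal self-contained sketch:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Dropout(p=0.5)
x = torch.ones(1, 8)

layer.train()    # training mode: dropout zeroes ~half the activations
print(layer(x))  # some entries are 0, survivors scaled by 1/(1-p) = 2

layer.eval()     # eval mode: dropout is a no-op
print(layer(x))  # all ones, unchanged
```

Because the training loss is computed with dropout (and other regularization) active while the validation loss is computed without it, the validation loss can legitimately sit below the training loss for the whole run.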

ycc66104116 · May 30 '22 15:05