pytorch-deeplab-xception
A problem about loss and lr value?
In your code, I see that in loss.py the final loss value is divided by the batch size twice. And in your train.py the final lr is multiplied by the batch size, so I think that is what compensates for it. I don't know if I'm right.
By the way, I updated loss.py and train.py by removing the batch-size averaging in loss.py and removing the batch-size multiplier on the lr in train.py, and the resulting mIoU is the same.
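For anyone else hitting this, here is a minimal sketch of the pattern being discussed (not the repo's exact loss.py; the function name, shapes, and values are made up for illustration). With `size_average` (i.e. `reduction='mean'`), `nn.CrossEntropyLoss` already averages over every pixel of every image in the batch, so an extra `batch_average` step divides by the batch size a second time:

```python
import torch
import torch.nn as nn

def cross_entropy_loss(logit, target, batch_average=True):
    # logit: (N, C, H, W), target: (N, H, W)
    n, c, h, w = logit.size()
    # reduction='mean' already averages over all N*H*W pixels
    criterion = nn.CrossEntropyLoss(reduction='mean')
    loss = criterion(logit, target.long())
    if batch_average:
        loss /= n  # second division by the batch size
    return loss

if __name__ == '__main__':
    logit = torch.randn(4, 21, 8, 8)          # batch of 4, 21 classes
    target = torch.randint(0, 21, (4, 8, 8))  # per-pixel labels
    print(cross_entropy_loss(logit, target, batch_average=True))   # ~4x smaller
    print(cross_entropy_loss(logit, target, batch_average=False))  # plain mean
```

Assuming train.py scales the lr by the batch size, that multiplication would cancel the extra division above, which would explain why removing both changes leaves the mIoU unchanged.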
I have the same confusion: why does the code use size_average and batch_average at the same time? It seems to divide by the batch size twice.