
A question about the loss and lr values?

Open Peterisfar opened this issue 5 years ago • 1 comment

In your code, I see that in loss.py the final loss value is divided by the batch size twice: once as part of the `size_average` mean inside the criterion, and once more by `batch_average`. And in your train.py, the final lr is multiplied by the batch size, so I think that is meant to compensate for it. I don't know if I'm right.
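For reference, here is a minimal sketch of the pattern I mean (my own reconstruction, not the repo's exact code; the `ignore_index` value is a guess):

```python
import torch.nn as nn

def segmentation_loss(logit, target, batch_average=True):
    # sketch only: reduction='mean' (the old size_average=True) already
    # averages over every element in the batch, which includes a
    # 1/batch_size factor
    n = logit.size(0)
    criterion = nn.CrossEntropyLoss(ignore_index=255, reduction='mean')
    loss = criterion(logit, target.long())
    if batch_average:
        loss /= n  # second division by the batch size
    return loss
```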

By the way, I updated loss.py and train.py by removing the `batch_average` division in loss.py and removing the batch-size multiplier on the lr in train.py, and the resulting mIoU is the same.
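Here is a tiny check of why removing both factors leaves training unchanged: scaling the loss by 1/n and the lr by n gives the same SGD step as using neither factor (toy problem of my own, not the repo's model):

```python
import torch

torch.manual_seed(0)
n = 8
x, y = torch.randn(n, 3), torch.randn(n)

def sgd_step(lr, divide_loss_by):
    w = torch.zeros(3, requires_grad=True)
    loss = ((x @ w - y) ** 2).mean() / divide_loss_by
    loss.backward()
    return (w - lr * w.grad).detach()  # one step of w -= lr * grad

scaled = sgd_step(lr=0.01 * n, divide_loss_by=n)  # original: lr*n, loss/n
plain  = sgd_step(lr=0.01,     divide_loss_by=1)  # my change: no factors
assert torch.allclose(scaled, plain)  # identical parameter update
```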

Peterisfar avatar Dec 22 '19 08:12 Peterisfar

I have the same confusion: why does the code use size_average and batch_average at the same time? It seems to divide by the batch_size twice.
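To make the "divides twice" point concrete, here is a small self-contained check (my own toy example, not code from this repo): relative to a per-pixel average, the combination of `reduction='mean'` plus a second `/n` leaves an extra 1/batch_size factor in the loss.

```python
import torch
import torch.nn as nn

n, c, h, w = 4, 21, 8, 8
logit = torch.randn(n, c, h, w)
target = torch.randint(0, c, (n, h, w))

mean_loss = nn.CrossEntropyLoss(reduction='mean')(logit, target)
double = mean_loss / n  # size_average mean, then batch_average /n

sum_loss = nn.CrossEntropyLoss(reduction='sum')(logit, target)
per_pixel = sum_loss / (n * h * w)  # one batch factor already in here
assert torch.allclose(double, per_pixel / n)  # extra 1/n left over
```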

GondorFu avatar Feb 07 '21 07:02 GondorFu