faster-rcnn.pytorch

Getting NaN loss while training

Open ashutoshIITK opened this issue 6 years ago • 10 comments

I have a dataset containing 846 images, but when I start training the log reports 1692 images. The dataset is in PASCAL_VOC format and the JPEGImages folder contains 846 images. On training, I am getting loss: nan. Can you please let me know the reason for this?

```
Preparing training data...
done
before filtering, there are 1692 images...
after filtering, there are 1692 images...
1692 roidb entries
Loading pretrained weights from data/pretrained_model/resnet101_caffe.pth
[session 1][epoch 1][iter 0] loss: 6.7142, lr: 1.00e-03
  fg/bg=(2/126), time cost: 238.602555
  rpn_cls: 0.7190, rpn_box: 1.7119, rcnn_cls: 4.2830, rcnn_box 0.0003
[session 1][epoch 1][iter 100] loss: nan, lr: 1.00e-03
  fg/bg=(13/115), time cost: 40.301977
  rpn_cls: 0.5280, rpn_box: nan, rcnn_cls: 0.7082, rcnn_box 0.0000
[session 1][epoch 1][iter 200] loss: nan, lr: 1.00e-03
  fg/bg=(32/96), time cost: 40.584164
  rpn_cls: 0.3966, rpn_box: nan, rcnn_cls: 1.0526, rcnn_box 0.0000
[session 1][epoch 1][iter 300] loss: nan, lr: 1.00e-03
  fg/bg=(8/120), time cost: 41.294393
  rpn_cls: 0.4398, rpn_box: nan, rcnn_cls: 0.6331, rcnn_box 0.0000
[session 1][epoch 1][iter 400] loss: nan, lr: 1.00e-03
  fg/bg=(32/96), time cost: 42.057193
  rpn_cls: 0.2161, rpn_box: nan, rcnn_cls: 0.9535, rcnn_box 0.0000
[session 1][epoch 1][iter 500] loss: nan, lr: 1.00e-03
  fg/bg=(32/96), time cost: 41.014715
  rpn_cls: 0.1673, rpn_box: nan, rcnn_cls: 0.9406, rcnn_box 0.0000
[session 1][epoch 1][iter 600] loss: nan, lr: 1.00e-03
  fg/bg=(32/96), time cost: 42.453671
  rpn_cls: 0.1687, rpn_box: nan, rcnn_cls: 0.9308, rcnn_box 0.0000
```

ashutoshIITK avatar Apr 22 '18 08:04 ashutoshIITK

There is something wrong with your dataset.

1. In lib/datasets/pascal_voc.py, change "x1 = float(bbox.find('xmin').text) - 1" and "y1 = float(bbox.find('ymin').text) - 1" to "x1 = float(bbox.find('xmin').text)" and "y1 = float(bbox.find('ymin').text)", i.e. delete the "- 1".
2. Then remove your cached roidb: "rm -rf $your data cache$".

Maybe the log(-1) caused by the shifted coordinate leads to this error.
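A minimal sketch of the change described above, inside the annotation-loading code of lib/datasets/pascal_voc.py (the exact surrounding code may differ between versions of this repo):

```python
# In the annotation-loading routine of lib/datasets/pascal_voc.py the stock
# code shifts 1-based VOC coordinates to 0-based:
#   x1 = float(bbox.find('xmin').text) - 1
#   y1 = float(bbox.find('ymin').text) - 1
# If your annotations are already 0-based, the "- 1" can push a coordinate to
# -1, which can later produce invalid boxes (and NaN losses) once the boxes
# are flipped or turned into regression targets. Dropping the offset avoids that:
x1 = float(bbox.find('xmin').text)
y1 = float(bbox.find('ymin').text)
```

Remember to clear the cached roidb afterwards so the annotations are re-parsed.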

cui-shaowei avatar May 05 '18 13:05 cui-shaowei

@ashutoshIITK do you solve the problem?

super-wcg avatar May 19 '18 05:05 super-wcg

@super-wcg Yes, I solved the problem of getting NaN loss. It was due to errors in the bounding-box coordinates. The following things were giving NaN loss:

1. Coordinates outside the image resolution -> NaN loss
2. xmin == xmax -> NaN loss
3. ymin == ymax -> NaN loss
4. Very small bounding boxes -> NaN loss

For the 4th case, we added the condition |xmax - xmin| >= 20 and similarly |ymax - ymin| >= 20.
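A minimal sketch of that kind of annotation sanity check (a hypothetical helper, not the exact code used; it assumes PASCAL VOC style XML annotations parsed with xml.etree.ElementTree):

```python
import xml.etree.ElementTree as ET

MIN_SIZE = 20  # minimum box width/height, per the condition above

def valid_boxes(xml_path):
    """Yield (name, xmin, ymin, xmax, ymax) for usable boxes in a VOC XML file."""
    root = ET.parse(xml_path).getroot()
    img_w = float(root.find('size/width').text)
    img_h = float(root.find('size/height').text)
    for obj in root.findall('object'):
        bbox = obj.find('bndbox')
        xmin = float(bbox.find('xmin').text)
        ymin = float(bbox.find('ymin').text)
        xmax = float(bbox.find('xmax').text)
        ymax = float(bbox.find('ymax').text)
        inside = 0 <= xmin < xmax <= img_w and 0 <= ymin < ymax <= img_h      # cases 1-3
        big_enough = (xmax - xmin) >= MIN_SIZE and (ymax - ymin) >= MIN_SIZE  # case 4
        if inside and big_enough:
            yield obj.find('name').text, xmin, ymin, xmax, ymax
```

Objects that fail such a check can be dropped from the annotation (or the whole image skipped) before rebuilding the roidb cache.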

I trained the model (For 20 epochs) after fixing all this and didn't get NaN Loss error.

Thank you.

ashutoshIITK avatar May 21 '18 03:05 ashutoshIITK

@ashutoshIITK My problem is the same as yours. I followed the instructions above to modify my code, but the NaN problem still exists. Can you describe your modifications more specifically? I hope I can get your help. Thanks.

JingXiaolun avatar Jul 28 '18 08:07 JingXiaolun

@1csu What's the size of your image?

ashutoshIITK avatar Jul 30 '18 11:07 ashutoshIITK

Did anyone find a solution for this? I have done almost everything but couldn't resolve it.

Rahul250192 avatar Nov 06 '18 00:11 Rahul250192

@ashutoshIITK Where to put the condition for the 4th case?

I trained my model on my dataset (similar to Pascal VOC) with batch sizes of 4 and 8, which worked fine. But reducing the batch size to 2 produces the NaN loss. Any idea why this happens?

rnjtsh avatar Nov 11 '18 14:11 rnjtsh

There are two files, pascal_voc.py and pascal_voc_rgb.py; in the default case you should change pascal_voc.py rather than pascal_voc_rgb.py. As @swchui said, this worked for me.

nico-zck avatar Feb 21 '19 12:02 nico-zck

I also found that it can happen when the learning rate is too high.

EmilioOldenziel avatar Jun 13 '19 11:06 EmilioOldenziel

@ashutoshIITK Where to put the condition for the 4th case?

I trained my model on my dataset (similar to Pascal VOC) with batch sizes of 4 and 8, which worked fine. But reducing the batch size to 2 produces the NaN loss. Any idea why this happens?

Exactly, I have the same problem as yours.

armin-azh avatar Nov 09 '21 06:11 armin-azh