Box regression deltas become infinite or NaN
```
[05/05 09:41:28 d2.engine.train_loop]: Starting training from iteration 0
[05/05 09:41:47 d2.utils.events]: eta: 4:25:15 iter: 19 total_loss: 1.521 loss_cls: 0.634 loss_box_reg: 0.112 loss_rpn_cls: 0.673 loss_rpn_loc: 0.145 time: 0.9152 data_time: 0.0076 lr: 0.000954 max_mem: 4685M size_of_ImageStore: N/A
[05/05 09:42:05 d2.utils.events]: eta: 4:28:50 iter: 39 total_loss: 0.875 loss_cls: 0.216 loss_box_reg: 0.119 loss_rpn_cls: 0.398 loss_rpn_loc: 0.139 time: 0.9152 data_time: 0.0133 lr: 0.001953 max_mem: 4685M size_of_ImageStore: N/A
[05/05 09:42:24 d2.utils.events]: eta: 4:30:12 iter: 59 total_loss: 0.744 loss_cls: 0.173 loss_box_reg: 0.110 loss_rpn_cls: 0.278 loss_rpn_loc: 0.170 time: 0.9189 data_time: 0.0046 lr: 0.002952 max_mem: 4685M size_of_ImageStore: N/A
[05/05 09:42:37 d2.engine.hooks]: Overall training speed: 72 iterations in 0:01:06 (0.9225 s / it)
[05/05 09:42:37 d2.engine.hooks]: Total training time: 0:01:07 (0:00:00 on hooks)
Traceback (most recent call last):
  File "tools/train_net.py", line 162, in <module>
-- Process 3 terminated with the following error:
Traceback (most recent call last):
  File "/root/anaconda3/envs/iOD/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
    fn(i, *args)
  File "/data/yu/code/iOD/detectron2/engine/launch.py", line 84, in _distributed_worker
    main_func(*args)
  File "/data/yu/code/iOD/tools/train_net.py", line 150, in main
    return trainer.train()
  File "/data/yu/code/iOD/detectron2/engine/defaults.py", line 406, in train
    super().train(self.start_iter, self.max_iter)
  File "/data/yu/code/iOD/detectron2/engine/train_loop.py", line 152, in train
    self.run_step()
  File "/data/yu/code/iOD/detectron2/engine/train_loop.py", line 281, in run_step
    loss_dict = self.model(data)
  File "/root/anaconda3/envs/iOD/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/root/anaconda3/envs/iOD/lib/python3.7/site-packages/torch/nn/parallel/distributed.py", line 447, in forward
    output = self.module(*inputs[0], **kwargs[0])
  File "/root/anaconda3/envs/iOD/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/yu/code/iOD/detectron2/modeling/meta_arch/rcnn.py", line 179, in forward
    proposals, proposal_losses = self.proposal_generator(images, features, gt_instances)
  File "/root/anaconda3/envs/iOD/lib/python3.7/site-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/data/yu/code/iOD/detectron2/modeling/proposal_generator/rpn.py", line 201, in forward
    outputs.predict_proposals(),
  File "/data/yu/code/iOD/detectron2/modeling/proposal_generator/rpn_outputs.py", line 422, in predict_proposals
    pred_anchor_deltas_i, anchors_i.tensor
  File "/data/yu/code/iOD/detectron2/modeling/box_regression.py", line 79, in apply_deltas
    assert torch.isfinite(deltas).all().item(), "Box regression deltas become infinite or NaN!"
AssertionError: Box regression deltas become infinite or NaN!
```
I didn't change any code, so why does this happen?
How many GPUs are you using? I wonder whether the training instability comes from that. Kindly reopen if the issue persists.
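For anyone hitting this later: the assertion fires in `apply_deltas` when the RPN's predicted box deltas overflow, and a crash this early (~70 iterations into warmup) usually means the learning rate is ramping up faster than the effective batch size (GPU count × images per GPU) supports. Below is a minimal, hedged sketch of the common mitigations, assuming the standard detectron2 solver keys; the yaml path, GPU count, and the 0.02 / 16-image reference values are placeholders to adapt to your own config.

```python
# Sketch only (not part of the iOD repo): typical mitigations when the
# box-regression deltas diverge during LR warmup.
from detectron2.config import get_cfg

cfg = get_cfg()
cfg.merge_from_file("path/to/your_config.yaml")  # placeholder: your training config

# If you run on fewer GPUs than the config was written for, shrink the batch
# and the learning rate together (linear scaling rule). The detectron2
# reference schedules pair BASE_LR = 0.02 with IMS_PER_BATCH = 16; check your
# yaml for the values it actually assumes.
num_gpus = 4                              # placeholder: match --num-gpus
cfg.SOLVER.IMS_PER_BATCH = 2 * num_gpus   # e.g. 2 images per GPU
cfg.SOLVER.BASE_LR = 0.02 * cfg.SOLVER.IMS_PER_BATCH / 16

# Gradient clipping (exposed as SOLVER.CLIP_GRADIENTS in newer detectron2
# releases) usually keeps the RPN deltas finite through warmup.
cfg.SOLVER.CLIP_GRADIENTS.ENABLED = True
cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE = "value"
cfg.SOLVER.CLIP_GRADIENTS.CLIP_VALUE = 1.0
```

If the detectron2 bundled with iOD predates `SOLVER.CLIP_GRADIENTS`, lowering `SOLVER.BASE_LR` or stretching `SOLVER.WARMUP_ITERS` tends to have a similar stabilizing effect.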