DEVIANT
nan loss after 5 epochs on custom dataset
Hi,
Thanks for sharing your work.
I was training on a custom dataset, and the losses become NaN after 6 epochs. I tried reducing the learning rate, but that didn't help either. @abhi1kumar, did you encounter this issue while training?
INFO ------ TRAIN EPOCH 006 ------
INFO Learning Rate: 0.001250
INFO Weights: depth_:nan, heading_:nan, offset2d_:1.0000, offset3d_:nan, seg_:1.0000, size2d_:1.0000, size3d_:nan,
INFO BATCH[0020/3150] depth_loss:nan, heading_loss:nan, offset2d_loss:nan, offset3d_loss:nan, seg_loss:nan, size2d_loss:nan, size3d_loss:nan,
INFO BATCH[0040/3150] depth_loss:nan, heading_loss:nan, offset2d_loss:nan, offset3d_loss:nan, seg_loss:nan, size2d_loss:nan, size3d_loss:nan,
INFO BATCH[0060/3150] depth_loss:nan, heading_loss:nan, offset2d_loss:nan, offset3d_loss:nan, seg_loss:nan, size2d_loss:nan, size3d_loss:nan,
INFO BATCH[0080/3150] depth_loss:nan, heading_loss:nan, offset2d_loss:nan, offset3d_loss:nan, seg_loss:nan, size2d_loss:nan, size3d_loss:nan,
INFO BATCH[0100/3150] depth_loss:nan, heading_loss:nan, offset2d_loss:nan, offset3d_loss:nan, seg_loss:nan, size2d_loss:nan, size3d_loss:nan,
INFO BATCH[0120/3150] depth_loss:nan, heading_loss:nan, offset2d_loss:nan, offset3d_loss:nan, seg_loss:nan, size2d_loss:nan, size3d_loss:nan,
Before epoch 6, the losses were decreasing as expected.
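When losses only turn NaN several epochs in, it helps to abort at the first non-finite term rather than keep training on garbage. A minimal sketch (the loss names mirror the log above; how you wire this into the actual training loop is up to you):

```python
# Sketch: stop early and report which loss term went non-finite first.
# Assumes a dict of scalar loss values like the training log above.
import math

def first_nan_term(loss_dict):
    """Return the name of the first NaN/Inf loss term, or None if all are finite."""
    for name, value in loss_dict.items():
        v = float(value)
        if math.isnan(v) or math.isinf(v):
            return name
    return None

# usage inside the training loop (illustrative):
# bad = first_nan_term({"depth_loss": depth, "heading_loss": heading})
# if bad is not None:
#     raise RuntimeError(f"{bad} became non-finite; dump the batch and checkpoint here")
```

Knowing which term diverges first (depth vs. size3d vs. heading) narrows down whether the labels, the depth targets, or the angle encoding are at fault.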
Hi @makaveli10 Thank you for showing interest in our work. Here are a few things I would try:
- Train on the KITTI dataset first and see if you encounter this error again.
- Check whether the input label files of your dataset are in the KITTI format. I suspect they are not, since the weights for the 3D terms look wrong. It would also help to visualize the labels on top of the images.
- Check whether the data resolution fed to the model is correct.
- Try changing the seed (maybe a one-in-a-million chance).
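For the label-format point above, a quick structural check is often enough: each line of a KITTI object label file has 15 whitespace-separated fields (16 with an optional score), and everything after the class name must parse as a float. A hedged sketch, following the KITTI object devkit layout:

```python
# Sanity-check a KITTI label line: class name + 14 numeric fields
# (truncation, occlusion, alpha, 2D bbox, 3D dims, 3D location, rotation_y),
# optionally a 15th numeric score field.
def validate_kitti_label_line(line):
    fields = line.strip().split(" ")
    if len(fields) not in (15, 16):
        return False
    try:
        [float(x) for x in fields[1:]]  # everything after the class name is numeric
    except ValueError:
        return False
    return True
```

Running this over every line of every label file catches malformed conversions before they turn into NaN losses.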
Thanks for your quick response @abhi1kumar.
- I have used this custom dataset with mmdetection3d, and it gives the expected results.
- As for the resolution, I am using the resolution from the configuration files; my images are (1224, 370), which is the image size in the KITTI config.
- I also tried visualizing the labels on top of the images, and they look exactly correct. I can share those with you if you want.
- Other than that, reducing the learning rate seems to have no effect on this issue. Thanks.
This is pretty strange.

> I have used this custom dataset with mmdetection3d, and it gives the expected results.

Our DEVIANT codebase is essentially a fork of the GUPNet codebase, which is not as mature as mmdetection3d. Did you try with KITTI? KITTI is a small download and should be easy to run.

> As for the resolution, I am using the resolution from the configuration files; my images are (1224, 370), which is the image size in the KITTI config.

That is fine.

> I also tried visualizing the labels on top of the images, and they look exactly correct. I can share those with you if you want.

I hope you plotted them with our plot/plot_qualitative_output.py script, using the --dataset kitti --show_gt_in_image options. You have to change the paths on these lines for your dataset.

> Other than that, reducing the learning rate seems to have no effect on this issue.

Your 3D dimensions are exploding as well, which is alarming to me. Could you try switching off the depth and projected-center terms in the loss and see if you still encounter NaNs?

Also, do your labels contain the three classes, or do they contain more? The DEVIANT dataloader for KITTI supports three classes with the following dimensions. See here.
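The "switch off the depth and projected-center terms" experiment amounts to zeroing those terms' weights before summing. A minimal sketch, assuming a weighted-sum loss like the one the training log reports (the term names mirror that log; the actual GUPNet/DEVIANT wiring differs):

```python
# Sketch: total loss as a weighted sum of named terms; setting a weight to 0
# disables that term without touching the rest of the pipeline.
def total_loss(terms, weights):
    """terms/weights: dicts keyed by loss name; missing weights default to 1."""
    return sum(weights.get(k, 1.0) * v for k, v in terms.items())

# Disable the suspected diverging terms for the debugging run:
weights = {"depth_": 0.0, "offset3d_": 0.0}
```

If the remaining terms stay finite with the suspect terms off, the depth/projected-center targets (and hence the labels or calibration) are the likely culprits.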
I met the same issue when training KITTI...
- Since I am not able to reproduce your issue on our servers, could you paste the training log here?
- Also, are you able to reproduce our Val 1 numbers by running inference with the KITTI Val 1 model?
BTW, I only modified the training and validation splits in the ImageSets folder.
I can successfully run the inference code.
That is great.

> BTW, I only modified the training and validation splits in the ImageSets folder.

I also see that you use a bigger batch size. I do not think switching to a different KITTI data split should be an issue. However, our DEVIANT codebase is essentially a fork of the GUPNet codebase, which is not robust. My best guess is that there is a bug in the GUPNet code, or maybe the seed is the problem. Please try re-running the experiment or switching to a different seed.
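Switching the seed means reseeding every RNG the pipeline touches, not just one. A hedged sketch (the PyTorch lines are commented out so the snippet also runs without PyTorch installed; where the repo reads its seed from is config-specific):

```python
# Sketch: seed all the usual RNG sources before building the dataloader/model.
import os
import random

def set_seed(seed):
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    # import numpy as np; np.random.seed(seed)
    # import torch; torch.manual_seed(seed); torch.cuda.manual_seed_all(seed)

set_seed(444)  # e.g. try a value different from the config's default
```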
Thanks much for your reply.
Did your problem get solved? In other words, are you able to train your model on a different KITTI split?
I am so sorry, I am out of the lab these days. I will give you feedback ASAP.
@abhi1kumar Sorry for the late response. I still have to test your suggestions. I'll get back to you, thanks a lot.
Hi, yesterday I trained the model and it works now. The cause is that the number of training samples needs to be divisible by the batch size; otherwise, the loss is computed on a single sample in the last batch.
Great to know. What values did you use for the batch size and the number of training samples?
> Great to know. What values did you use for the batch size and the number of training samples?

My case has 5985 training images in KITTI, so the batch size is set to 15.
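The divisibility constraint above can also be sidestepped by dropping the ragged final batch instead of resizing the dataset; in PyTorch this corresponds to `DataLoader(..., drop_last=True)`. The arithmetic, as a sketch:

```python
# Sketch of the last-batch arithmetic behind the divisibility issue.
def num_full_batches(n_samples, batch_size):
    """How many full batches one epoch yields."""
    return n_samples // batch_size

def last_batch_size(n_samples, batch_size):
    """Size of the final batch (equals batch_size when they divide evenly)."""
    return n_samples % batch_size or batch_size

# 5985 images with batch size 15 divide evenly, so every batch is full.
```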
Hi, I have the same problem as you. How did you solve it?
Hi, I set batch size = 1, but the NaN loss still appears after ~5 epochs. Any solution here?
@15171452351 @zhaowei0315 The NaN issue happens because of empty images in the training set. Please remove the empty images (images which do not contain any objects) from the training set and then train the model.
Please see here for more details.
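Removing empty images amounts to keeping only frames whose label file contains at least one foreground object. A hypothetical pre-filter (the class list matches the three KITTI classes the dataloader supports; the path layout is an assumption to adapt):

```python
# Sketch: keep only frames whose KITTI label file has a foreground object.
FOREGROUND = {"Car", "Pedestrian", "Cyclist"}  # the three supported classes

def has_foreground(label_text):
    """True if any label line starts with a supported class name."""
    for line in label_text.splitlines():
        if line.split(" ")[0] in FOREGROUND:
            return True
    return False

# usage (illustrative, path is hypothetical):
# keep = [i for i in ids if has_foreground(open(f"label_2/{i}.txt").read())]
```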
@15171452351 @zhaowei0315 The GUPNet codebase does not compute 2D and 3D losses correctly when there are empty images (no foreground objects) in a batch. We fixed this bug in this commit. With this commit, you no longer need to remove empty images from your training set.
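The mechanism behind the bug: averaging a loss over zero foreground objects is a 0/0, which propagates NaN into every term. A guarded reduction in the spirit of the fix (a sketch, not the actual commit):

```python
# Sketch: mean over masked-in entries that returns 0 instead of NaN
# when a batch contains no foreground objects.
def masked_mean(values, mask):
    """Average the values where mask is True; 0.0 when nothing is selected."""
    selected = [v for v, m in zip(values, mask) if m]
    if not selected:  # empty image / empty batch: skip this loss term
        return 0.0
    return sum(selected) / len(selected)
```

Returning 0 for the empty case simply contributes no gradient for that term, which is the desired behavior for frames without objects.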