Yet-Another-EfficientDet-Pytorch
Loss not decreasing
I have a custom dataset of 40,000 images spanning 19 classes. I trained the d2 model from pre-trained weights. I let it run for 10 days and the loss is not decreasing: even at epoch 250 it has been stuck at 4.x since day 1. I should mention that I have frozen the backbone.
The command: `python train.py -c 2 -p efficientdet --batch_size 8 --lr 1e-3 --load_weights last --head_only True`
I tried not freezing the backbone, but I ran into an out-of-memory error.
Any suggestions?
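One common way to train with the backbone unfrozen under a memory limit is gradient accumulation: run several small batches and step the optimizer once, so the effective batch size stays large. This is a minimal sketch, not the repo's own training loop; it assumes the model returns a scalar loss when called with images and targets, which is a simplification.

```python
import torch

def train_with_accumulation(model, loader, optimizer, accum_steps=4, device="cpu"):
    """Sketch of a training loop that accumulates gradients over `accum_steps`
    small batches, giving an effective batch of batch_size * accum_steps
    without the memory cost of a single large batch.

    Assumes `model(images, targets)` returns a scalar loss (a simplification
    of the actual EfficientDet loss interface)."""
    model.train()
    optimizer.zero_grad()
    for step, (images, targets) in enumerate(loader):
        loss = model(images.to(device), targets)
        # Scale the loss so the accumulated gradients average over the
        # effective batch instead of summing.
        (loss / accum_steps).backward()
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```

With `--batch_size 2` and `accum_steps=4`, for example, the gradient statistics roughly match a batch size of 8 while peak memory stays at the batch-of-2 level.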
You should try d0, which is good enough for most tasks. You should also validate on a smaller dataset, like the shape dataset I provided on the releases page.
I tried d0 but ran into the same problem (loss not decreasing), so I thought the issue might be the image size and switched to the bigger d2 network. As for the dataset size: I created 2,000 images per class with augmentation for better training, which is why the dataset is so large.
So can you get a reasonable mAP on the shape dataset? If you can, then the problem probably has something to do with your dataset's annotations.
Yes, no complaints about the shape dataset. So do you think I would get the same result with this dataset on another EfficientDet implementation?
Possibly, if your annotations are correct. You should visualize them and make sure the category ids start from 1, as COCO's do.
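The category-id check above can be done with a few lines of standard-library Python. The annotation path in the usage comment is hypothetical; point it at your own COCO-style JSON.

```python
import json

def check_category_ids(annotation_path):
    """Load a COCO-style annotation file and return its sorted category ids.

    COCO category ids start at 1; an id of 0 is a common cause of silently
    broken training, since 0 is often reserved for background."""
    with open(annotation_path) as f:
        coco = json.load(f)
    cat_ids = sorted(c["id"] for c in coco["categories"])
    if cat_ids[0] != 1:
        print(f"warning: category ids start at {cat_ids[0]}, not 1")
    return cat_ids

# Usage (hypothetical path):
# check_category_ids("datasets/efficientdet/annotations/instances_train.json")
```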
Great, can you visualize it on your images?
How can I do that with the train set? By the way, I trained this model on Ubuntu.
Try https://www.robots.ox.ac.uk/~vgg/software/via/via.html
I cannot import my annotations into VIA, but they are correct. Just for double-checking, this is my JSON format:
It could differ in style from other JSON files. Does it look appropriate for feeding this EfficientDet?
VIA does have trouble loading non-standard COCO annotations.
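If VIA will not load the file, a small script can draw the boxes directly from the COCO JSON as a sanity check. This is a sketch, assuming standard COCO keys (`images`, `annotations`, `bbox` as `[x, y, width, height]`) and that Pillow is installed; all paths in it are placeholders.

```python
import json
from PIL import Image, ImageDraw

def draw_coco_boxes(annotation_path, image_dir, image_id, out_path):
    """Draw all COCO bounding boxes for one image and save the result.

    COCO bboxes are [x, y, width, height] in pixels; a visually wrong
    overlay here usually means a format mismatch (e.g. xyxy vs xywh)."""
    with open(annotation_path) as f:
        coco = json.load(f)
    info = next(i for i in coco["images"] if i["id"] == image_id)
    img = Image.open(f"{image_dir}/{info['file_name']}").convert("RGB")
    draw = ImageDraw.Draw(img)
    for ann in coco["annotations"]:
        if ann["image_id"] == image_id:
            x, y, w, h = ann["bbox"]
            draw.rectangle([x, y, x + w, y + h], outline=(255, 0, 0), width=2)
    img.save(out_path)

# Usage (hypothetical paths):
# draw_coco_boxes("annotations/instances_train.json", "train", 1, "check.png")
```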