ReDet

The model and loaded state dict do not match exactly on dotav1 dataset

Open nishit01 opened this issue 4 years ago • 8 comments

Hi @csuhan, thanks for the code. I ran test.py on the pretrained model weights for DOTA v1 and got the output below. I would also like to mention that test.py prints this output and then proceeds to run on the test images, so I am not sure where I am making a mistake.

Can you help me with this ... Thanks!

I executed this command: python tools/test.py configs/ReDet/ReDet_re50_refpn_1x_dota1_ms.py models/pretrained_weights/dota1_ms_baseline_weights.pth --out work_dirs/pretrained_results/results.pkl

I downloaded dota1_ms_baseline_weights.pth from here

missing keys in source state_dict: backbone.layer4.0.conv1.filter, neck.lateral_convs.2.conv.filter, backbone.layer2.0.conv2.filter, backbone.layer3.4.conv1.filter, backbone.layer4.0.downsample.0.filter, backbone.layer2.0.downsample.0.filter, backbone.layer2.3.conv1.filter, neck.lateral_convs.3.conv.expanded_bias, backbone.layer2.1.conv2.filter, backbone.layer2.3.conv3.filter, backbone.layer3.2.conv2.filter, neck.lateral_convs.3.conv.filter, backbone.layer3.3.conv3.filter, neck.fpn_convs.3.conv.filter, neck.fpn_convs.2.conv.filter, backbone.layer2.1.conv1.filter, neck.fpn_convs.1.conv.filter, backbone.layer2.1.conv3.filter, neck.fpn_convs.3.conv.expanded_bias, neck.fpn_convs.2.conv.expanded_bias, backbone.conv1.filter, backbone.layer2.0.conv3.filter, backbone.layer4.0.conv2.filter, neck.lateral_convs.1.conv.expanded_bias, neck.fpn_convs.1.conv.expanded_bias, backbone.layer3.2.conv1.filter, backbone.layer3.1.conv1.filter, backbone.layer3.2.conv3.filter, backbone.layer4.2.conv3.filter, neck.lateral_convs.0.conv.filter, backbone.layer2.0.conv1.filter, backbone.layer4.2.conv2.filter, backbone.layer2.2.conv1.filter, backbone.layer3.5.conv3.filter, backbone.layer2.2.conv2.filter, neck.fpn_convs.0.conv.expanded_bias, backbone.layer3.0.downsample.0.filter, backbone.layer3.4.conv2.filter, backbone.layer4.1.conv2.filter, backbone.layer3.0.conv2.filter, backbone.layer2.3.conv2.filter, backbone.layer2.2.conv3.filter, backbone.layer3.5.conv1.filter, backbone.layer3.4.conv3.filter, backbone.layer3.5.conv2.filter, neck.fpn_convs.0.conv.filter, backbone.layer3.0.conv3.filter, neck.lateral_convs.1.conv.filter, backbone.layer3.3.conv2.filter, neck.lateral_convs.0.conv.expanded_bias, backbone.layer4.1.conv3.filter, backbone.layer3.0.conv1.filter, backbone.layer3.3.conv1.filter, neck.lateral_convs.2.conv.expanded_bias, backbone.layer3.1.conv3.filter, backbone.layer4.0.conv3.filter, backbone.layer4.1.conv1.filter, backbone.layer3.1.conv2.filter, backbone.layer4.2.conv1.filter

nishit01 avatar Apr 03 '21 11:04 nishit01

This is normal and will not affect inference. The filter keys in the state_dicts are automatically generated by the equivariant networks (backbone), so we do not include these keys in the pretrained models.
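A minimal, framework-agnostic sketch of what non-strict checkpoint loading does in this situation (the key names below are illustrative placeholders, not the real ReDet module paths):

```python
def load_state_dict(model_keys, checkpoint, strict=True):
    """Minimal sketch of PyTorch-style strict/non-strict checkpoint loading."""
    missing = [k for k in model_keys if k not in checkpoint]
    unexpected = [k for k in checkpoint if k not in model_keys]
    if strict and (missing or unexpected):
        raise RuntimeError(f"missing: {missing}, unexpected: {unexpected}")
    return missing, unexpected

# The model expects both `weight` and an auto-generated `filter` buffer,
# but the released checkpoint only ships `weight`:
model_keys = ["backbone.conv1.weight", "backbone.conv1.filter"]
checkpoint = {"backbone.conv1.weight": [0.0]}

missing, unexpected = load_state_dict(model_keys, checkpoint, strict=False)
print(missing)  # ['backbone.conv1.filter'] -- harmless, regenerated at run time
```

Because the filter buffers are recomputed by the equivariant layers on the first forward pass, the "missing keys" warning is informational only.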

csuhan avatar Apr 03 '21 18:04 csuhan

Thanks for the quick response. I ran the pretrained model weights on DOTA v1, and the evaluation server reported an mAP of 79.07. However, the table in the repo shows 80.10. Any particular reason for such a difference?

nishit01 avatar Apr 05 '21 19:04 nishit01

I tested it again, and the mAP is still 80.10.

This is your result for task 1:

mAP: 0.8010117619495942
ap of each class: plane:0.8881346311907455, baseball-diamond:0.824750605801873, bridge:0.6082657998640093, ground-track-field:0.8082372242336249, small-vehicle:0.7833569408756088, large-vehicle:0.8606413546155858, ship:0.8831357632636117, tennis-court:0.9086819637957728, basketball-court:0.8877270279684082, storage-tank:0.8702894194822861, soccer-ball-field:0.686519111195622, roundabout:0.6690345957872341, harbor:0.7926069846448134, swimming-pool:0.7970535539628321, helicopter:0.746741452561886

csuhan avatar Apr 06 '21 02:04 csuhan

@csuhan, thanks for the result. I saw that there are two different preparation files for DOTA, and I had used prepare_dota1.py for the earlier evaluation (assuming I was supposed to run the v2 file). I have now re-run the pretrained model after preparing the DOTA v1 dataset with prepare_dota1_v2.py and achieved an mAP of 0.8003.

On the DOTA v1 evaluation server:

This is your result for task 1:

mAP: 0.8003223758443455
ap of each class: plane:0.8891563080727822, baseball-diamond:0.8205706142245767, bridge:0.6115886683032841, ground-track-field:0.8090655455308038, small-vehicle:0.7839402010917558, large-vehicle:0.8590361035689292, ship:0.883116373769887, tennis-court:0.9087448511278802, basketball-court:0.8837586162308733, storage-tank:0.8690893292663793, soccer-ball-field:0.6864787158507009, roundabout:0.6679269799005486, harbor:0.7917359722346857, swimming-pool:0.8009884681482694, helicopter:0.7396388903438278

The above results are from the pretrained model.

Meanwhile, I started training the model from scratch. Strangely, I achieved an mAP of 0.68 with prepare_dota1.py and 0.72 with prepare_dota1_v2.py:

mAP: 0.7229727286621835
ap of each class: plane:0.8793550178425664, baseball-diamond:0.7831310420982861, bridge:0.5242178063392356, ground-track-field:0.7191150720511676, small-vehicle:0.7619417172325117, large-vehicle:0.7630544952882485, ship:0.8584177705278637, tennis-court:0.9086126274124283, basketball-court:0.8213600673461615, storage-tank:0.8170450709135034, soccer-ball-field:0.5098147396053461, roundabout:0.5417638218214739, harbor:0.7466082333020225, swimming-pool:0.6579710713285595, helicopter:0.5521823768233763

I don't know why the mAP is so low when training from scratch ... I need your advice on this.

These are the steps I used to train the model on the DOTA v1 dataset:

  1. Downloaded ReRes50 model
  2. Executed python DOTA_devkit/prepare_dota1_v2.py ... which created the trainval1024 and test1024 folders
  3. Train with python tools/train.py config configs/ReDet/ReDet_re50_refpn_1x_dota1.py
  4. Parse the results with python tools/parse_results.py --config configs/ReDet/ReDet_re50_refpn_1x_dota1.py --type OBB
  5. Evaluated Task1_results_nms on the DOTA v1 evaluation server

I followed the same steps with prepare_dota1.py, where I received an mAP of 0.68, whereas with prepare_dota1_v2.py I achieved 0.72.

Let me know your thoughts.

nishit01 avatar Apr 11 '21 22:04 nishit01

Regarding your step 3: all our configs are prepared for 4-GPU training, while you are using a single GPU.

Note the learning rate schedule (as shown in our paper):

  • For 4-GPU distributed training: lr = 0.0025 × 4 GPUs = 0.01
  • For single-GPU training: lr = 0.0025 × 1 GPU = 0.0025
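The linear scaling rule above can be sketched in a few lines (an illustration, not repo code; the function name is made up here):

```python
def scaled_lr(base_lr_per_gpu, num_gpus):
    # Linear scaling rule: the effective learning rate grows with the
    # total batch size, i.e. with the number of GPUs used for training.
    return base_lr_per_gpu * num_gpus

print(scaled_lr(0.0025, 4))  # 0.01   (4-GPU distributed training, repo default)
print(scaled_lr(0.0025, 1))  # 0.0025 (single-GPU training)
```

So when training on a single GPU with a config written for 4 GPUs, the lr in the config should be divided by 4 to match the paper's schedule.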

csuhan avatar Apr 12 '21 03:04 csuhan

Hi, @nishit01. There is something wrong with our ReResNet pretrained models. We have updated it. See https://github.com/csuhan/ReDet/commit/f01eb2ac0ea90bac47626feb22d42db26b0d6da4 and https://github.com/csuhan/ReDet/commit/88f8170db12a34ec342ab61571db217c9589888d

csuhan avatar Apr 13 '21 09:04 csuhan

@csuhan, thanks for informing me about this. I will try the updated pretrained model and will update you with the results in the coming days.

nishit01 avatar Apr 13 '21 09:04 nishit01

@csuhan, thanks. With the updated ReResNet pretrained model, I am getting a DOTA v1 mAP of 0.76, as mentioned in the repo.

nishit01 avatar Apr 17 '21 06:04 nishit01