Very low mAP
I did the evaluation on the val set (3741 images) according to your guidance with the squeezeDet model (model.ckpt-87000). However, I got a very low mAP:

Average precisions:
- car_easy: 0.144, car_medium: 0.122, car_hard: 0.120
- pedestrian_easy: 0.299, pedestrian_medium: 0.275, pedestrian_hard: 0.251
- cyclist_easy: 0.272, cyclist_medium: 0.200, cyclist_hard: 0.205

Mean average precision: 0.210

I do not doubt the results presented in the paper, so there must be something wrong with my experiment. Can you figure it out and give me some help?
@aswwqhome1993 If you could provide more details, it would be helpful.
- Did you re-train the model? If so, how many iterations did you train? Did you make any modifications to the model?
- If you used pretrained model, which one did you use?
- How did you split the train-val set?
- ...
I just used the pretrained model you provided (model.ckpt-87000) and did not re-train it. I also used random_split_train_val.py to split the train-val set.
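For reference, a roughly 50/50 random split of the 7481 KITTI trainval images is consistent with the 3741 val images mentioned above. Below is a minimal sketch of such a split, under the assumption that random_split_train_val.py simply shuffles the trainval indices and writes train.txt/val.txt under ImageSets; the exact paths, ratio, and seed here are assumptions, not read from the script.

```python
# Hedged sketch of a random train/val split over KITTI image indices.
# The file locations and the 50/50 ratio are assumptions; the real
# random_split_train_val.py in the repo may differ in details.
import random

with open("data/KITTI/ImageSets/trainval.txt") as f:
    indices = [line.strip() for line in f if line.strip()]

random.seed(42)          # fix the seed so the split is reproducible
random.shuffle(indices)

half = len(indices) // 2
with open("data/KITTI/ImageSets/train.txt", "w") as f:
    f.write("\n".join(sorted(indices[:half])) + "\n")
with open("data/KITTI/ImageSets/val.txt", "w") as f:
    f.write("\n".join(sorted(indices[half:])) + "\n")
```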
I can confirm these low results with the provided squeezeDet model. The squeezeDet+ model achieves a 0.821 mean average precision, but the squeezeDet model underperforms. Perhaps an incorrect model was uploaded?
It seems that the model is somehow broken. I will train a new model and update it in the next few days. Thanks.
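In case it helps others verify their download before assuming an experiment error: here is a small sanity check, assuming a TensorFlow 1.x environment like the one squeezeDet targets. It only uses tf.train.NewCheckpointReader; the checkpoint path below is a guess at the usual layout, so adjust it to wherever you saved the file.

```python
# Hedged sanity check for a downloaded checkpoint (TF 1.x API). If the
# variable list is empty, or the value statistics look degenerate
# (all zeros, NaNs), the checkpoint file itself is likely broken.
import tensorflow as tf

ckpt_path = "data/model_checkpoints/squeezeDet/model.ckpt-87000"  # adjust to your layout
reader = tf.train.NewCheckpointReader(ckpt_path)

for name, shape in sorted(reader.get_variable_to_shape_map().items()):
    tensor = reader.get_tensor(name)  # numpy array
    print("%-60s %-18s mean=%+.4f std=%.4f" %
          (name, shape, tensor.mean(), tensor.std()))
```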
@BichenWuUCB, @aswwqhome1993, is this issue solved?
@BichenWuUCB Hi, I have a similar question. Instead of training on KITTI or VOC, I trained on my own dataset, which has a VOC structure. The loss decreased over the first 1000 steps but then flattened out. I stopped training at step 7000 and tested the model. All the resulting boxes have very low confidence, and they all lie in the corners of the images. I am not sure why this happens; could you give any suggestions?
@HoracceFeng Hi Feng! I'm now doing the same thing as you did, applying squeezeDet to VOC or a VOC-like database. I made some changes to the code and met exactly the same problems as yours: the loss plateaued after a few hundred steps, and confidence is very low when running eval.py. Have you figured out how to solve these problems?
Sorry, I can't help with this issue. I could not solve the problem, so I tried some other networks...
@HoracceFeng The thing is, if you are using your own customized dataset, you need to modify the anchors based on the box-size distribution of your own dataset. Also, don't forget to recompile the Makefile in src/dataset/kitti-eval. I did that and it works.
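For anyone hitting the same wall: below is a minimal sketch (not from the squeezeDet repo) of one way to re-estimate anchor shapes, by running plain k-means over the ground-truth (width, height) pairs of a custom dataset. The box_sizes input is a placeholder you would fill from your own annotation parser, and the resulting shapes would go into your config's anchor definition (e.g. src/config/kitti_squeezeDet_config.py).

```python
# Hedged sketch: estimate anchor (width, height) shapes for a custom
# dataset by k-means clustering over ground-truth box sizes. How you
# collect box_sizes from your annotations is up to you; the values in
# __main__ below are placeholders, not real data.
import numpy as np

def kmeans_anchors(box_sizes, k=9, iters=100, seed=0):
    """Cluster (w, h) box sizes into k anchor shapes with plain k-means."""
    rng = np.random.RandomState(seed)
    boxes = np.asarray(box_sizes, dtype=np.float64)
    centers = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        # Assign each box to its nearest center (Euclidean in w-h space).
        d = np.linalg.norm(boxes[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        new_centers = np.array([
            boxes[assign == i].mean(axis=0) if np.any(assign == i) else centers[i]
            for i in range(k)
        ])
        if np.allclose(new_centers, centers):
            break  # converged
        centers = new_centers
    return centers[np.argsort(centers.prod(axis=1))]  # sort by area

if __name__ == "__main__":
    box_sizes = [(36, 37), (120, 90), (45, 160)]  # placeholder values
    print(kmeans_anchors(box_sizes, k=3))
```

If the anchors baked into the KITTI config are far from your dataset's box statistics, the anchor-to-ground-truth matching can fail almost everywhere, which is consistent with the flat loss and near-zero confidences described above.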