
Questions on benchmark

Open JrPeng opened this issue 6 years ago • 29 comments

I find that you chose 'roi_Xconv1fc_gn_head_panet' as the head in your model when training R50-PANet and got a 39.8 box mAP. I tried to train a PANet using '2fc_mlp_head' with the PA structure unchanged and got a 37.1 box mAP, which is only 0.4 above the FPN baseline. Is there something wrong with this implementation, or is it possible that PANet has to be combined with a heavy head to perform well?

JrPeng avatar Sep 18 '18 06:09 JrPeng
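For readers unfamiliar with the two heads being compared, here is a rough PyTorch sketch (not the repo's code) of a light 2fc box head versus a heavier Xconv1fc head with GroupNorm; the layer sizes (1024 hidden units, four convs, 32 GN groups, 7x7 RoI features) follow common Detectron defaults and are assumptions here:

```python
import torch.nn as nn
import torch.nn.functional as F

class TwoFCHead(nn.Module):
    """Light '2fc' box head: two fully connected layers on the RoI feature."""
    def __init__(self, in_dim=256 * 7 * 7, hidden=1024):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, hidden)

    def forward(self, x):
        x = F.relu(self.fc1(x.flatten(start_dim=1)))
        return F.relu(self.fc2(x))

class XConv1FCGNHead(nn.Module):
    """Heavier 'Xconv1fc' box head: several 3x3 convs with GroupNorm,
    followed by a single fully connected layer."""
    def __init__(self, in_ch=256, num_convs=4, hidden=1024, roi_size=7):
        super().__init__()
        layers = []
        for _ in range(num_convs):
            layers += [nn.Conv2d(in_ch, in_ch, 3, padding=1, bias=False),
                       nn.GroupNorm(32, in_ch),
                       nn.ReLU(inplace=True)]
        self.convs = nn.Sequential(*layers)
        self.fc = nn.Linear(in_ch * roi_size * roi_size, hidden)

    def forward(self, x):
        return F.relu(self.fc(self.convs(x).flatten(start_dim=1)))
```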

Hi,

I have uploaded the model trained with 2fc. The performance should be 39.6 box AP. Please try with the new model and corresponding config file.

Thanks!

ShuLiu1993 avatar Sep 18 '18 14:09 ShuLiu1993

Hi @ShuLiu1993, thanks for this wonderful work! I wonder if a pretrained model on Cityscapes could be provided (or converted) for this PyTorch implementation? Thanks!

JiamingSuen avatar Sep 18 '18 16:09 JiamingSuen

Thank you for your response, I will try your advice. :)

JrPeng avatar Sep 19 '18 05:09 JrPeng

I tried to train PANet on ResNet-50-FPN without any GN to get a precise evaluation of the influence of the PA structure, and got 37.1 box mAP. I find that your new model still uses GN in both the FPN part and the 2fc head. Does that mean GN is really important for training the PA structure?

JrPeng avatar Sep 19 '18 07:09 JrPeng

@JiamingSuen Thanks a lot for your interest! Sorry that I really don't have much time on this currently. Maybe you can try this by yourself since the pre-trained model on COCO is already released.

ShuLiu1993 avatar Sep 19 '18 07:09 ShuLiu1993

@JrPeng In our original implementation, we used SyncBN. In this version, we use GN instead. This kind of normalization helps the network converge better and achieve better performance.

ShuLiu1993 avatar Sep 19 '18 08:09 ShuLiu1993

@ShuLiu1993 I can totally understand, thanks for your reply!

JiamingSuen avatar Sep 19 '18 08:09 JiamingSuen

Thanks for your reply. I will try to train a model with a BN backbone and GN FPN/heads, without the PA structure, to compare against PANet and give a better evaluation of this structure.

JrPeng avatar Sep 19 '18 08:09 JrPeng

@ShuLiu1993 I'm trying to reproduce Cityscapes training from the COCO pretrained model, and I'm currently getting 34.4 mask AP on the Cityscapes validation set. I used the fine-tuning schedule described in the Mask R-CNN paper (4k iterations, reducing the lr at 3k, with an initial lr of 0.01 and batch size 8). Did you use the same fine-tuning steps and lr schedule for PANet? Thanks very much.

JiamingSuen avatar Sep 29 '18 21:09 JiamingSuen
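For reference, the schedule described above maps to a simple step decay; a minimal sketch (the 10x decay factor is assumed, not stated in the thread):

```python
def lr_at_iter(it, base_lr=0.01, decay_iter=3000, max_iter=4000, gamma=0.1):
    """Step-decay schedule sketched from the comment above: lr 0.01 for the
    first 3k iterations, then dropped by `gamma` until 4k iterations, with an
    overall batch size of 8. The decay factor of 0.1 is an assumption."""
    assert 0 <= it < max_iter
    return base_lr * (gamma if it >= decay_iter else 1.0)
```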

@ShuLiu1993 Any comment, please? I didn't find any reference to the fine-tuning parameters in your paper. Thanks very much!

JiamingSuen avatar Oct 06 '18 10:10 JiamingSuen

@JiamingSuen Sorry for the late reply. We have listed several hyper-parameters in our paper, which are borrowed from Mask R-CNN. In other words, we strictly followed the training parameters used by Mask R-CNN.

ShuLiu1993 avatar Oct 08 '18 06:10 ShuLiu1993

@JrPeng Hi, any progress on training the model with a BN backbone and GN FPN/heads without the PA structure? Thanks!

Panxjia avatar Nov 12 '18 09:11 Panxjia

@JiamingSuen Can you give me a short tutorial on how to fine-tune the pretrained COCO model? I would like to fine-tune the model for pedestrian detection on the KITTI dataset but I don't know where to start.

Z0org avatar Dec 02 '18 14:12 Z0org

You may start by trying to adapt the Cityscapes loader/model to KITTI, since the KITTI segmentation dataset uses the Cityscapes format. The Cityscapes tools in this codebase would be helpful.

JiamingSuen avatar Dec 03 '18 04:12 JiamingSuen

@Panxjia I find that the baseline of this implementation is higher than the baseline of Detectron.pytorch by Roy. For example, ResNet-50-FPN without GN has 38.3 box AP in this implementation, while it has 37.7 box AP in Roy's implementation. Besides, I added only the PA structure to Roy's code and found it contributes only a 0.3 AP improvement, while GN/adaptive fusion contributes a lot. Maybe there is something wrong with my implementation. You can try it yourself and we can discuss based on your results.

JrPeng avatar Dec 06 '18 06:12 JrPeng
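For anyone adding only the PA structure to another codebase (as discussed above), here is a minimal PyTorch sketch of the bottom-up path augmentation, assuming four FPN levels with 256 channels each; the ReLU placement is an assumption of this sketch, not the authors' exact code:

```python
import torch.nn as nn
import torch.nn.functional as F

class BottomUpPath(nn.Module):
    """Sketch of PANet's bottom-up path augmentation.

    Input: FPN features [P2, P3, P4, P5] (finest to coarsest), each with
    `channels` channels. Output: [N2, N3, N4, N5], where N2 = P2 and
    N_{i+1} = conv3x3(downsample(N_i) + P_{i+1}).
    """

    def __init__(self, channels=256, num_levels=4):
        super().__init__()
        self.down_convs = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, stride=2, padding=1)
             for _ in range(num_levels - 1)])
        self.fuse_convs = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, stride=1, padding=1)
             for _ in range(num_levels - 1)])

    def forward(self, fpn_feats):
        outs = [fpn_feats[0]]  # N2 is just P2
        for i in range(len(fpn_feats) - 1):
            down = F.relu(self.down_convs[i](outs[-1]))  # stride-2 downsample of N_i
            outs.append(F.relu(self.fuse_convs[i](down + fpn_feats[i + 1])))
        return outs

# usage: n2, n3, n4, n5 = BottomUpPath()([p2, p3, p4, p5])
```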

@ShuLiu1993 Thank you for sharing your work. I ran the ablation on PANet with the panet_R-50-FPN_1x_det_2fc.yaml config file. I also found that this repo has a better baseline compared to the original PyTorch repo or the official Detectron in Caffe. Without any GN, bottom-up path, adaptive feature pooling, or multi-scale training, this repo can still achieve 37.9 mAP, which is higher than official Detectron by 1.2 mAP. It seems like your original PANet version using SyncBN has different improvement factors. Could you please clarify why the baseline here is so good? :D The ablation results are in the table below.

| GN | BU path | ada pool | ms train | Result | Note |
|----|---------|----------|----------|--------|------|
| x | x | x | x | 39.6 | Default panet_R-50-FPN_1x_det_2fc.yaml |
| x | x | x |   | 39   | SCALES=(1000,) |
| x | x |   |   | 38.5 | SCALES=(1000,), ROI_BOX_HEAD=fast_rcnn_heads.roi_2mlp_head_gn |
| x |   |   |   | 38.4 | SCALES=(1000,), ROI_BOX_HEAD=fast_rcnn_heads.roi_2mlp_head_gn, CONV_BODY: FPN.fpn_ResNet50_conv5_body |
|   |   |   |   | 37.9 | SCALES=(1000,), ROI_BOX_HEAD=fast_rcnn_heads.roi_2mlp_head_gn, CONV_BODY: FPN.fpn_ResNet50_conv5_body, FPN.USE_GN=False |

thangvubk avatar Dec 11 '18 04:12 thangvubk

Hi guys, thanks a lot for your interest!

This codebase is heavily based on Detectron.pytorch by Roy. In this codebase and the released configs, I used multi-scale training and a larger testing scale, as noted in the paper. This may be the main reason the baseline performs better. Some minor modifications I made may also contribute a little.

GN or SyncBN does help PANet achieve better performance with other settings kept the same. They help the network converge better, which is why we should try GN or SyncBN first. I haven't compared GN and SyncBN under the same codebase yet, but I think SyncBN will achieve comparable or even better performance.

ShuLiu1993 avatar Dec 14 '18 05:12 ShuLiu1993
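Concretely, multi-scale training in Detectron-style configs means drawing the shorter image side from a set of scales at every iteration; a sketch below, with placeholder scale values rather than the released config:

```python
import random

# Multi-scale training, sketched: at each iteration the shorter image side
# is resized to a scale drawn from a set. The values below are placeholders,
# not the scales used in the released PANet config.
TRAIN_SCALES = (480, 576, 688, 800, 1000)

def pick_train_scale():
    return random.choice(TRAIN_SCALES)
```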

Hi @thangvubk, I use the BU path, adaptive pooling, and multi-scale training, without GN, and test with MIN_SIZE_TEST: 1000, but only get 38.1 mAP. Is there something I missed?

zimenglan-sysu-512 avatar Dec 27 '18 06:12 zimenglan-sysu-512

@zimenglan-sysu-512 Just use the panet_R-50-FPN_1x_det_2fc.yaml config file and don't make any modifications.

thangvubk avatar Dec 27 '18 06:12 thangvubk

Hi @thangvubk, did you try soft-NMS instead of standard NMS? If so, can you share the results here?

zimenglan-sysu-512 avatar Dec 27 '18 07:12 zimenglan-sysu-512
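For reference, soft-NMS decays the scores of overlapping boxes instead of suppressing them; a minimal sketch of the linear variant (Bodla et al., 2017), independent of this repo's detection code:

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def soft_nms_linear(boxes, scores, iou_thresh=0.3, score_thresh=0.001):
    """Linear soft-NMS: overlapping boxes have their scores decayed by
    (1 - IoU) instead of being discarded outright. Returns kept indices."""
    scores = [float(s) for s in scores]
    keep, idxs = [], list(range(len(scores)))
    while idxs:
        best = max(idxs, key=lambda i: scores[i])
        keep.append(best)
        idxs.remove(best)
        for i in idxs:
            ov = iou(boxes[best], boxes[i])
            if ov > iou_thresh:
                scores[i] *= (1.0 - ov)
        idxs = [i for i in idxs if scores[i] > score_thresh]
    return keep
```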

@zimenglan-sysu-512 For BU path + adaptive pooling + multi-scale training, I didn't make any modifications; I just cloned the network and trained. Did you do the same thing?

thangvubk avatar Dec 27 '18 07:12 thangvubk

I did not use this repo; I just added some PANet code to maskrcnn-benchmark (w/o GN).

zimenglan-sysu-512 avatar Dec 27 '18 07:12 zimenglan-sysu-512

@zimenglan-sysu-512 It is hard to say when you implement it in another repo. Usually it doesn't work as expected due to underlying implementation details. You can try adding GN to see if there is any improvement.

thangvubk avatar Dec 27 '18 07:12 thangvubk

Thanks @thangvubk, I will try adding GN to the FPN and heads.

zimenglan-sysu-512 avatar Dec 27 '18 07:12 zimenglan-sysu-512

Hi @thangvubk, I added GN to the FPN and RoI head, but the performance only reaches 38.3. By the way, when training, each GPU holds only 1 image because of GPU memory limits.

zimenglan-sysu-512 avatar Jan 04 '19 10:01 zimenglan-sysu-512
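A sketch of how GN is typically attached to the convs in the FPN and heads; the group count (32) and the omission of an activation here are assumptions of this sketch:

```python
import torch.nn as nn

def conv_gn(in_ch, out_ch, ksize=3, num_groups=32):
    """A conv layer followed by GroupNorm, the usual way GN is inserted
    into FPN lateral/output convs and box/mask heads. 32 groups is the
    default from the GN paper; whether a ReLU follows depends on where
    the block is used."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, ksize, padding=ksize // 2, bias=False),
        nn.GroupNorm(num_groups, out_ch),
    )

# e.g. an FPN output conv with GN (FPN.USE_GN=True in Detectron-style configs):
fpn_output_conv = conv_gn(256, 256, 3)
```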

@zimenglan-sysu-512 I'm not sure where you went wrong. By the way, mmdet is planning to release PANet as well, see here.

thangvubk avatar Jan 04 '19 11:01 thangvubk

Hi @thangvubk, I set two images per GPU and the performance reaches 0.388, which still leaves a small gap compared with the paper.

zimenglan-sysu-512 avatar Jan 07 '19 04:01 zimenglan-sysu-512
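One thing worth checking when the number of images per GPU changes is the learning rate: by the linear scaling rule it is usually scaled with the effective batch size (a general heuristic, not a setting taken from this thread):

```python
def scaled_lr(base_lr, base_batch, actual_batch):
    """Linear scaling rule (Goyal et al., 2017): scale the base learning
    rate in proportion to the effective batch size. The reference values
    below are only an example, not this repo's settings."""
    return base_lr * actual_batch / base_batch

# e.g. a schedule tuned for 16 images/iter (8 GPUs x 2), run with 8 GPUs x 1:
print(scaled_lr(0.02, 16, 8))  # -> 0.01
```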

Hi @thangvubk, I guess the performance drops a little because I use 2fc instead of Xconv1fc.

zimenglan-sysu-512 avatar Jan 22 '19 02:01 zimenglan-sysu-512

@zimenglan-sysu-512 I also re-implemented PANet using maskrcnn-benchmark (2mlp head, without multi-scale training, GN, or SyncBN), but the performance is worse than this repo. If convenient, could you share your code with me? I just want to make sure whether there is something wrong with my code.

LaoYang1994 avatar Mar 23 '19 06:03 LaoYang1994