PANet
Questions on benchmark
I find that you chose 'roi_Xconv1fc_gn_head_panet' as the head in your model when training R50-PANet and got 39.8 box mAP. I tried to train a PANet using '2fc_mlp_head' with the PA structure unchanged and got 37.1 box mAP, which is only 0.4 above the FPN baseline. Is there something wrong with this implementation, or is it possible that PANet has to be combined with a heavy head to perform well?
Hi,
I have uploaded the model trained with 2fc. The performance should be 39.6 box AP. Please try with the new model and corresponding config file.
Thanks!
Hi @ShuLiu1993, thanks for this wonderful work! I wonder if a pretrained model on Cityscapes can be provided (or converted) for this PyTorch implementation? Thanks!
Thank you for your response, I will try with your advice. : )
I tried to train the PANet on res50-FPN without using any GN, to get a precise evaluation of the influence of the PA structure, and got 37.1 bbox mAP. I find that your new model still uses GN in both the FPN part and the 2fc head. Does this mean that GN is really important for training the PA structure?
@JiamingSuen Thanks a lot for your interest! Sorry that I really don't have much time on this currently. Maybe you can try this by yourself since the pre-trained model on COCO is already released.
@JrPeng In our original implementation, we used Sync BN. In this version, we use GN instead. Using this kind of normalization helps the network converge better and achieve better performance.
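(For readers following along: "GN in the head" means inserting GroupNorm after each fc layer of the box head. A minimal PyTorch sketch of a 2fc+GN head; the class name, dimensions, and group count below are illustrative, not this repo's exact code.)

```python
import torch
import torch.nn as nn

class TwoFCGNHead(nn.Module):
    """Illustrative 2fc box head with GroupNorm after each fc layer."""
    def __init__(self, in_dim=256 * 7 * 7, hidden=1024, groups=32):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.gn1 = nn.GroupNorm(groups, hidden)  # GN accepts (N, C) input
        self.fc2 = nn.Linear(hidden, hidden)
        self.gn2 = nn.GroupNorm(groups, hidden)

    def forward(self, x):
        x = x.flatten(1)                         # (N, 256*7*7) from RoIAlign
        x = torch.relu(self.gn1(self.fc1(x)))
        x = torch.relu(self.gn2(self.fc2(x)))
        return x                                 # (N, hidden) features
```

Unlike BatchNorm, GroupNorm's statistics are independent of batch size, which is why it behaves well with the 1-2 images per GPU typical of detection training.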
@ShuLiu1993 I can totally understand, thanks for your reply!
Thanks for your reply. I will try to train a model with a BN backbone and GN FPN/heads, without the PA structure, to compare against PANet and better evaluate this structure.
@ShuLiu1993 I'm trying to reproduce Cityscapes training with the COCO pretrained model, and I'm currently getting 34.4 mask AP on the Cityscapes validation set. I used the fine-tuning schedule described in the Mask R-CNN paper (4k iterations, reducing the lr at 3k, with initial lr 0.01 and batch size 8). Did you use the same fine-tuning steps and lr schedule in PANet? Thanks very much.
@ShuLiu1993 Any comment, please? I didn't find any reference to the fine-tuning parameters in your paper. Thanks very much!
@JiamingSuen Sorry for the late reply. We have listed several hyper-parameters in our paper, which are borrowed from Mask R-CNN. In other words, we strictly followed the training parameters used by Mask R-CNN.
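(The short fine-tuning schedule discussed above — 4k iterations, base lr 0.01, decayed 10x at 3k — is just a piecewise-constant rule. A minimal sketch, assuming those hyper-parameters; the function name is mine, not the repo's.)

```python
def lr_at(step, base_lr=0.01, decay_step=3000, gamma=0.1):
    """Piecewise-constant LR: base_lr until decay_step, then base_lr * gamma."""
    return base_lr if step < decay_step else base_lr * gamma

# e.g. iterate steps 0..3999, feeding lr_at(step) to the optimizer each step
```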
@JrPeng Hi, any progress on training the model with a BN backbone and GN FPN/heads without the PA structure? Thanks!
@JiamingSuen Can you give me a short tutorial on how to fine-tune the pretrained COCO model? I would like to fine-tune the model for pedestrian detection on the KITTI dataset, but I don't know where to start.
You may start by adapting the Cityscapes loader/model to KITTI, since the KITTI segmentation dataset uses the Cityscapes format. The Cityscapes tools in this codebase would be helpful.
@Panxj I find that the baseline of this implementation is higher than the baseline of Detectron.pytorch by Roy. For example, res50-FPN without GN reaches 38.3 box AP in this implementation, versus 37.7 box AP in Roy's implementation. Besides, I added only the PA structure to Roy's code and found it contributes only a 0.3 AP improvement, while GN/adaptive fusion contributes a lot. Maybe there is something wrong with my implementation. You can try it yourself and we can discuss based on your results.
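(For reference, the "PA structure" being ablated here is PANet's bottom-up path augmentation: each augmented level is a stride-2 downsample of the previous one, fused with the corresponding FPN level. A hedged PyTorch sketch; layer names, activation placement, and channel counts are illustrative, not the paper's exact definition.)

```python
import torch
import torch.nn as nn

class BottomUpPath(nn.Module):
    """Illustrative bottom-up path augmentation over FPN outputs [P2..P5]."""
    def __init__(self, channels=256, num_levels=4):
        super().__init__()
        # stride-2 convs that carry N_i down to the next level's resolution
        self.down = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, stride=2, padding=1)
            for _ in range(num_levels - 1))
        # 3x3 convs applied after fusing with the FPN lateral feature
        self.fuse = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1)
            for _ in range(num_levels - 1))

    def forward(self, fpn_feats):      # fpn_feats: [P2, P3, P4, P5], high-res first
        outs = [fpn_feats[0]]          # N2 = P2
        for i in range(len(fpn_feats) - 1):
            n = torch.relu(self.down[i](outs[-1])) + fpn_feats[i + 1]
            outs.append(torch.relu(self.fuse[i](n)))
        return outs                    # [N2, N3, N4, N5]
```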
@ShuLiu1993 Thank you for sharing your work. I ran the ablation on PANet with the panet_R-50-FPN_1x_det_2fc.yaml config file. I also found that this repo has a better baseline than the original PyTorch repo or official Detectron in Caffe. Without any GN, bottom-up path, adaptive feature pooling, or multi-scale training, this repo achieves 37.9 mAP, which is 1.2 mAP higher than official Detectron. It seems your original PANet version using Sync BN has different improvement factors. Could you please clarify why we get such a good baseline here? :D
GN | BU path | ada pool | ms train | Result | Note
---|---|---|---|---|---
x | x | x | x | 39.6 | Default panet_R-50-FPN_1x_det_2fc.yaml
x | x | x |   | 39.0 | SCALES=(1000,)
x | x |   |   | 38.5 | SCALES=(1000,), ROI_BOX_HEAD=fast_rcnn_heads.roi_2mlp_head_gn
x |   |   |   | 38.4 | SCALES=(1000,), ROI_BOX_HEAD=fast_rcnn_heads.roi_2mlp_head_gn, CONV_BODY: FPN.fpn_ResNet50_conv5_body
  |   |   |   | 37.9 | SCALES=(1000,), ROI_BOX_HEAD=fast_rcnn_heads.roi_2mlp_head_gn, CONV_BODY: FPN.fpn_ResNet50_conv5_body, FPN.USE_GN=False
Hi guys, thanks a lot for your interest!
This codebase is heavily based on Detectron.pytorch by Roy. In this codebase and the released configs, I used multi-scale training and a larger testing scale, as noted in the paper. This may be the main reason the baseline performs better. Some minor modifications I made may also contribute a little bit.
Both GN and SyncBN help PANet achieve better performance with the other settings kept the same; they help the network converge better. That's why we should try GN or SyncBN first. Currently I haven't compared GN and SyncBN in the same codebase, but I think SyncBN will achieve comparable or even better performance.
Hi @thangvubk, I use the BU path, ada pool, and ms train without GN, and test with MIN_SIZE_TEST: 1000, but only get 38.1 mAP. Is there something I missed?
@zimenglan-sysu-512 Just use the panet_R-50-FPN_1x_det_2fc.yaml config file and don't make any modifications.
Hi @thangvubk, did you try soft-NMS instead of standard NMS? If so, can you share the results here?
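(For anyone curious what soft-NMS changes: instead of discarding boxes that overlap the top-scoring box, it decays their scores by the overlap. A minimal NumPy sketch of the linear-decay variant from Bodla et al.; the function names and thresholds are illustrative, not this repo's code.)

```python
import numpy as np

def _iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def soft_nms_linear(boxes, scores, iou_thresh=0.3, score_thresh=0.001):
    """Return kept box indices; overlapping scores are decayed, not zeroed out."""
    scores = scores.astype(float).copy()
    idxs = list(range(len(scores)))
    keep = []
    while idxs:
        m = max(idxs, key=lambda i: scores[i])   # highest-scoring remaining box
        if scores[m] < score_thresh:
            break
        keep.append(m)
        idxs.remove(m)
        for i in idxs:
            iou = _iou(boxes[m], boxes[i])
            if iou > iou_thresh:
                scores[i] *= (1.0 - iou)         # linear decay instead of removal
    return keep
```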
@zimenglan-sysu-512 For bu_path + ada pool + ms, I didn't make any modifications; I just cloned the network and trained. Did you do the same thing?
I did not use this repo; I just added some PANet code to maskrcnn-benchmark (w/o GN).
@zimenglan-sysu-512 It is hard to say when you implement it in another repo. Usually it doesn't work as expected due to underlying implementation details. You can try adding GN to see if there is any improvement.
Thanks @thangvubk, I will try to add GN to the FPN and heads.
Hi @thangvubk, I added GN to the FPN and RoI head, but the performance only reaches 38.3%. Btw, during training, each GPU holds only 1 image due to GPU memory limits.
@zimenglan-sysu-512 I'm not sure where you went wrong. Btw, mmdet is planning to release PANet as well, see here.
Hi @thangvubk, with two images per GPU the performance reaches 38.8, which still has a small gap compared with the paper.
Hi @thangvubk, I guess the performance drops a little because I use 2fc instead of Xconv1fc.
@zimenglan-sysu-512 I also re-implemented PANet using maskrcnn-benchmark (2mlp without ms train, GN, or SyncBN), but the performance is worse than this repo's. If convenient, can you share your code with me? I just want to make sure there is nothing wrong with my code.