
Question about the reported performance

Open chenxiaoyu523 opened this issue 6 years ago • 7 comments

I noticed that the labels used in eval.py of cityscapes.bisenet are downsampled, and I reproduced your result of 74 mIoU for bisenet.r18.speed with the 8x-downsampled labels. If I set gt_downsample=1, the performance drops. I want to know whether the performances you posted are based on the low-scale labels?

chenxiaoyu523 avatar Feb 26 '19 10:02 chenxiaoyu523

Same question here. How can we reproduce your results with ResNet-18 (78 mIoU)?

lxtGH avatar Mar 04 '19 11:03 lxtGH

@chenxiaoyu523 Yes. In the speed experiments we use low-scale labels for fast inference, as mentioned in our paper.

yu-changqian avatar Mar 05 '19 07:03 yu-changqian
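To make the low-scale evaluation concrete, here is a minimal, hypothetical sketch of what scoring against 8x-downsampled labels looks like. This is not TorchSeg's actual eval.py; the `miou` helper, the class count of 19 (Cityscapes), and the nearest-neighbour `[::8, ::8]` subsampling are illustrative assumptions corresponding to a `gt_downsample=8` style setting.

```python
import numpy as np

def miou(pred, gt, num_classes, ignore_label=255):
    """Confusion-matrix mean IoU over valid (non-ignored) pixels."""
    valid = gt != ignore_label
    p, g = pred[valid], gt[valid]
    hist = np.bincount(num_classes * g + p,
                       minlength=num_classes ** 2).reshape(num_classes, num_classes)
    inter = np.diag(hist)
    union = hist.sum(0) + hist.sum(1) - inter
    iou = inter[union > 0] / union[union > 0]
    return iou.mean()

# A full-resolution Cityscapes-sized label map vs. the 8x-downsampled
# view that a gt_downsample=8 setting would effectively score against.
gt = np.random.randint(0, 19, size=(1024, 2048))
gt_small = gt[::8, ::8]  # nearest-neighbour subsampling -> 128 x 256
```

Evaluating predictions at 1/8 resolution against `gt_small` avoids upsampling the network output back to full resolution, which is why it pairs with the speed-oriented experiments.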

@lxtGH Based on that setting and the pre-trained model, you can reproduce the performance. Because our repo currently contains both a distributed and a non-distributed training method, the settings may be a little confusing. I will drop the non-distributed method, provide a pure distributed training setting for reproduction, and make the pre-trained models available in the next version.

yu-changqian avatar Mar 05 '19 07:03 yu-changqian

@chenxiaoyu523 Can you please provide details about the settings you used to reproduce the 74 mIoU result on Cityscapes with bisenet.r18.speed?

cpapaionn avatar Mar 21 '19 11:03 cpapaionn

@cpapaionn I guess you changed batch_size in config.py. I tested the code with different batch sizes and got 68 mIoU with batch size 4, 72 with 8, and 74.8 with 16. If you only have one GPU, you can try gradient accumulation: https://discuss.pytorch.org/t/how-to-implement-accumulated-gradient-in-pytorch-i-e-iter-size-in-caffe-prototxt/2522

chenxiaoyu523 avatar Mar 21 '19 12:03 chenxiaoyu523

@chenxiaoyu523 Yes, I did change the batch size to 4 since I only used a single GPU. I'll try this, thank you very much!

cpapaionn avatar Mar 21 '19 12:03 cpapaionn

> @chenxiaoyu523 Yes. In the speed experiments, we use the low-scale labels for the fast inference speed, which is mentioned in our paper.

Hi @ycszen! Do you mean you used labels at 1/8 scale to calculate mIoU? Could you also kindly point out where this is mentioned in your paper? Thank you very much!

chenwydj avatar Jul 31 '19 21:07 chenwydj