TorchSeg
Question about the performances
I noticed that the labels you use in eval.py of cityscapes.bisenet are downsampled, and I reproduced your result of 74 mIoU for bisenet.r18.speed with the 8x-downsampled labels. If I set gt_downsample=1, the performance drops. I want to know whether the performances you post are based on the low-scale labels?
The same question: how can I reproduce your results with resnet18 (78 mIoU)?
@chenxiaoyu523 Yes. In the speed experiments, we use the low-scale labels for fast inference, which is mentioned in our paper.
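For readers unsure what "low-scale labels" means in practice, here is a minimal sketch of scoring predictions against ground truth downsampled by a `gt_downsample` factor. It is not the repo's actual eval.py; the strided nearest-neighbour downsample and the histogram-based mIoU below are illustrative assumptions.

```python
import numpy as np

def downsample_label(label, gt_downsample=8):
    # Strided slicing is a cheap nearest-neighbour downsample that keeps class ids intact.
    if gt_downsample == 1:
        return label
    return label[::gt_downsample, ::gt_downsample]

def fast_hist(pred, gt, num_classes=19, ignore_index=255):
    # Confusion histogram over valid pixels; Cityscapes uses 19 classes and ignore id 255.
    mask = gt != ignore_index
    return np.bincount(
        num_classes * gt[mask].astype(int) + pred[mask].astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)

def mean_iou(hist):
    # Per-class IoU = TP / (TP + FP + FN), averaged over classes.
    iou = np.diag(hist) / (hist.sum(1) + hist.sum(0) - np.diag(hist) + 1e-10)
    return np.nanmean(iou)
```

With gt_downsample=8, the prediction only needs to be produced (or resized) at 1/8 of the input resolution before being compared, which is why this setting is tied to the speed experiments.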
@lxtGH With that setting and the pre-trained model, you can reproduce the performance. Because our repo currently contains both distributed and non-distributed training methods, the settings may be a little confusing. I will drop the non-distributed training method, provide a pure distributed-training setting for reproduction, and make the pre-trained models available in the next version.
@chenxiaoyu523 Can you please provide details about the settings you used to reproduce the 74 mIoU result on Cityscapes with bisenet.r18.speed?
@cpapaionn I guess you changed the batch_size in config.py. I tested the code with different batch_size settings and got 68 mIoU with batch_size 4, 72 with 8, and 74.8 with 16. If you only have one GPU, you can try gradient accumulation (see the sketch below): https://discuss.pytorch.org/t/how-to-implement-accumulated-gradient-in-pytorch-i-e-iter-size-in-caffe-prototxt/2522
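A minimal sketch of the gradient-accumulation trick from that link, emulating batch_size=16 with micro-batches of 4 on a single GPU. The toy model, loss, and data below are placeholders, not TorchSeg code.

```python
import torch
import torch.nn as nn

# Toy stand-ins; in the real setup these would be BiSeNet and the Cityscapes loader.
model = nn.Conv2d(3, 19, kernel_size=1)
criterion = nn.CrossEntropyLoss(ignore_index=255)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)

accum_steps = 4  # 4 micro-batches of 4 images ~ effective batch size 16
optimizer.zero_grad()
for step in range(100):
    images = torch.randn(4, 3, 64, 64)                      # micro-batch of 4 images
    labels = torch.randint(0, 19, (4, 64, 64))
    loss = criterion(model(images), labels) / accum_steps    # scale so accumulated grads match one big batch
    loss.backward()                                           # gradients accumulate until optimizer.step()
    if (step + 1) % accum_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```

One caveat: BatchNorm statistics are still computed per micro-batch, so accumulation is not a perfect substitute for a true batch of 16.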
@chenxiaoyu523 Yes, I did change the batch size to 4 as I only used a single GPU. I'll try this, thank you very much!
> @chenxiaoyu523 Yes. In the speed experiments, we use the low-scale labels for fast inference, which is mentioned in our paper.
Hi @ycszen! Did you mean you used labels at 1/8 scale to calculate mIoU? Also, could you please point out where you mention this in your paper? Thank you very much!