mmsegmentation
Expected 42 mIoU, but got 35 mIoU
I'm not using the mmsegmentation pipeline, but I run a pretrained model that came from mmsegmentation. However, the performance did not come out: it was supposed to reach 42 mIoU, but I got 35 mIoU. How do you evaluate?
I downloaded the pretrained model from https://github.com/open-mmlab/mmsegmentation/tree/master/configs/upernet (upernet_r101_512x512_80k_ade20k_20200614_185117-32e4db94.pth).
Please explain in detail. In my case, only normalization is applied to the ADE20K validation images before they are fed into the model. I then resize seg_logit to the original image size and make predictions through argmax.
Can you tell me specifically how the images are processed during evaluation and how seg_logit is processed?
I think the performance drop is caused by removing the resize step: the pretrained models in mmseg follow a data augmentation setting that includes resize, and we test them with resize as well.
https://github.com/open-mmlab/mmsegmentation/blob/dd42fa8d0125632371a41a87c20485494c973535/configs/_base_/datasets/ade20k.py#L10
https://github.com/open-mmlab/mmsegmentation/blob/dd42fa8d0125632371a41a87c20485494c973535/configs/_base_/datasets/ade20k.py#L23
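For reference, the test pipeline in the linked `ade20k.py` looks approximately like this (paraphrased from the config, not copied verbatim):

```python
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(2048, 512),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
```

Note that `Resize` with `keep_ratio=True` runs before `Normalize`, which is exactly the step the question above skips.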
Thanks @MeowZheng. So when validating:
1. Resize(img_scale=(2048, 512), keep_ratio=True)
2. Normalize(mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
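The two steps above can be sketched in plain numpy. This is an illustration, not the mmseg/mmcv API: `keep_ratio_resize_shape` and `normalize` are hypothetical helper names, and the scale-factor formula mirrors how a rescale-style keep-ratio resize is usually computed (long edge capped at 2048, short edge capped at 512):

```python
import numpy as np

def keep_ratio_resize_shape(h, w, img_scale=(2048, 512)):
    """Target (h, w) for a keep-ratio resize: the scale factor is
    min(long_edge_cap / long_side, short_edge_cap / short_side)."""
    long_cap, short_cap = max(img_scale), min(img_scale)
    scale = min(long_cap / max(h, w), short_cap / min(h, w))
    return int(h * scale + 0.5), int(w * scale + 0.5)

def normalize(img, mean, std):
    """Channel-wise (img - mean) / std on a float HxWx3 RGB array."""
    img = img.astype(np.float32)
    return (img - np.array(mean, np.float32)) / np.array(std, np.float32)

# ADE20K normalization constants from the config above
MEAN = [123.675, 116.28, 103.53]
STD = [58.395, 57.12, 57.375]
```

For example, a 300x400 image would be scaled by 512/300 to roughly 512x683 before normalization; the actual pixel resampling (bilinear by default) is then done at that target shape.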
The model output is (batch_size, num_classes, h, w), where h and w are the resized shape, and we then resize the model output (i.e., seg_logit) to (batch_size, num_classes, H, W), the original shape, and evaluate?
Is what I said correct?
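A minimal numpy sketch of that resize-then-argmax step, assuming half-pixel-center bilinear interpolation (align_corners=False semantics, which the UPerNet ADE20K configs use; mmseg itself does this with `F.interpolate` inside the model, and `bilinear_resize`/`predict` are illustrative names):

```python
import numpy as np

def bilinear_resize(x, out_h, out_w):
    """Bilinear resize of a (C, h, w) array to (C, out_h, out_w),
    sampling at half-pixel centers (align_corners=False semantics)."""
    c, h, w = x.shape
    ys = np.clip((np.arange(out_h) + 0.5) * h / out_h - 0.5, 0, h - 1)
    xs = np.clip((np.arange(out_w) + 0.5) * w / out_w - 0.5, 0, w - 1)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[None, :, None]; wx = (xs - x0)[None, None, :]
    top = x[:, y0][:, :, x0] * (1 - wx) + x[:, y0][:, :, x1] * wx
    bot = x[:, y1][:, :, x0] * (1 - wx) + x[:, y1][:, :, x1] * wx
    return top * (1 - wy) + bot * wy

def predict(seg_logit, orig_h, orig_w):
    """Resize logits back to the original image size, then argmax over
    the class dimension to get an (H, W) label map."""
    return bilinear_resize(seg_logit, orig_h, orig_w).argmax(axis=0)
```

The important detail is the order: interpolate the logits to (H, W) first and take the argmax afterwards, rather than resizing a hard label map.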
Hi @MeowZheng,
I've done the resize but still don't get the expected performance.
Is it correct to get the per-class IoU list through the mean_iou code and then take the nanmean to get the overall mIoU?
And is it correct to then average over all validation images?
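For what it's worth, mmseg's mean_iou accumulates per-class intersection and union areas over the whole validation set first, and only then computes per-class IoU and its nanmean; averaging per-image mIoU values gives a different number. A minimal numpy sketch of that accumulation (function names are illustrative, not the mmseg API; note also that ADE20K in mmseg uses reduce_zero_label, so pixels with the ignored label must be masked out, here via a hypothetical `ignore_index=255`):

```python
import numpy as np

def intersect_and_union(pred, label, num_classes, ignore_index=255):
    """Per-class intersection and union areas for one image."""
    mask = label != ignore_index
    pred, label = pred[mask], label[mask]
    inter = pred[pred == label]
    area_i = np.bincount(inter, minlength=num_classes)[:num_classes]
    area_p = np.bincount(pred, minlength=num_classes)[:num_classes]
    area_l = np.bincount(label, minlength=num_classes)[:num_classes]
    return area_i, area_p + area_l - area_i

def dataset_miou(preds, labels, num_classes, ignore_index=255):
    """Accumulate areas over the whole dataset first, then take per-class
    IoU and nanmean -- NOT a per-image average of mIoU values."""
    total_i = np.zeros(num_classes, dtype=float)
    total_u = np.zeros(num_classes, dtype=float)
    for p, l in zip(preds, labels):
        i, u = intersect_and_union(p, l, num_classes, ignore_index)
        total_i += i
        total_u += u
    iou = total_i / np.where(total_u == 0, np.nan, total_u)
    return float(np.nanmean(iou))  # classes never seen become NaN, skipped
```

So the answer to the second question would be no: do not average per-image results; sum intersections and unions across all validation images before dividing.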