EMSANet
Cityscapes Visualization
Dear author,
I conducted training using the Cityscapes dataset and attempted to visualize the results using the "inference_samples.py" script. However, I encountered an issue with the instance segmentation output, wherein a single object appeared to be split into multiple distinct objects. This problem also had an impact on the semantic (panoptic) segmentation result.
How can I properly visualize the instance segmentation for Cityscapes?
Thank you so much for your time.
All foreground pixels (pixels that belong to thing semantic classes) are assigned to instance centers. However, your semantic segmentation seems to be pretty bad: there is a lot of noise in the lower half of the image, so I guess something is wrong. Some questions:
- which input resolution did you use?
- did you change semantic class weighting to linear? (very important for Cityscapes - see ESANet parameters here)
- appm as context module might further help with varying input resolutions
To further reduce bad instance assignments, you can have a look at the --instance-offset-distance-threshold parameter here. Especially for real-world applications, it might be useful to assign such pixels to void.
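To illustrate the idea, here is a minimal NumPy sketch of the center-based grouping described above: each foreground pixel is shifted by its predicted offset and assigned to the nearest instance center, and an optional distance threshold assigns far-away pixels to void. This is a simplified illustration, not the actual postprocessing code from the repository; the function name and signature are made up for this example.

```python
import numpy as np

def assign_instances(centers, offsets, fg_mask, distance_threshold=None):
    """Assign each foreground pixel to its closest predicted instance center.

    centers: (K, 2) array of (y, x) instance center coordinates
    offsets: (2, H, W) predicted offsets pointing from each pixel to its center
    fg_mask: (H, W) bool mask of thing-class pixels
    distance_threshold: if set, pixels whose shifted position is farther than
        this from every center get id 0 (void) instead of a bad assignment
    """
    h, w = fg_mask.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # shift every pixel by its predicted offset -> predicted center location
    pred_y = ys + offsets[0]
    pred_x = xs + offsets[1]

    instance_ids = np.zeros((h, w), dtype=np.int32)  # 0 = void / stuff
    if len(centers) == 0:
        return instance_ids

    # distance from each pixel's shifted position to every center, (K, H, W)
    dists = np.sqrt(
        (pred_y[None] - centers[:, 0, None, None]) ** 2
        + (pred_x[None] - centers[:, 1, None, None]) ** 2
    )
    nearest = np.argmin(dists, axis=0)
    instance_ids[fg_mask] = nearest[fg_mask] + 1  # instance ids start at 1

    if distance_threshold is not None:
        # pixels too far from every center are assigned void
        min_dist = np.min(dists, axis=0)
        instance_ids[fg_mask & (min_dist > distance_threshold)] = 0
    return instance_ids
```

With noisy semantics or badly scaled offsets, many pixels end up far from every center, which is exactly where the threshold helps.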
However, the panoptic instances in your example image look strange, there is something wrong: there is only one center for the center car, but the assignment is split into multiple instances.
I remember a similar output from when we messed up undoing the offset normalization in nicr_mt_scene_analysis/model/postprocessing/instance.py
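For reference, undoing that normalization is a tiny step, but getting it wrong scales all offsets and splits objects exactly as described. A minimal sketch, assuming the offsets were normalized per dimension by the image height and width during training (the function name is made up; check the actual convention in instance.py):

```python
import numpy as np

def denormalize_offsets(offsets_norm, height, width):
    """Scale normalized offsets back to pixel units.

    Assumes the y component was divided by the image height and the
    x component by the image width during training, so we multiply
    each component back by the matching dimension.
    """
    offsets = offsets_norm.copy()
    offsets[0] *= height  # y offsets back to pixels
    offsets[1] *= width   # x offsets back to pixels
    return offsets
```

If the two dimensions are swapped here, or the multiplication is skipped, the shifted pixels no longer land near their true center, and the nearest-center assignment fragments a single object into several instances.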