PSPNet
Ground truth and prediction labels mismatch
I am using the ground truth data downloaded from the Cityscapes website:
https://www.cityscapes-dataset.com/
The filenames of the ground truth annotations from Cityscapes end in "gtFine_labelIds", while the evaluation file for the validation set looks for annotations ending in "gtFine_labelTrainIds". Where did you get those for the validation set?
You were evidently using different class indices than the ones that can be downloaded from Cityscapes now. For example, the sky class has the value 23 in the Cityscapes ground truth, while the PSP-predicted label for sky is 10, and the same holds for all classes.
When running the eval_acc function, the computed performance metrics are therefore obviously wrong. Do you have a remapping from the current Cityscapes labels to the ones you were using?
OK, I have found the answer in the cityscapesScripts repo; check this file for the mappings between class labels:
https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/helpers/labels.py
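For anyone who lands here and just needs the remapping in code, here is a minimal sketch using the labels list from that file. It assumes cityscapesscripts is installed (pip install cityscapesscripts); the filename below is only an example:

```python
import numpy as np
from PIL import Image
from cityscapesscripts.helpers.labels import labels

# Lookup table: labelId -> trainId; anything not mapped stays at 255 (ignore).
lut = np.full(256, 255, dtype=np.uint8)
for label in labels:
    if 0 <= label.trainId < 255:  # skip ignored classes (trainId 255 or -1)
        lut[label.id] = label.trainId

gt = np.array(Image.open("frankfurt_000000_000294_gtFine_labelIds.png"))
gt_train = lut[gt]  # e.g. sky: labelId 23 -> trainId 10
```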
It would be useful to have a flag that converts the grayscale predictions to labelIds instead of labelTrainIds before saving them.
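A rough sketch of what such a flag could do, using the trainId2label dict from the same labels.py (the filenames here are placeholders):

```python
import numpy as np
from PIL import Image
from cityscapesscripts.helpers.labels import trainId2label

# Lookup table: trainId -> labelId; ignore pixels (255) fall back to 0 ("unlabeled").
lut = np.zeros(256, dtype=np.uint8)
for train_id, label in trainId2label.items():
    if 0 <= train_id < 255:
        lut[train_id] = label.id

pred = np.array(Image.open("my_prediction_trainIds.png"))
Image.fromarray(lut[pred]).save("my_prediction_labelIds.png")
```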
Hey,
I stumbled upon the same problem and found a handy script that converts the gt annotations to the needed "gtFine_labelTrainIds" format: https://github.com/mcordts/cityscapesScripts/blob/master/cityscapesscripts/preparation/createTrainIdLabelImgs.py
Just export CITYSCAPES_DATASET=<path to dataset root> and execute the script.
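If you'd rather trigger the conversion from Python, something like this should also work (a sketch; it assumes the script exposes a main() entry point and reads CITYSCAPES_DATASET from the environment, which is how it looks in the repo):

```python
import os
os.environ["CITYSCAPES_DATASET"] = "/path/to/cityscapes"  # dataset root (placeholder)

from cityscapesscripts.preparation import createTrainIdLabelImgs
createTrainIdLabelImgs.main()  # writes *_gtFine_labelTrainIds.png next to the annotations
```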
pjohh's solution works, thanks.
The same problem exists for the ADE20K dataset, but I couldn't find a file explaining the relation between train_id and id. Does anybody know how to solve this?
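In case it helps anyone: for the ADE20K scene-parsing annotations the relation appears to be a plain offset. The annotation ids run 1..150 with 0 meaning "unlabeled", while a 150-class model predicts 0..149, i.e. train_id = id - 1. A minimal sketch under that assumption (the filename is only an example):

```python
import numpy as np
from PIL import Image

gt = np.array(Image.open("ADE_val_00000001.png")).astype(np.int16)
gt_train = gt - 1              # ids 1..150 -> train ids 0..149
gt_train[gt_train < 0] = 255   # 0 ("unlabeled") -> ignore value 255
gt_train = gt_train.astype(np.uint8)
```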