CoinCheung
Please check here: https://github.com/CoinCheung/BiSeNet/blob/f9231b7c971413e6ebdfcd961fbea53417b18851/lib/cityscapes_cv2.py#L67 If your label values are already within {0, 1, 2}, you need to bypass those lines that remap label values.
Have you checked every label image to make sure its values are within [0, 2]?
I mean, have you checked them one by one to make sure each picture's values are within [0, 2]? Better to try with `cv2.imread(pth, 0)`: https://github.com/CoinCheung/BiSeNet/blob/f9231b7c971413e6ebdfcd961fbea53417b18851/lib/base_dataset.py#L53
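A minimal sketch of such a check, assuming 3 classes and a hypothetical `label_paths` list of your label image paths (the function itself only needs NumPy; the commented loop shows the `cv2.imread(pth, 0)` usage):

```python
import numpy as np

def invalid_values(label, n_classes=3):
    """Return the label values that fall outside [0, n_classes - 1]."""
    vals = np.unique(label)
    return vals[vals >= n_classes].tolist()

# Usage over a dataset (hypothetical `label_paths` list):
# import cv2
# for pth in label_paths:
#     bad = invalid_values(cv2.imread(pth, 0))  # read as single-channel
#     if bad:
#         print(pth, 'has out-of-range values:', bad)
```

If any image reports out-of-range values (e.g. 255), you either need the label-mapping lines or need to fix the label images themselves.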
These are PyTorch native operators, which are unlikely to have memory problems. What batch size did you use to train your model?
Hi, I used the trainIds following this: https://github.com/mcordts/cityscapesScripts/blob/aeb7b82531f86185ce287705be28f452ba3ddbb8/cityscapesscripts/helpers/labels.py#L64 There it says the ignored trainId is 255.
That is a requirement of the dataset; you may check [here](https://github.com/CoinCheung/BiSeNet/blob/81a88852fa50eee96738d72f4dbfa2de108bf3e9/bisenetv2/cityscapes_cv2.py#L17) for the category mapping.
There are 30+ categories in the original images, but we only use 19 of them for training and evaluation. The remaining original categories are thus mapped to...
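The mapping above can be sketched with a lookup table: raw ids that are trained on get a trainId in [0, 18], and everything else gets the ignore index 255. The `lb_map` below is a hypothetical, truncated excerpt for illustration only; the real mapping is in the `cityscapes_cv2.py` file linked above.

```python
import numpy as np

IGNORE_IDX = 255

# Hypothetical excerpt of the raw-id -> trainId mapping (truncated):
lb_map = {7: 0, 8: 1, 11: 2}

def convert_labels(label, lb_map=lb_map):
    """Map raw label ids to trainIds; unmapped ids become IGNORE_IDX."""
    lut = np.full(256, IGNORE_IDX, dtype=np.uint8)
    for raw_id, train_id in lb_map.items():
        lut[raw_id] = train_id
    return lut[label]  # vectorized remap of the whole label image
```

Ids absent from the mapping (e.g. the unused categories) end up as 255 and are skipped by the loss and the evaluation.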
What result is wrong? Would you be more specific?
Have you compared the values?
This should make no difference, since the input shape is (1, 1).