DotWang
@lywang76 Usually, people don’t care whether semantic segmentation datasets include a background class, since both kinds can be used to train the model. Here I show two examples: CV field: Dual Attention...
@lywang76 1. ADE20K is manually annotated by experts. Theoretically, you can manually annotate any classes you want. SAMRS is automatically generated by transformation. Its categories depend on the original RS...
@lywang76 Vegetation, road and water are common RS classes. It is normal for LoveDA to annotate these categories. But this does not mean that trees and water must appear on...
@lywang76 I understand what you mean. As far as I know, nobody says that semantic segmentation must simultaneously contain stuff and thing classes; it only involves pixel-level classification. Instance segmentation will further...
@lywang76 I have emphasized that: ### If you insist on identifying trees. You can manually transform other object detection datasets that have labeled trees (if you can find one) using...
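A minimal sketch of this kind of box-to-mask conversion with SAM, assuming the `segment_anything` package and a local ViT-H checkpoint; the image file, the "tree" box coordinates, and the class id below are hypothetical placeholders, not the released SAMRS pipeline:

```python
# Sketch: turn a detection box for "tree" into a semantic mask using SAM.
# Assumptions: segment-anything is installed, sam_vit_h_4b8939.pth is available,
# and "scene.png" / the box coordinates are hypothetical examples.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("scene.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Hypothetical "tree" box from an object detection dataset: (x_min, y_min, x_max, y_max)
tree_box = np.array([120, 80, 260, 210])
masks, scores, _ = predictor.predict(box=tree_box, multimask_output=False)

# Write the binary mask into a semantic label map under a chosen "tree" class id
TREE_CLASS_ID = 1  # hypothetical id
label_map = np.zeros(image.shape[:2], dtype=np.uint8)
label_map[masks[0]] = TREE_CLASS_ID
```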
@lywang76 In addition, this dataset was initially built for pretraining. In that case, not all pixels need to be annotated. In our experiments, the models pretrained on SAMRS have...
@Seth-Park We use the default setting.
@lywang76 The released labels are semantic labels, which distinguish different categories rather than different object instances. Figure 6 is a bar plot; do you mean Figure...
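To make the distinction concrete, here is a minimal sketch of inspecting such a semantic label map; the file name, category id, and category meaning are hypothetical, not taken from the released SAMRS files:

```python
# Sketch: a semantic label map stores one category id per pixel, with no instance ids.
# "label.png" and ship_id are hypothetical placeholders.
import numpy as np
from PIL import Image

label = np.array(Image.open("label.png"))   # H x W array of category ids
print("unique category ids:", np.unique(label))

ship_id = 3  # hypothetical category id
ship_pixels = (label == ship_id)
print("pixels labeled as this category:", ship_pixels.sum())
# There is no per-object id here: separating individual objects would require
# instance labels (e.g., connected components or the original detection boxes).
```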
@lywang76 OK, the code will be released in a few days.
@lywang76 The code for generating the SAMRS dataset has been released; thanks for your attention.