How to get the instance segmentation of Mask2Former after training the model on COCO?
Hi, I use the COCO pre-trained model (Swin-Tiny from the model zoo) to train on COCO for instance segmentation. But when I use the following command: python test.py --show-dir, I only get a segmentation result.
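(For context, my command follows the usual mmdetection tools/test.py pattern; the sketch below uses placeholder config/checkpoint paths rather than my exact ones, assuming mmdetection 2.x.)

```shell
# Sketch of the test invocation with --show-dir (mmdetection 2.x tools/test.py).
# The config and checkpoint paths are placeholders.
python tools/test.py \
    configs/mask2former/mask2former_swin-t-p4-w7-224_lsj_8x2_50e_coco.py \
    work_dirs/mask2former_swin-t-p4-w7-224_lsj_8x2_50e_coco/latest.pth \
    --eval bbox segm \
    --show-dir work_dirs/vis
```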
However, if I test directly with the pre-trained model from the model zoo, I get the instance segmentation result.
So, what should I do to get the instance segmentation result after fine-tuning? Thanks.
Sorry, I don't quite get your question. The pre-trained model should do the same thing as the fine-tuned model. Could you provide more information?
My command is python train.py, and the config is "mask2former_swin-t-p4-w7-224_lsj_8x2_50e_coco.py".
I also use "mask2former_swin-t-p4-w7-224_lsj_8x2_50e_coco.pth" as the starting checkpoint when training on COCO.
"mask2former_swin-t-p4-w7-224_lsj_8x2_50e_coco.pth" is downloaded from your GitHub.
Then I test with "latest.pth", using the --show-dir option.
The result is a segmentation result in which every person has the same color.
But if I directly test with "mask2former_swin-t-p4-w7-224_lsj_8x2_50e_coco.pth" without training, I get an instance segmentation result in which every person has a different color.
I want to get the instance segmentation result.
So I want to ask which step is wrong, thanks!
Every object of the same class shares the same color because we define a PALETTE for each class in the dataset. So the behavior of the fine-tuned model is normal. But the behavior of the pre-trained model, and the difference between the two models, is weird. I will test it as soon as possible.
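To illustrate the palette point (a minimal standalone sketch, not mmdetection's actual visualization code; the class indices and colors are made up):

```python
import numpy as np

# Per-class palette: one fixed RGB color per class index (made-up values).
PALETTE = {0: (220, 20, 60),   # person
           1: (119, 11, 32)}   # bicycle

def color_by_class(labels):
    """Class-based coloring: every instance of a class gets the same color."""
    return [PALETTE[label] for label in labels]

def color_by_instance(labels, seed=0):
    """Instance-based coloring: every instance gets its own random color."""
    rng = np.random.default_rng(seed)
    return [tuple(int(c) for c in rng.integers(0, 256, size=3)) for _ in labels]

labels = [0, 0, 1]  # two persons and one bicycle
print(color_by_class(labels))     # the two persons share one color
print(color_by_instance(labels))  # three distinct colors
```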