Xueyan Zou

49 comments of Xueyan Zou

Please refer to Mask2Former for panoptic segmentation dataset preparation: https://github.com/facebookresearch/Mask2Former. xxx.png is the panoptic ground truth.
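
For reference, a minimal sketch of how an RGB-encoded panoptic ground-truth PNG (the xxx.png above) can be decoded into segment ids, assuming the standard COCO panoptic encoding that the Mask2Former preparation scripts follow; the file path below is a placeholder:

```python
import numpy as np
from PIL import Image

# Load an RGB-encoded panoptic ground-truth image (placeholder path).
pan_png = np.array(Image.open("panoptic_gt/xxx.png"), dtype=np.uint32)

# COCO panoptic convention: segment id = R + 256 * G + 256^2 * B.
segment_ids = pan_png[..., 0] + 256 * pan_png[..., 1] + 256 * 256 * pan_png[..., 2]

# Each unique id corresponds to one segment listed in the matching JSON annotation.
print(np.unique(segment_ids))
```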

Sorry, I have never met this bug before; it may be caused by an nltk version problem.
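
If it helps, a quick sanity check, assuming the failure comes from a version mismatch or missing nltk data (the resource names below are the common ones, not necessarily the exact ones this repo needs):

```python
import nltk

# Check the installed version first; pinning a different nltk release may resolve it.
print(nltk.__version__)

# Download the tokenizer/tagger data that nltk-based text preprocessing commonly expects.
nltk.download("punkt")
nltk.download("averaged_perceptron_tagger")
```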

If you only want panoptic segmentation results, the X-Decoder model is no worse than SEEM. Feel free to try the script: https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once/blob/v1.0/inference/xdecoder/infer_panoseg.py

I will try to support SEEM directly : )

For open-set segmentation, please simply use X-Decoder; the inference script is at: https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once/blob/v1.0/inference/xdecoder/infer_panoseg.py

I currently do not have a test script for SEEM, but you could try modifying the X-Decoder one: https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once/tree/v1.0/inference/xdecoder

Our demo uses the DaViT-d5 backbone, which is different from Focal-L.

Thanks so much for the comments. Could you please provide some example images? Again, for referring segmentation, I highly suggest using X-Decoder instead of SEEM, as SEEM is ONLY...

Evaluation is done with 1 GPU because, if we concatenate images into a single batch, e.g. one image of [512, 1024] and another of [1024, 512], the concatenated batch would be [2,...
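
A small sketch of the issue, using made-up tensors with the shapes from the example above: stacking images of different sizes fails outright, and padding them to a common size is what the concatenated batch would require.

```python
import torch

img_a = torch.rand(3, 512, 1024)   # H=512,  W=1024
img_b = torch.rand(3, 1024, 512)   # H=1024, W=512

# Direct stacking fails because the spatial sizes differ.
try:
    batch = torch.stack([img_a, img_b])
except RuntimeError as e:
    print("stack failed:", e)

# To batch them anyway, both images must be padded to [1024, 1024], so the
# batch becomes [2, 3, 1024, 1024] and the padded regions have to be masked
# out; evaluating one image at a time per GPU avoids this padding entirely.
max_h = max(img_a.shape[1], img_b.shape[1])
max_w = max(img_a.shape[2], img_b.shape[2])
padded = torch.zeros(2, 3, max_h, max_w)
padded[0, :, :512, :1024] = img_a
padded[1, :, :1024, :512] = img_b
print(padded.shape)  # torch.Size([2, 3, 1024, 1024])
```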

> I got the same question.
>
> Friend, have you resolved the issue? I feel like the downloaded data...