
OpenMMLab Semantic Segmentation Toolbox and Benchmark.

Results 566 mmsegmentation issues

1. The dataset labels do not contain 255, so why does 255 appear in the prediction results? 2. Why does the prediction result for each image tile contain label text?

Thanks for your error report; we appreciate it a lot. **Checklist** 1. I have searched related issues but could not get the expected help. 2. The bug has not been...

I'm trying to replace the decoder head of the Swin Transformer, but I got a KeyError and I don't know how to solve it: ``` 2023-04-10 14:34:46,693 - mmseg - INFO...

According to the original paper, the decoder channel count is 256 for B0 and B1, and 768 for B2 through B5. However, in MMSegmentation this number is 256 for all models...

```
PS E:\pycharm\network\mmsegmentation> & D:/software/anaconda3/install/envs/mmlab/python.exe e:/pycharm/network/mmsegmentation/tools/analysis_tools/get_flops.py
e:\pycharm\mmsegmentation\mmseg\models\builder.py:36: UserWarning: ``build_loss`` would be deprecated soon, please use ``mmseg.registry.MODELS.build()``
  warnings.warn('``build_loss`` would be deprecated soon, please use '
e:\pycharm\mmsegmentation\mmseg\models\losses\cross_entropy_loss.py:249: UserWarning: Default ``avg_non_ignore`` is False, if...
```

Thanks for your contribution; we appreciate it a lot. The following instructions will help make your pull request healthier and get feedback more easily. If you do not understand...

https://github.com/open-mmlab/mmsegmentation/blob/b040e147adfa027bbc071b624bedf0ae84dfc922/configs/_base_/datasets/loveda.py#L10C2-L10C27 The original image size in LoveDA is 1024×1024, so why is img_scale set to (2048, 512) here?
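For background on that question: with `keep_ratio=True`, mmcv-style resizing caps the long edge at `max(img_scale)` and the short edge at `min(img_scale)`, and the smaller scale factor wins, so the 512 bound dominates for a square image. A hedged, library-free sketch of that semantics (an illustrative reimplementation, not the actual mmcv code):

```python
def rescale_size(old_size, scale):
    # Mimics mmcv's keep-ratio rescale: the long edge is bounded by
    # max(scale) and the short edge by min(scale); the smaller of the
    # two implied factors is applied to both dimensions.
    w, h = old_size
    factor = min(max(scale) / max(w, h), min(scale) / min(w, h))
    return int(w * factor + 0.5), int(h * factor + 0.5)

# A 1024x1024 LoveDA image under img_scale=(2048, 512):
print(rescale_size((1024, 1024), (2048, 512)))  # (512, 512)
```

So (2048, 512) does not upscale a 1024×1024 image; it shrinks it until the short edge fits 512.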

``` from mmengine.runner import Runner runner = Runner.from_cfg(cfg) ``` I am trying to train U-MixFormer. How do I enable multi-GPU training in my notebook?
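One common approach to the multi-GPU question above (a hedged sketch; the config path below is hypothetical): instead of calling `Runner.from_cfg` inside a notebook cell, launch the training script from a terminal with `torchrun`, which starts one worker process per GPU.

```shell
# Launch from a terminal, not a notebook cell: torchrun spawns one worker
# per GPU, and --launcher pytorch tells mmengine's Runner to attach to the
# distributed environment torchrun creates.
# NOTE: the config path is a placeholder; substitute your own config file.
torchrun --nproc_per_node=2 tools/train.py \
    configs/umixformer/your_config.py --launcher pytorch
```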

When I trained vit_vit-b16_mln_upernet_8xb2-80k_cag-512x512.py with the pretrained checkpoint "pretrain/jx_vit_base_p16_224-80ecf9dd.pth", I got the warning "Resize the pos_embed shape from torch.Size([1, 197, 768]) to torch.Size([1, 1025, 768])". Is it okay to ignore this...
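For context on that warning, a small arithmetic sketch (assuming the standard ViT-B/16 setup: 16×16 patches plus one class token) of where the two pos_embed lengths come from. The checkpoint was pretrained at 224×224, fine-tuning runs at 512×512, and MMSegmentation resizes the position embedding by interpolation, so the warning is expected rather than a sign of breakage:

```python
def num_tokens(img_size, patch_size=16):
    # ViT token count: one token per 16x16 patch, plus the class token.
    return (img_size // patch_size) ** 2 + 1

print(num_tokens(224))  # 197  -- matches the pretrained pos_embed length
print(num_tokens(512))  # 1025 -- matches the resized pos_embed length
```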

Thanks to the author for this work. I want to get the predicted probability of the image mask; I looked through earlier issues and found the same question, but I use a different version of mmcv...
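On the probability question above: segmentation models typically emit per-class logits for each pixel, and applying a softmax over the class dimension turns them into probabilities. The mmseg API call for retrieving raw logits varies by version, so this is only a minimal, library-free sketch of the math itself:

```python
import math

def softmax(logits):
    # Convert raw per-class scores into probabilities; subtract the max
    # before exponentiating for numerical stability.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Per-pixel class logits -> class probabilities (they sum to 1.0):
print(softmax([0.0, 0.0]))  # [0.5, 0.5]
```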