
Error when using the RTMDet instance segmentation model with SemiBaseDetector

Open zjhthu opened this issue 1 year ago • 5 comments

Describe the bug

I get the following error when using the RTMDet instance segmentation model with SemiBaseDetector for semi-supervised learning.

    main()                                                   
  File "tools/train.py", line 129, in main                                                                                                                                                                                                             
    runner.train()                                           
  File "/data/project/zjh/openmmlab/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1706, in train
    model = self.train_loop.run()  # type: ignore
  File "/data/project/zjh/openmmlab/lib/python3.8/site-packages/mmengine/runner/loops.py", line 96, in run                                                                                                                                             
    self.run_epoch()                                         
  File "/data/project/zjh/openmmlab/lib/python3.8/site-packages/mmengine/runner/loops.py", line 112, in run_epoch
    self.run_iter(idx, data_batch)                         
  File "/data/project/zjh/openmmlab/lib/python3.8/site-packages/mmengine/runner/loops.py", line 128, in run_iter
    outputs = self.runner.model.train_step(                                                                                
  File "/data/project/zjh/openmmlab/lib/python3.8/site-packages/mmengine/model/base_model/base_model.py", line 114, in train_step
    losses = self._run_forward(data, mode='loss')  # type: ignore
  File "/data/project/zjh/openmmlab/lib/python3.8/site-packages/mmengine/model/base_model/base_model.py", line 326, in _run_forward
    results = self(**data, mode=mode)                                                                                      
  File "/data/project/zjh/openmmlab/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data/project/zjh/mmdetection/mmdet/models/detectors/base.py", line 92, in forward
    return self.loss(inputs, data_samples)                   
  File "/data/project/zjh/mmdetection/mmdet/models/detectors/semi_base.py", line 89, in loss
    losses.update(**self.loss_by_pseudo_instances(
  File "/data/project/zjh/mmdetection/mmdet/models/detectors/semi_base.py", line 137, in loss_by_pseudo_instances
    losses = self.student.loss(batch_inputs, batch_data_samples)
  File "/data/project/zjh/mmdetection/mmdet/models/detectors/single_stage.py", line 78, in loss
    losses = self.bbox_head.loss(x, batch_data_samples)
  File "/data/project/zjh/mmdetection/mmdet/models/dense_heads/base_dense_head.py", line 123, in loss
    losses = self.loss_by_feat(*loss_inputs)
  File "/data/project/zjh/mmdetection/mmdet/models/dense_heads/rtmdet_ins_head.py", line 751, in loss_by_feat
    loss_mask = self.loss_mask_by_feat(mask_feat, flatten_kernels,
  File "/data/project/zjh/mmdetection/mmdet/models/dense_heads/rtmdet_ins_head.py", line 630, in loss_mask_by_feat
    pos_gt_masks = torch.cat(pos_gt_masks, 0)
RuntimeError: Sizes of tensors must match except in dimension 1. Got 256 and 249 (The offending index is 0)

It seems the pseudo masks generated by the teacher network are not compatible with the student network. I checked the pos_gt_masks variable: all masks are empty, but they have different spatial sizes:

[tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 161, 161), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 199, 199), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 162, 162), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 198, 198), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 186, 186), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 163, 163), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 133, 133), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 185, 185), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 153, 153), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 129, 129), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 186, 186), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 187, 187), 
dtype=torch.bool), tensor([], device='cuda:0', size=(0, 219, 219), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 141, 141), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 128, 128), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 227, 227), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 224, 224), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 212, 212), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 178, 178), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 195, 195), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 210, 210), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 133, 133), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 256, 256), dtype=torch.bool), tensor([], device='cuda:0', size=(0, 141, 141), dtype=torch.bool)]
> /data/project/zjh/mmdetection/mmdet/models/dense_heads/rtmdet_ins_head.py(636)loss_mask_by_feat()
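The failure is easy to reproduce in isolation: concatenation along dim 0 still requires all other dimensions to match, even when every tensor holds zero elements. A minimal sketch of the same check using NumPy (the shapes are taken from the dump above; the variable names are illustrative, not mmdet's):

```python
import numpy as np

# Two empty pseudo-mask stacks whose spatial sizes disagree,
# mirroring the (0, 256, 256) vs (0, 249, 249) tensors above.
masks_a = np.zeros((0, 256, 256), dtype=bool)
masks_b = np.zeros((0, 249, 249), dtype=bool)

try:
    np.concatenate([masks_a, masks_b], axis=0)
    mismatch = False
except ValueError as err:
    # Fails even though both stacks hold zero masks: every dimension
    # except the concatenation axis must still agree.
    mismatch = True
    print("concatenate failed:", err)
```

Padding all images to a common size makes the trailing dimensions equal, which is why the Pad workaround mentioned later in this thread resolves the error.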

Reproduction

I will upload the config if needed. The experiments are based on a customized dataset.

Environment

sys.platform: linux
Python: 3.8.10 (default, Mar 13 2023, 10:26:41) [GCC 9.4.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0: Tesla T4
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.2, V11.2.152
GCC: x86_64-linux-gnu-gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
PyTorch: 1.9.0+cu102
PyTorch compiling details: PyTorch built with:

  • GCC 7.3
  • C++ Version: 201402
  • Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications

  • Intel(R) MKL-DNN v2.1.2 (Git Hash 98be7e8afa711dc9b66c8ff3504129cb82013cdb)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • NNPACK is enabled
  • CPU capability usage: AVX2
  • CUDA Runtime 10.2
  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70
  • CuDNN 7.6.5
  • Magma 2.5.2
  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=10.2, CUDNN_VERSION=7.6.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.9.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

TorchVision: 0.10.0+cu102
OpenCV: 4.7.0
MMEngine: 0.7.2
MMDetection: 3.0.0+ecac3a7

zjhthu avatar Apr 20 '23 02:04 zjhthu

I ran another experiment that performs only the object detection task, and no error was encountered. I will check the difference between the two tasks.

zjhthu avatar Apr 20 '23 03:04 zjhthu

The root cause is this line: RTMDet generates masks using img_shape. My data augmentation config comes from semi_coco_detection, which does not pad the image, so img_shape equals the resized shape. After adding the pad operation dict(type='Pad', size=image_size, pad_val=dict(img=(pad_val, pad_val, pad_val))), the error disappears.
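For readers hitting the same issue, here is a sketch of how the Pad step can be slotted into the pipeline. The image_size and pad_val values are placeholders (not from my actual config), and the transform names follow MMDetection 3.x conventions:

```python
# Hypothetical values; substitute your own training resolution and pad colour.
image_size = (640, 640)
pad_val = 114

sup_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
    dict(type='Resize', scale=image_size, keep_ratio=True),
    # Pad every image to a fixed size so that img_shape is constant.
    # RTMDet-Ins crops masks at img_shape, so a constant shape lets
    # torch.cat combine masks coming from different images.
    dict(type='Pad', size=image_size,
         pad_val=dict(img=(pad_val, pad_val, pad_val))),
    dict(type='PackDetInputs'),
]
```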

zjhthu avatar Apr 20 '23 06:04 zjhthu

But I am wondering why there was no error in the detection-only experiment, where I also did not pad the image. Does MMDet pad the image itself?

zjhthu avatar Apr 20 '23 06:04 zjhthu

RTMDet's instance segmentation head requires a fixed input image size, while the image sizes in the semi-supervised pipeline are random. At present, the semi-supervised learning components do not support instance segmentation.
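The size mismatch can be seen with simple arithmetic: a keep-ratio resize yields a different img_shape per image, while a trailing pad makes it constant. A small illustrative sketch (the helper functions and numbers are made up for illustration, not taken from mmdet or this issue's config):

```python
# Keep-ratio resize: scale so the longest edge hits long_edge.
def resize_keep_ratio(h, w, long_edge=1024):
    scale = long_edge / max(h, w)
    return round(h * scale), round(w * scale)

# Pad up to a fixed target size (assumes dims do not exceed it).
def pad_to(h, w, size=(1024, 1024)):
    return max(h, size[0]), max(w, size[1])

raw = [(480, 640), (500, 375), (768, 1024)]
shapes = [resize_keep_ratio(h, w) for h, w in raw]
print(shapes)   # varying img_shape without padding
padded = [pad_to(h, w) for h, w in shapes]
print(padded)   # uniform img_shape after padding
```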

Czm369 avatar Jun 26 '23 16:06 Czm369

Any progress on semi-supervised learning for instance segmentation (e.g. RTMDet)? Thanks.

NIKEmissa avatar Feb 04 '24 06:02 NIKEmissa