
[Bug] Two errors occur when running GlidingVertex (AssertionError) and S2ANet (KeyError) respectively

Open · Danny-C-Auditore opened this issue 1 year ago · 0 comments

Prerequisite

Task

I'm using the official example scripts/configs for the officially supported tasks/models/datasets.

Branch

1.x branch https://github.com/open-mmlab/mmrotate/tree/1.x

Environment

sys.platform: linux
Python: 3.8.17 | packaged by conda-forge | (default, Jun 16 2023, 07:06:00) [GCC 11.4.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0,1,2,3,4,5,6: NVIDIA GeForce RTX 3080 Ti
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 10.1, V10.1.10
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
PyTorch: 1.10.0
PyTorch compiling details: PyTorch built with:

  • GCC 7.3
  • C++ Version: 201402
  • Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • LAPACK is enabled (usually provided by MKL)
  • NNPACK is enabled
  • CPU capability usage: AVX512
  • CUDA Runtime 11.3
  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
  • CuDNN 8.2
  • Magma 2.5.2
  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

TorchVision: 0.11.0
OpenCV: 4.8.0
MMEngine: 0.8.2
MMRotate: 1.0.0rc1+d50ab76

Reproduces the problem - code sample

GlidingVertex 07/27 13:23:27 - mmengine - INFO - Config: dataset_type = 'HRSCDataset' data_root = '/data_disk/ywh/datasets/HRSC2016' backend_args = None train_pipeline = [ dict(type='mmdet.LoadImageFromFile', backend_args=None), dict(type='mmdet.LoadAnnotations', with_bbox=True, box_type='qbox'), dict(type='ConvertBoxType', box_type_mapping=dict(gt_bboxes='rbox')), dict(type='mmdet.Resize', scale=( 800, 512, ), keep_ratio=True), dict( type='mmdet.RandomFlip', prob=0.75, direction=[ 'horizontal', 'vertical', 'diagonal', ]), dict(type='mmdet.PackDetInputs'), ] val_pipeline = [ dict(type='mmdet.LoadImageFromFile', backend_args=None), dict(type='mmdet.Resize', scale=( 800, 512, ), keep_ratio=True), dict(type='mmdet.LoadAnnotations', with_bbox=True, box_type='qbox'), dict(type='ConvertBoxType', box_type_mapping=dict(gt_bboxes='rbox')), dict( type='mmdet.PackDetInputs', meta_keys=( 'img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor', )), ] test_pipeline = [ dict(type='mmdet.LoadImageFromFile', backend_args=None), dict(type='mmdet.Resize', scale=( 800, 512, ), keep_ratio=True), dict( type='mmdet.PackDetInputs', meta_keys=( 'img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor', )), ] train_dataloader = dict( batch_size=2, num_workers=2, persistent_workers=True, sampler=dict(type='DefaultSampler', shuffle=True), batch_sampler=None, dataset=dict( type='HRSCDataset', data_root='/data_disk/ywh/datasets/HRSC2016', ann_file='/data_disk/ywh/datasets/HRSC2016/train_id_64shots.txt', data_prefix=dict( sub_data_root='/data_disk/ywh/datasets/HRSC2016/train_id/'), filter_cfg=dict(filter_empty_gt=True), pipeline=[ dict(type='mmdet.LoadImageFromFile', backend_args=None), dict( type='mmdet.LoadAnnotations', with_bbox=True, box_type='qbox'), dict( type='ConvertBoxType', box_type_mapping=dict(gt_bboxes='rbox')), dict(type='mmdet.Resize', scale=( 800, 512, ), keep_ratio=True), dict( type='mmdet.RandomFlip', prob=0.75, direction=[ 'horizontal', 'vertical', 'diagonal', ]), 
dict(type='mmdet.PackDetInputs'), ], backend_args=None)) val_dataloader = dict( batch_size=1, num_workers=2, persistent_workers=True, drop_last=False, sampler=dict(type='DefaultSampler', shuffle=False), dataset=dict( type='HRSCDataset', data_root='/data_disk/ywh/datasets/HRSC2016', ann_file='/data_disk/ywh/datasets/HRSC2016/trainvaltest_ood.txt', data_prefix=dict( sub_data_root='/data_disk/ywh/datasets/HRSC2016/trainvaltest_ood/' ), test_mode=True, pipeline=[ dict(type='mmdet.LoadImageFromFile', backend_args=None), dict(type='mmdet.Resize', scale=( 800, 512, ), keep_ratio=True), dict( type='mmdet.LoadAnnotations', with_bbox=True, box_type='qbox'), dict( type='ConvertBoxType', box_type_mapping=dict(gt_bboxes='rbox')), dict( type='mmdet.PackDetInputs', meta_keys=( 'img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor', )), ], backend_args=None)) test_dataloader = dict( batch_size=1, num_workers=2, persistent_workers=True, drop_last=False, sampler=dict(type='DefaultSampler', shuffle=False), dataset=dict( type='HRSCDataset', data_root='/data_disk/ywh/datasets/HRSC2016', ann_file='/data_disk/ywh/datasets/HRSC2016/trainvaltest_ood.txt', data_prefix=dict( sub_data_root='/data_disk/ywh/datasets/HRSC2016/trainvaltest_ood/' ), test_mode=True, pipeline=[ dict(type='mmdet.LoadImageFromFile', backend_args=None), dict(type='mmdet.Resize', scale=( 800, 512, ), keep_ratio=True), dict( type='mmdet.LoadAnnotations', with_bbox=True, box_type='qbox'), dict( type='ConvertBoxType', box_type_mapping=dict(gt_bboxes='rbox')), dict( type='mmdet.PackDetInputs', meta_keys=( 'img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor', )), ], backend_args=None)) val_evaluator = [ dict( type='DOTAMetric', eval_mode='11points', prefix='dota_ap07', metric='mAP'), dict( type='DOTAMetric', eval_mode='area', prefix='dota_ap12', metric='mAP'), ] test_evaluator = [ dict( type='DOTAMetric', eval_mode='11points', prefix='dota_ap07', metric='mAP'), dict( type='DOTAMetric', eval_mode='area', 
prefix='dota_ap12', metric='mAP'), ] train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=12, val_interval=1) val_cfg = dict(type='ValLoop') test_cfg = dict(type='TestLoop') param_scheduler = [ dict( type='LinearLR', start_factor=0.3333333333333333, by_epoch=False, begin=0, end=500), dict( type='MultiStepLR', begin=0, end=12, by_epoch=True, milestones=[ 8, 11, ], gamma=0.1), ] optim_wrapper = dict( type='OptimWrapper', optimizer=dict(type='SGD', lr=0.005, momentum=0.9, weight_decay=0.0001), clip_grad=dict(max_norm=35, norm_type=2)) default_scope = 'mmrotate' default_hooks = dict( timer=dict(type='IterTimerHook'), logger=dict(type='LoggerHook', interval=50), param_scheduler=dict(type='ParamSchedulerHook'), checkpoint=dict(type='CheckpointHook', interval=1), sampler_seed=dict(type='DistSamplerSeedHook'), visualization=dict(type='mmdet.DetVisualizationHook')) env_cfg = dict( cudnn_benchmark=False, mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0), dist_cfg=dict(backend='nccl')) vis_backends = [ dict(type='LocalVisBackend'), ] visualizer = dict( type='RotLocalVisualizer', vis_backends=[ dict(type='LocalVisBackend'), ], name='visualizer') log_processor = dict(type='LogProcessor', window_size=50, by_epoch=True) log_level = 'INFO' load_from = None resume = False model = dict( type='mmdet.FasterRCNN', data_preprocessor=dict( type='mmdet.DetDataPreprocessor', mean=[ 123.675, 116.28, 103.53, ], std=[ 58.395, 57.12, 57.375, ], bgr_to_rgb=True, pad_size_divisor=32, boxtype2tensor=False), backbone=dict( type='mmdet.ResNet', depth=50, num_stages=4, out_indices=( 0, 1, 2, 3, ), frozen_stages=1, norm_cfg=dict(type='BN', requires_grad=True), norm_eval=True, style='pytorch', init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')), neck=dict( type='mmdet.FPN', in_channels=[ 256, 512, 1024, 2048, ], out_channels=256, num_outs=5), rpn_head=dict( type='mmdet.RPNHead', in_channels=256, feat_channels=256, anchor_generator=dict( type='mmdet.AnchorGenerator', 
scales=[ 8, ], ratios=[ 0.5, 1.0, 2.0, ], strides=[ 4, 8, 16, 32, 64, ], use_box_type=True), bbox_coder=dict( type='DeltaXYWHQBBoxCoder', target_means=[ 0.0, 0.0, 0.0, 0.0, ], target_stds=[ 1.0, 1.0, 1.0, 1.0, ], use_box_type=True), loss_cls=dict( type='mmdet.CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), loss_bbox=dict( type='mmdet.SmoothL1Loss', beta=0.1111111111111111, loss_weight=1.0)), roi_head=dict( type='GVRatioRoIHead', bbox_roi_extractor=dict( type='mmdet.SingleRoIExtractor', roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), out_channels=256, featmap_strides=[ 4, 8, 16, 32, ]), bbox_head=dict( type='GVBBoxHead', num_shared_fcs=2, in_channels=256, fc_out_channels=1024, roi_feat_size=7, num_classes=15, ratio_thr=0.8, bbox_coder=dict( type='DeltaXYWHQBBoxCoder', target_means=( 0.0, 0.0, 0.0, 0.0, ), target_stds=( 0.1, 0.1, 0.2, 0.2, )), fix_coder=dict(type='GVFixCoder'), ratio_coder=dict(type='GVRatioCoder'), reg_class_agnostic=True, loss_cls=dict( type='mmdet.CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), loss_bbox=dict( type='mmdet.SmoothL1Loss', beta=1.0, loss_weight=1.0), loss_fix=dict( type='mmdet.SmoothL1Loss', beta=0.3333333333333333, loss_weight=1.0), loss_ratio=dict( type='mmdet.SmoothL1Loss', beta=0.3333333333333333, loss_weight=16.0))), train_cfg=dict( rpn=dict( assigner=dict( type='mmdet.MaxIoUAssigner', pos_iou_thr=0.7, neg_iou_thr=0.3, min_pos_iou=0.3, match_low_quality=True, ignore_iof_thr=-1, iou_calculator=dict(type='QBbox2HBboxOverlaps2D')), sampler=dict( type='mmdet.RandomSampler', num=256, pos_fraction=0.5, neg_pos_ub=-1, add_gt_as_proposals=False), allowed_border=0, pos_weight=-1, debug=False), rpn_proposal=dict( nms_pre=2000, max_per_img=2000, nms=dict(type='nms', iou_threshold=0.7), min_bbox_size=0), rcnn=dict( assigner=dict( type='mmdet.MaxIoUAssigner', pos_iou_thr=0.5, neg_iou_thr=0.5, min_pos_iou=0.5, match_low_quality=False, ignore_iof_thr=-1, iou_calculator=dict(type='QBbox2HBboxOverlaps2D')), 
sampler=dict( type='mmdet.RandomSampler', num=512, pos_fraction=0.25, neg_pos_ub=-1, add_gt_as_proposals=True), pos_weight=-1, debug=False)), test_cfg=dict( rpn=dict( nms_pre=2000, max_per_img=2000, nms=dict(type='nms', iou_threshold=0.7), min_bbox_size=0), rcnn=dict( nms_pre=2000, min_bbox_size=0, score_thr=0.05, nms=dict(type='nms_quadri', iou_threshold=0.1), max_per_img=2000))) launcher = 'pytorch' work_dir = './work_dirs/gliding-vertex-qbox_r50_fpn_1x_dota'
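One mismatch is visible in the dump above: the `GVBBoxHead` keeps `num_classes=15` (the DOTA default) while every dataloader uses `HRSCDataset`, which has a single ship class. Whether or not this is the cause of the AssertionError, aligning the class count is a reasonable first check. The override below is only a sketch, assuming the standard MMEngine `_base_` inheritance mechanism; the `_base_` path is illustrative.

```python
# Sketch of an override config for HRSC2016 (path below is illustrative).
# It inherits the dumped GlidingVertex config and only changes the head's
# class count, since HRSC2016 has one class while the DOTA default is 15.
_base_ = ['./gliding-vertex-qbox_r50_fpn_1x_dota.py']

model = dict(
    roi_head=dict(
        bbox_head=dict(
            num_classes=1)))  # HRSC2016: single 'ship' class
```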

S2ANet 07/27 13:14:25 - mmengine - INFO - Config: dataset_type = 'HRSCDataset' data_root = '/data_disk/ywh/datasets/HRSC2016' backend_args = None train_pipeline = [ dict(type='LoadImageFromFile'), dict(type='LoadAnnotations', with_bbox=True), dict(type='RResize', img_scale=( 800, 800, )), dict( type='RRandomFlip', flip_ratio=[ 0.25, 0.25, 0.25, ], direction=[ 'horizontal', 'vertical', 'diagonal', ], version='le135'), dict( type='Normalize', mean=[ 123.675, 116.28, 103.53, ], std=[ 58.395, 57.12, 57.375, ], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='DefaultFormatBundle'), dict(type='Collect', keys=[ 'img', 'gt_bboxes', 'gt_labels', ]), ] val_pipeline = [ dict(type='mmdet.LoadImageFromFile', backend_args=None), dict(type='mmdet.Resize', scale=( 800, 512, ), keep_ratio=True), dict(type='mmdet.LoadAnnotations', with_bbox=True, box_type='qbox'), dict(type='ConvertBoxType', box_type_mapping=dict(gt_bboxes='rbox')), dict( type='mmdet.PackDetInputs', meta_keys=( 'img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor', )), ] test_pipeline = [ dict(type='mmdet.LoadImageFromFile', backend_args=None), dict(type='mmdet.Resize', scale=( 800, 512, ), keep_ratio=True), dict( type='mmdet.PackDetInputs', meta_keys=( 'img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor', )), ] train_dataloader = dict( batch_size=2, num_workers=2, persistent_workers=True, sampler=dict(type='DefaultSampler', shuffle=True), batch_sampler=None, dataset=dict( type='HRSCDataset', data_root='/data_disk/ywh/datasets/HRSC2016', ann_file='/data_disk/ywh/datasets/HRSC2016/train_id_64shots.txt', data_prefix=dict( sub_data_root='/data_disk/ywh/datasets/HRSC2016/train_id/'), filter_cfg=dict(filter_empty_gt=True), pipeline=[ dict(type='mmdet.LoadImageFromFile', backend_args=None), dict( type='mmdet.LoadAnnotations', with_bbox=True, box_type='qbox'), dict( type='ConvertBoxType', box_type_mapping=dict(gt_bboxes='rbox')), dict(type='mmdet.Resize', scale=( 800, 512, ), keep_ratio=True), 
dict( type='mmdet.RandomFlip', prob=0.75, direction=[ 'horizontal', 'vertical', 'diagonal', ]), dict(type='mmdet.PackDetInputs'), ], backend_args=None)) val_dataloader = dict( batch_size=1, num_workers=2, persistent_workers=True, drop_last=False, sampler=dict(type='DefaultSampler', shuffle=False), dataset=dict( type='HRSCDataset', data_root='/data_disk/ywh/datasets/HRSC2016', ann_file='/data_disk/ywh/datasets/HRSC2016/trainvaltest_ood.txt', data_prefix=dict( sub_data_root='/data_disk/ywh/datasets/HRSC2016/trainvaltest_ood/' ), test_mode=True, pipeline=[ dict(type='mmdet.LoadImageFromFile', backend_args=None), dict(type='mmdet.Resize', scale=( 800, 512, ), keep_ratio=True), dict( type='mmdet.LoadAnnotations', with_bbox=True, box_type='qbox'), dict( type='ConvertBoxType', box_type_mapping=dict(gt_bboxes='rbox')), dict( type='mmdet.PackDetInputs', meta_keys=( 'img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor', )), ], backend_args=None)) test_dataloader = dict( batch_size=1, num_workers=2, persistent_workers=True, drop_last=False, sampler=dict(type='DefaultSampler', shuffle=False), dataset=dict( type='HRSCDataset', data_root='/data_disk/ywh/datasets/HRSC2016', ann_file='/data_disk/ywh/datasets/HRSC2016/trainvaltest_ood.txt', data_prefix=dict( sub_data_root='/data_disk/ywh/datasets/HRSC2016/trainvaltest_ood/' ), test_mode=True, pipeline=[ dict(type='mmdet.LoadImageFromFile', backend_args=None), dict(type='mmdet.Resize', scale=( 800, 512, ), keep_ratio=True), dict( type='mmdet.LoadAnnotations', with_bbox=True, box_type='qbox'), dict( type='ConvertBoxType', box_type_mapping=dict(gt_bboxes='rbox')), dict( type='mmdet.PackDetInputs', meta_keys=( 'img_id', 'img_path', 'ori_shape', 'img_shape', 'scale_factor', )), ], backend_args=None)) val_evaluator = [ dict( type='DOTAMetric', eval_mode='11points', prefix='dota_ap07', metric='mAP'), dict( type='DOTAMetric', eval_mode='area', prefix='dota_ap12', metric='mAP'), ] test_evaluator = [ dict( type='DOTAMetric', 
eval_mode='11points', prefix='dota_ap07', metric='mAP'), dict( type='DOTAMetric', eval_mode='area', prefix='dota_ap12', metric='mAP'), ] train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=36, val_interval=1) val_cfg = dict(type='ValLoop') test_cfg = dict(type='TestLoop') param_scheduler = [ dict( type='LinearLR', start_factor=0.3333333333333333, by_epoch=False, begin=0, end=500), dict( type='MultiStepLR', begin=0, end=36, by_epoch=True, milestones=[ 24, 33, ], gamma=0.1), ] optim_wrapper = dict( type='OptimWrapper', optimizer=dict(type='SGD', lr=0.0025, momentum=0.9, weight_decay=0.0001), clip_grad=dict(max_norm=35, norm_type=2)) default_scope = 'mmrotate' default_hooks = dict( timer=dict(type='IterTimerHook'), logger=dict(type='LoggerHook', interval=50), param_scheduler=dict(type='ParamSchedulerHook'), checkpoint=dict(type='CheckpointHook', interval=1), sampler_seed=dict(type='DistSamplerSeedHook'), visualization=dict(type='mmdet.DetVisualizationHook')) env_cfg = dict( cudnn_benchmark=False, mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0), dist_cfg=dict(backend='nccl')) vis_backends = [ dict(type='LocalVisBackend'), ] visualizer = dict( type='RotLocalVisualizer', vis_backends=[ dict(type='LocalVisBackend'), ], name='visualizer') log_processor = dict(type='LogProcessor', window_size=50, by_epoch=True) log_level = 'INFO' load_from = None resume = False angle_version = 'le135' model = dict( type='S2ANet', backbone=dict( type='ResNet', depth=50, num_stages=4, out_indices=( 0, 1, 2, 3, ), frozen_stages=1, zero_init_residual=False, norm_cfg=dict(type='BN', requires_grad=True), norm_eval=True, style='pytorch', init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')), neck=dict( type='FPN', in_channels=[ 256, 512, 1024, 2048, ], out_channels=256, start_level=1, add_extra_convs='on_input', num_outs=5), fam_head=dict( type='RotatedRetinaHead', num_classes=1, in_channels=256, stacked_convs=2, feat_channels=256, assign_by_circumhbbox=None, 
anchor_generator=dict( type='RotatedAnchorGenerator', scales=[ 4, ], ratios=[ 1.0, ], strides=[ 8, 16, 32, 64, 128, ]), bbox_coder=dict( type='DeltaXYWHAOBBoxCoder', angle_range='le135', norm_factor=1, edge_swap=False, proj_xy=True, target_means=( 0.0, 0.0, 0.0, 0.0, 0.0, ), target_stds=( 1.0, 1.0, 1.0, 1.0, 1.0, )), loss_cls=dict( type='FocalLoss', use_sigmoid=True, gamma=2.0, alpha=0.25, loss_weight=1.0), loss_bbox=dict(type='SmoothL1Loss', beta=0.11, loss_weight=1.0)), align_cfgs=dict( type='AlignConv', kernel_size=3, channels=256, featmap_strides=[ 8, 16, 32, 64, 128, ]), odm_head=dict( type='ODMRefineHead', num_classes=1, in_channels=256, stacked_convs=2, feat_channels=256, assign_by_circumhbbox=None, anchor_generator=dict( type='PseudoAnchorGenerator', strides=[ 8, 16, 32, 64, 128, ]), bbox_coder=dict( type='DeltaXYWHAOBBoxCoder', angle_range='le135', norm_factor=1, edge_swap=False, proj_xy=True, target_means=( 0.0, 0.0, 0.0, 0.0, 0.0, ), target_stds=( 1.0, 1.0, 1.0, 1.0, 1.0, )), loss_cls=dict( type='FocalLoss', use_sigmoid=True, gamma=2.0, alpha=0.25, loss_weight=1.0), loss_bbox=dict(type='SmoothL1Loss', beta=0.11, loss_weight=1.0)), train_cfg=dict( fam_cfg=dict( assigner=dict( type='MaxIoUAssigner', pos_iou_thr=0.5, neg_iou_thr=0.4, min_pos_iou=0, ignore_iof_thr=-1, iou_calculator=dict(type='RBboxOverlaps2D')), allowed_border=-1, pos_weight=-1, debug=False), odm_cfg=dict( assigner=dict( type='MaxIoUAssigner', pos_iou_thr=0.5, neg_iou_thr=0.4, min_pos_iou=0, ignore_iof_thr=-1, iou_calculator=dict(type='RBboxOverlaps2D')), allowed_border=-1, pos_weight=-1, debug=False)), test_cfg=dict( nms_pre=2000, min_bbox_size=0, score_thr=0.05, nms=dict(iou_thr=0.1), max_per_img=2000)) img_norm_cfg = dict( mean=[ 123.675, 116.28, 103.53, ], std=[ 58.395, 57.12, 57.375, ], to_rgb=True) data = dict( train=dict( pipeline=[ dict(type='LoadImageFromFile'), dict(type='LoadAnnotations', with_bbox=True), dict(type='RResize', img_scale=( 800, 800, )), dict( type='RRandomFlip', 
flip_ratio=[ 0.25, 0.25, 0.25, ], direction=[ 'horizontal', 'vertical', 'diagonal', ], version='le135'), dict( type='Normalize', mean=[ 123.675, 116.28, 103.53, ], std=[ 58.395, 57.12, 57.375, ], to_rgb=True), dict(type='Pad', size_divisor=32), dict(type='DefaultFormatBundle'), dict(type='Collect', keys=[ 'img', 'gt_bboxes', 'gt_labels', ]), ], version='le135'), val=dict(version='le135'), test=dict(version='le135')) launcher = 'pytorch' work_dir = './work_dirs/s2anet-le135_r50_fpn_3x_hrsc'
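Unlike the GlidingVertex dump, this S2ANet config mixes 1.x-style entries (`train_dataloader`, `mmdet.PackDetInputs`) with 0.x-style leftovers (`img_norm_cfg`, a top-level `data` dict, `RResize`, `RRandomFlip`, `DefaultFormatBundle`, `Collect`) that the 1.x registries no longer provide; a lookup of such a stale name is a plausible source of a KeyError. As a rough diagnostic, one can scan the dumped config dict for known legacy entries. The snippet below is a sketch; the legacy-name sets are an assumption, not an exhaustive inventory.

```python
# Sketch: flag 0.x-style leftovers in a merged MMRotate/MMDetection config
# dict. The two sets below are illustrative, not exhaustive.
LEGACY_KEYS = {'img_norm_cfg', 'data'}
LEGACY_TYPES = {'RResize', 'RRandomFlip', 'Normalize',
                'DefaultFormatBundle', 'Collect'}

def find_legacy_entries(cfg: dict) -> list:
    """Return dotted paths of legacy keys/types found in a nested config."""
    hits = []

    def walk(node, path):
        if isinstance(node, dict):
            # A pipeline step whose 'type' was removed in 1.x
            if node.get('type') in LEGACY_TYPES:
                hits.append(f"{path}.type={node['type']}")
            for key, value in node.items():
                child = f"{path}.{key}" if path else key
                if key in LEGACY_KEYS:
                    hits.append(child)
                walk(value, child)
        elif isinstance(node, (list, tuple)):
            for i, value in enumerate(node):
                walk(value, f"{path}[{i}]")

    walk(cfg, '')
    return hits

# Minimal example mirroring the dump above:
cfg = {
    'train_pipeline': [{'type': 'RResize', 'img_scale': (800, 800)}],
    'img_norm_cfg': {'mean': [123.675, 116.28, 103.53]},
}
print(find_legacy_entries(cfg))
```

Running this over the full dumped S2ANet config (e.g. after loading it with `mmengine.Config.fromfile`) would point at every section that still needs porting to the 1.x schema.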

Reproduces the problem - command or script

bash ./tools/dist_train.sh ./configs/gliding_vertex/gliding-vertex-qbox_r50_fpn_1x_dota.py 2
bash ./tools/dist_train.sh ./configs/s2anet/s2anet-le135_r50_fpn_3x_hrsc.py 2

Reproduces the problem - error message

Screenshot 1 (attached as an image, not transcribed): the AssertionError raised when running gliding-vertex-rbox_r50_fpn_1x_dota.py

Screenshot 2 (attached as an image, not transcribed): the KeyError raised when running s2anet-le135_r50_fpn_3x_hrsc.py

Additional information

I didn't change the code of train.py or the targeted model config .py files; I only modified ./configs/_base_/hrsc.py to use the HRSC2016 dataset. Thanks for your help.

Danny-C-Auditore · Jul 27 '23 05:07