"RuntimeError: The training pipeline of the dataset wrapper always return None" while using albu and MultiImageMixDataset
Thanks for your error report and we appreciate it a lot.
Checklist
- I have searched related issues but cannot get the expected help.
- I have read the FAQ documentation but cannot get the expected help.
- The bug has not been fixed in the latest version.
Describe the bug When I use Albumentations and Mosaic as data augmentation in MMDetection 3.3.0, a RuntimeError is raised.
Reproduction
Part of the config is as follows:

```python
train_pipeline = [
    dict(type='LoadImageFromFile', backend_args=backend_args),
    dict(type='LoadAnnotations', with_bbox=True),
    dict(type='Mosaic', img_scale=(1280, 1280), pad_val=114.0, bbox_clip_border=False),
    dict(type='RandomAffine', scaling_ratio_range=(0.1, 2), border=(-640, -640)),
    dict(type='MixUp', img_scale=(1280, 1280), ratio_range=(0.8, 1.6), pad_val=114.0, bbox_clip_border=False),
    dict(type='RandomChoiceResize',
         scales=[(960, 960), (1024, 1024), (1088, 1088), (1152, 1152), (1216, 1216), (1280, 1280)],
         keep_ratio=True),
    dict(type='Albu',
         transforms=albu_train_transforms,
         bbox_params=dict(
             type='BboxParams',
             format='pascal_voc',
             label_fields=['gt_bboxes_labels', 'gt_ignore_flags'],
             min_visibility=0.0,
             filter_lost_elements=True),
         keymap={
             'img': 'image',
             # 'gt_masks': 'masks',
             'gt_bboxes': 'bboxes'
         },
         skip_img_without_anno=True),
    dict(type='RandomGrayscale', prob=0.1, keep_channels=True),
    dict(type='RandomFlip', prob=0.5, direction=['horizontal', 'vertical']),
    dict(type='PackDetInputs')
]
```
```python
train_dataloader = dict(
    batch_size=8,
    num_workers=16,
    dataset=dict(
        _delete_=True,
        # use MultiImageMixDataset wrapper to support Mosaic and MixUp
        type='MultiImageMixDataset',
        dataset=dict(
            type='CocoDataset',
            data_root='data/Fisheye8K/',
            ann_file='train/train.json',
            data_prefix=dict(img='train/images/'),
            pipeline=[
                dict(type='LoadImageFromFile', backend_args=backend_args),
                dict(type='LoadAnnotations', with_bbox=True)
            ],
            filter_cfg=dict(filter_empty_gt=False, min_size=32),
            backend_args=backend_args),
        pipeline=train_pipeline))
```
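For context, the error message in the title comes from the retry loop in MultiImageMixDataset: if the wrapper's pipeline keeps returning None, it retries up to `max_refetch` times and then raises. The following is a simplified sketch of that mechanism (paraphrased from `mmdet/datasets/dataset_wrappers.py`, not the exact source code):

```python
import copy

def run_wrapper_pipeline(results, pipeline, max_refetch=15):
    # Simplified sketch of the retry loop in MultiImageMixDataset:
    # re-run the pipeline up to max_refetch times, and give up with a
    # RuntimeError if it never produces a non-None result.
    for _ in range(max_refetch):
        updated_results = pipeline(copy.deepcopy(results))
        if updated_results is not None:
            return updated_results
    raise RuntimeError(
        'The training pipeline of the dataset wrapper always return None. '
        'Please check the correctness of the dataset and its pipeline.')

# A transform such as Albu with skip_img_without_anno=True returns None
# whenever every box has been filtered out, so every retry fails:
always_none = lambda results: None
try:
    run_wrapper_pipeline({'img': 'dummy'}, always_none)
    raised = False
except RuntimeError:
    raised = True
print(raised)  # True
```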
Error traceback
I find this error is produced at line 151 of dataset_wrappers.py. It is caused by updated_results always being None.
What's more, when I inspect the transform() method of the Albu class in transforms.py, the size of gt_bboxes in the input results is 0.
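That empty-box input is likely why None propagates: with `skip_img_without_anno=True`, Albu returns None when a sample ends up with no annotations. A hypothetical, heavily simplified version of that check (my paraphrase, not the actual Albu implementation):

```python
import numpy as np

def albu_like_transform(results, skip_img_without_anno=True):
    # Hypothetical sketch of the relevant behavior in Albu.transform():
    # when no boxes remain and skip_img_without_anno is True, the
    # transform returns None, which the dataset wrapper then retries on.
    bboxes = results.get('gt_bboxes', np.zeros((0, 4)))
    if len(bboxes) == 0 and skip_img_without_anno:
        return None
    return results

# Mosaic / RandomAffine / MixUp with bbox_clip_border=False can leave a
# sample with zero valid boxes, which matches the input observed above:
empty = {'img': np.zeros((8, 8, 3), dtype=np.uint8),
         'gt_bboxes': np.zeros((0, 4))}
print(albu_like_transform(empty))  # None
```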
I also found similar issues:
- "TypeError: argument of type 'NoneType' is not iterable" when use mixup and alub augmentation together #7746
- [Fix]fix the bug that mix_results may be None #7530
How could I solve this problem?
Did you ever find a solution?
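Not an official fix, but one mitigation often suggested for this class of error (an assumption on my part, not verified against this dataset) is to stop Albu from returning None on samples whose boxes were all filtered out, by setting `skip_img_without_anno=False`:

```python
# Modified Albu step for the train_pipeline above; only the last
# argument changes relative to the original config.
albu_step = dict(
    type='Albu',
    transforms=albu_train_transforms,
    bbox_params=dict(
        type='BboxParams',
        format='pascal_voc',
        label_fields=['gt_bboxes_labels', 'gt_ignore_flags'],
        min_visibility=0.0,
        filter_lost_elements=True),
    keymap={'img': 'image', 'gt_bboxes': 'bboxes'},
    # With skip_img_without_anno=False, Albu keeps samples whose boxes
    # were all filtered out instead of returning None, so the wrapper's
    # retry loop cannot exhaust max_refetch on such samples.
    skip_img_without_anno=False)
```

Keeping `bbox_clip_border=True` in Mosaic/MixUp (so boxes are clipped rather than dropped) may also reduce the number of empty-box samples, but that depends on your data.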