OBBDetection
FCOS with HRSC2016: different seeds, completely different results
I use the default config with a ResNet-50 backbone to train an FCOS model on the HRSC2016 dataset, with the trainval split for training and the test split for testing. I don't understand why I get such different results with different random seeds. I pass the flags --seed {seed} and --deterministic. Here are some of the results with different seeds:
| seed | recall | AP |
|------|--------|----|
| 8 | 0.6801 | 0.5088 |
| 25 | 0.7171 | 0.5565 |
| 98 | 0.0025 | 0.0001 |
| 129 | 0.8611 | 0.7379 |
| 305 | 0.0825 | 0.0413 |
| 340 | 0.6145 | 0.4332 |
| 368 | 0.6785 | 0.4984 |
| 469 | 0.0025 | 0.0001 |
| 531 | 0.4007 | 0.2481 |
| 727 | 0.7214 | 0.5752 |
| 889 | 0 | 0 |
As you can see, training does not even converge with seed 889. How is that possible? I am checking whether this has something to do with the weight initialization.
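For context, in mmdetection-based codebases the --seed and --deterministic flags usually amount to something like the sketch below (a rough equivalent, not the exact OBBDetection code). This fixes the RNGs before training, so results are repeatable for a *given* seed, but it does nothing to reduce the spread *between* seeds.

```python
# Rough equivalent of what --seed / --deterministic typically set up
# in mmdetection-style training scripts (a sketch, not the exact code).
import random

import numpy as np
import torch


def seed_everything(seed, deterministic=True):
    random.seed(seed)                 # Python RNG (e.g. random flips)
    np.random.seed(seed)              # NumPy RNG used by many transforms
    torch.manual_seed(seed)           # CPU RNG (weight init, sampling)
    torch.cuda.manual_seed_all(seed)  # GPU RNGs
    if deterministic:
        # Reproducible (but slower) cuDNN kernel selection.
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False


seed_everything(129, deterministic=True)
```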
This is the config file that I am using:
```python
_base_ = [
    '../_base_/datasets/hrsc.py',
    '../_base_/schedules/schedule_3x.py',
    '../../_base_/default_runtime.py'
]
# model settings
model = dict(
    type='FCOSOBB',
    pretrained='open-mmlab://detectron/resnet50_caffe',
    backbone=dict(
        type='ResNet',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=False),
        norm_eval=True,
        style='caffe'),
    neck=dict(
        type='FPN',
        in_channels=[256, 512, 1024, 2048],
        out_channels=256,
        start_level=1,
        add_extra_convs=True,
        extra_convs_on_inputs=False,  # use P5
        num_outs=5,
        relu_before_extra_convs=True),
    bbox_head=dict(
        type='OBBFCOSHead',
        num_classes=1,
        in_channels=256,
        stacked_convs=4,
        feat_channels=256,
        strides=[8, 16, 32, 64, 128],
        scale_theta=True,
        loss_cls=dict(
            type='FocalLoss',
            use_sigmoid=True,
            gamma=2.0,
            alpha=0.25,
            loss_weight=1.0),
        loss_bbox=dict(type='PolyIoULoss', loss_weight=1.0),
        loss_centerness=dict(
            type='CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0)))
# training and testing settings
train_cfg = dict(
    assigner=dict(
        type='MaxIoUAssigner',
        pos_iou_thr=0.5,
        neg_iou_thr=0.4,
        min_pos_iou=0,
        ignore_iof_thr=-1),
    allowed_border=-1,
    pos_weight=-1,
    debug=False)
test_cfg = dict(
    nms_pre=1000,
    min_bbox_size=0,
    score_thr=0.05,
    nms=dict(type='obb_nms', iou_thr=0.1),
    max_per_img=2000)
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=False)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadOBBAnnotations', with_bbox=True,
         with_label=True, obb_as_mask=True),
    dict(type='Resize', img_scale=(1024, 1024), keep_ratio=True),
    dict(type='OBBRandomFlip', h_flip_ratio=0.5, v_flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='RandomOBBRotate', rotate_after_flip=True,
         angles=(0, 0), vert_rate=0.5, vert_cls=['roundabout', 'storage-tank']),
    dict(type='Pad', size_divisor=32),
    # dict(type='DOTASpecialIgnore', ignore_size=2),
    dict(type='FliterEmpty'),
    dict(type='Mask2OBB', obb_type='obb'),
    dict(type='OBBDefaultFormatBundle'),
    dict(type='OBBCollect', keys=['img', 'gt_bboxes', 'gt_obboxes', 'gt_labels'])
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipRotateAug',
        img_scale=[(1024, 1024)],
        h_flip=False,
        v_flip=False,
        rotate=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='OBBRandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='RandomOBBRotate', rotate_after_flip=True),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='OBBCollect', keys=['img']),
        ])
]
data = dict(
    samples_per_gpu=2,
    workers_per_gpu=4,
    train=dict(pipeline=train_pipeline),
    val=dict(pipeline=test_pipeline),
    test=dict(pipeline=test_pipeline))
# optimizer
optimizer = dict(
    lr=0.0025, paramwise_cfg=dict(bias_lr_mult=2., bias_decay_mult=0.))
optimizer_config = dict(
    _delete_=True, grad_clip=dict(max_norm=35, norm_type=2))
# learning policy
lr_config = dict(
    policy='step',
    warmup='constant',
    warmup_iters=500,
    warmup_ratio=1.0 / 3,
    step=[24, 33])
total_epochs = 36
```
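On the weight-initialization question above: the backbone is loaded from the ResNet-50 Caffe checkpoint, so only the FPN and the head are randomly initialized and thus depend on the seed. Below is a minimal sketch to see how much the initial weights actually differ between two seeds; the config path is a placeholder for wherever this file lives, and it assumes the standard mmdetection-style `Config`, `set_random_seed`, and `build_detector` helpers are available.

```python
# Sketch: compare the randomly initialized weights produced by two seeds.
# CONFIG_PATH is a placeholder for the config file shown above.
import torch
from mmcv import Config
from mmdet.apis import set_random_seed
from mmdet.models import build_detector

CONFIG_PATH = 'path/to/this_config.py'  # placeholder


def init_state(seed):
    cfg = Config.fromfile(CONFIG_PATH)
    cfg.model.pretrained = None  # skip the backbone checkpoint; look at random init only
    set_random_seed(seed, deterministic=True)
    model = build_detector(cfg.model, train_cfg=cfg.train_cfg, test_cfg=cfg.test_cfg)
    return {k: v.clone() for k, v in model.state_dict().items()}


state_a, state_b = init_state(129), init_state(889)
for name in state_a:
    diff = (state_a[name].float() - state_b[name].float()).abs().max().item()
    if diff > 0:
        print(f'{name}: max abs diff = {diff:.4f}')
```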
I have met the same problem too. Did you manage to solve it?
I did not solve it. To report performance, I ran several experiments and used the median. I cannot explain why this happens...
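For reference, a minimal way to aggregate per-seed numbers like the ones in the table above (using Python's statistics module) would be:

```python
# Aggregate per-seed results with the median, which is robust to the
# collapsed runs (AP ~ 0); values are the AP column from the table above.
from statistics import median

ap_by_seed = {
    8: 0.5088, 25: 0.5565, 98: 0.0001, 129: 0.7379, 305: 0.0413,
    340: 0.4332, 368: 0.4984, 469: 0.0001, 531: 0.2481, 727: 0.5752, 889: 0.0,
}
print('median AP:', median(ap_by_seed.values()))              # 0.4332
print('mean AP  :', sum(ap_by_seed.values()) / len(ap_by_seed))
```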