
Why did I get low results on the VOC dataset?

Open cxmgs111qq opened this issue 1 year ago • 0 comments

I am using DeepLabV3+ on the VOC2012 dataset. Is it normal that after training for 50 epochs the mIoU is still below 40? The log file is as follows:

2023/08/03 21:27:21 - mmengine - INFO -

System environment:
    sys.platform: win32
    Python: 3.8.16 (default, Mar 2 2023, 03:18:16) [MSC v.1916 64 bit (AMD64)]
    CUDA available: True
    numpy_random_seed: 764017440
    GPU 0: NVIDIA GeForce RTX 3060
    CUDA_HOME: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7
    NVCC: Cuda compilation tools, release 11.7, V11.7.99
    MSVC: Microsoft (R) C/C++ Optimizing Compiler Version 19.36.32535 for x64
    GCC: n/a
    PyTorch: 1.10.0
    PyTorch compiling details: PyTorch built with:

  • C++ Version: 199711

  • MSVC 192829337

  • Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications

  • Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)

  • OpenMP 2019

  • LAPACK is enabled (usually provided by MKL)

  • CPU capability usage: AVX2

  • CUDA Runtime 11.3

  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37

  • CuDNN 8.2

  • Magma 2.5.4

  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=C:/cb/pytorch_1000000000000/work/tmp_bin/sccache-cl.exe, CXX_FLAGS=/DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/cb/pytorch_1000000000000/work/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=OFF, USE_OPENMP=ON,

    TorchVision: 0.11.0
    OpenCV: 4.7.0
    MMEngine: 0.7.3

Runtime environment:
    cudnn_benchmark: True
    mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
    dist_cfg: {'backend': 'nccl'}
    seed: None
    Distributed launcher: none
    Distributed training: False
    GPU number: 1

2023/08/03 21:27:21 - mmengine - INFO - Config:
norm_cfg = dict(type='BN', requires_grad=True)
data_preprocessor = dict(
    type='SegDataPreProcessor',
    mean=[123.675, 116.28, 103.53],
    std=[58.395, 57.12, 57.375],
    bgr_to_rgb=True,
    pad_val=0,
    seg_pad_val=255,
    size=(512, 512))
model = dict(
    type='EncoderDecoder',
    data_preprocessor=dict(
        type='SegDataPreProcessor',
        mean=[123.675, 116.28, 103.53],
        std=[58.395, 57.12, 57.375],
        bgr_to_rgb=True,
        pad_val=0,
        seg_pad_val=255,
        size=(512, 512)),
    pretrained='open-mmlab://resnet50_v1c',
    backbone=dict(
        type='ResNetV1c',
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        dilations=(1, 1, 2, 4),
        strides=(1, 2, 1, 1),
        norm_cfg=dict(type='BN', requires_grad=True),
        norm_eval=False,
        style='pytorch',
        contract_dilation=True),
    decode_head=dict(
        type='DepthwiseSeparableASPPHead',
        in_channels=2048,
        in_index=3,
        channels=512,
        dilations=(1, 12, 24, 36),
        c1_in_channels=256,
        c1_channels=48,
        dropout_ratio=0.1,
        num_classes=21,
        norm_cfg=dict(type='BN', requires_grad=True),
        align_corners=False,
        loss_decode=[
            dict(type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4),
            dict(type='DiceLoss', use_sigmoid=False, loss_weight=1.0)
        ]),
    auxiliary_head=dict(
        type='FCNHead',
        in_channels=1024,
        in_index=2,
        channels=256,
        num_convs=1,
        concat_input=False,
        dropout_ratio=0.1,
        num_classes=21,
        norm_cfg=dict(type='BN', requires_grad=True),
        align_corners=False,
        loss_decode=[
            dict(type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)
        ]),
    train_cfg=dict(),
    test_cfg=dict(mode='whole'))
dataset_type = 'PascalVOCDataset'
data_root = '../myProj/voc2012'
crop_size = (512, 512)
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(
        type='RandomResize',
        scale=(2048, 512),
        ratio_range=(0.5, 2.0),
        keep_ratio=True),
    dict(type='RandomCrop', crop_size=(512, 512), cat_max_ratio=0.75),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PhotoMetricDistortion'),
    dict(type='PackSegInputs')
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='Resize', scale=(2048, 512), keep_ratio=True),
    dict(type='LoadAnnotations'),
    dict(type='PackSegInputs')
]
img_ratios = [0.5, 0.75, 1.0, 1.25, 1.5, 1.75]
tta_pipeline = [
    dict(type='LoadImageFromFile', backend_args=None),
    dict(
        type='TestTimeAug',
        transforms=[
            [{'type': 'Resize', 'scale_factor': 0.5, 'keep_ratio': True},
             {'type': 'Resize', 'scale_factor': 0.75, 'keep_ratio': True},
             {'type': 'Resize', 'scale_factor': 1.0, 'keep_ratio': True},
             {'type': 'Resize', 'scale_factor': 1.25, 'keep_ratio': True},
             {'type': 'Resize', 'scale_factor': 1.5, 'keep_ratio': True},
             {'type': 'Resize', 'scale_factor': 1.75, 'keep_ratio': True}],
            [{'type': 'RandomFlip', 'prob': 0.0, 'direction': 'horizontal'},
             {'type': 'RandomFlip', 'prob': 1.0, 'direction': 'horizontal'}],
            [{'type': 'LoadAnnotations'}],
            [{'type': 'PackSegInputs'}]
        ])
]
train_dataloader = dict(
    batch_size=4,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=dict(
        type='PascalVOCDataset',
        data_root='../myProj/voc2012',
        data_prefix=dict(
            img_path='JPEGImages', seg_map_path='SegmentationClass'),
        ann_file='ImageSets/Segmentation/train.txt',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='LoadAnnotations'),
            dict(
                type='RandomResize',
                scale=(2048, 512),
                ratio_range=(0.5, 2.0),
                keep_ratio=True),
            dict(type='RandomCrop', crop_size=(512, 512), cat_max_ratio=0.75),
            dict(type='RandomFlip', prob=0.5),
            dict(type='PhotoMetricDistortion'),
            dict(type='PackSegInputs')
        ]))
val_dataloader = dict(
    batch_size=1,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='PascalVOCDataset',
        data_root='../myProj/voc2012',
        data_prefix=dict(
            img_path='JPEGImages', seg_map_path='SegmentationClass'),
        ann_file='ImageSets/Segmentation/val.txt',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='Resize', scale=(2048, 512), keep_ratio=True),
            dict(type='LoadAnnotations'),
            dict(type='PackSegInputs')
        ]))
test_dataloader = dict(
    batch_size=1,
    num_workers=4,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=False),
    dataset=dict(
        type='PascalVOCDataset',
        data_root='../myProj/voc2012',
        data_prefix=dict(
            img_path='JPEGImages', seg_map_path='SegmentationClass'),
        ann_file='ImageSets/Segmentation/val.txt',
        pipeline=[
            dict(type='LoadImageFromFile'),
            dict(type='Resize', scale=(2048, 512), keep_ratio=True),
            dict(type='LoadAnnotations'),
            dict(type='PackSegInputs')
        ]))
val_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU', 'mFscore'])
test_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU', 'mFscore'])
default_scope = 'mmseg'
env_cfg = dict(
    cudnn_benchmark=True,
    mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
    dist_cfg=dict(backend='nccl'))
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
    type='SegLocalVisualizer',
    vis_backends=[dict(type='LocalVisBackend')],
    name='visualizer')
log_processor = dict(by_epoch=True)
log_level = 'INFO'
load_from = None
resume = False
tta_model = dict(type='SegTTAModel')
optimizer = dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005)
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(type='SGD', lr=0.01, momentum=0.9, weight_decay=0.0005),
    clip_grad=None)
param_scheduler = [
    dict(
        type='PolyLR',
        eta_min=0.0001,
        power=0.9,
        begin=0,
        end=20000,
        by_epoch=True)
]
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=50, val_interval=1)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
default_hooks = dict(
    timer=dict(type='IterTimerHook'),
    logger=dict(type='LoggerHook', interval=500, log_metric_by_epoch=True),
    param_scheduler=dict(type='ParamSchedulerHook'),
    checkpoint=dict(type='CheckpointHook', by_epoch=True, interval=50),
    sampler_seed=dict(type='DistSamplerSeedHook'),
    visualization=dict(type='SegVisualizationHook'))
launcher = 'none'
work_dir = '../myProj/voc2012/TrainResult/deeplabv3/ep50-1dice0.4ce+0.4ce'
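One thing worth checking in this config: `param_scheduler` uses `PolyLR` with `end=20000` and `by_epoch=True`, while `train_cfg` only runs `max_epochs=50`, so the learning rate barely decays over the whole run. A minimal sketch (assuming the usual poly formula `lr = (base_lr - eta_min) * (1 - t / end) ** power + eta_min`) reproduces the `lr: 9.9782e-03` the log prints during epoch 50:

```python
# Poly decay as configured above; the exact formula is an assumption based on
# the standard definition of poly learning-rate schedules.
base_lr, eta_min, power, end = 0.01, 0.0001, 0.9, 20000

def poly_lr(t: float) -> float:
    """Learning rate after t scheduler steps (epochs, since by_epoch=True)."""
    return (base_lr - eta_min) * (1 - t / end) ** power + eta_min

print(f"{poly_lr(49):.4e}")     # 9.9782e-03 -- the lr used during epoch 50
print(f"{poly_lr(20000):.4e}")  # 1.0000e-04 -- eta_min, reached only at end=20000
```

So the run finishes while the schedule is still essentially at its initial value of 0.01; with `end` matched to `max_epochs` (or `by_epoch=False` and an iteration budget), the decay would actually take effect within the 50 epochs.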

2023/08/03 21:27:23 - mmengine - INFO - Distributed training is not used, all SyncBatchNorm (SyncBN) layers in the model will be automatically reverted to BatchNormXd layers if they are used.
2023/08/03 21:27:23 - mmengine - INFO - Hooks will be executed in the following order:
before_run: (VERY_HIGH ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook

before_train: (VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(VERY_LOW ) CheckpointHook

before_train_epoch: (VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(NORMAL ) DistSamplerSeedHook

before_train_iter: (VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook

after_train_iter: (VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(NORMAL ) SegVisualizationHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook

after_train_epoch: (NORMAL ) IterTimerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook

before_val_epoch: (NORMAL ) IterTimerHook

before_val_iter: (NORMAL ) IterTimerHook

after_val_iter: (NORMAL ) IterTimerHook
(NORMAL ) SegVisualizationHook
(BELOW_NORMAL) LoggerHook

after_val_epoch: (VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook

after_train: (VERY_LOW ) CheckpointHook

before_test_epoch: (NORMAL ) IterTimerHook

before_test_iter: (NORMAL ) IterTimerHook

after_test_iter: (NORMAL ) IterTimerHook
(NORMAL ) SegVisualizationHook
(BELOW_NORMAL) LoggerHook

after_test_epoch: (VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook

after_run: (BELOW_NORMAL) LoggerHook

2023/08/03 21:27:23 - mmengine - WARNING - The prefix is not set in metric class IoUMetric.
2023/08/03 21:27:24 - mmengine - INFO - load model from: open-mmlab://resnet50_v1c
2023/08/03 21:27:24 - mmengine - INFO - Loads checkpoint by openmmlab backend from path: open-mmlab://resnet50_v1c
2023/08/03 21:27:24 - mmengine - WARNING - The model and loaded state dict do not match exactly

unexpected key in source state_dict: fc.weight, fc.bias
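For context, the `fc.weight` and `fc.bias` flagged above are the ImageNet classification head of the ResNet-50 checkpoint; the segmentation model has no such layer, so these keys are simply ignored and the warning is harmless. A hypothetical way to see this yourself (the filename `resnet50_v1c.pth` is illustrative, standing in for the cached open-mmlab checkpoint):

```python
import torch

# Hypothetical inspection of the cached checkpoint; 'resnet50_v1c.pth' is a
# placeholder path, not the actual cache location.
ckpt = torch.load('resnet50_v1c.pth', map_location='cpu')
state_dict = ckpt.get('state_dict', ckpt)  # mmcv checkpoints often nest weights here

# Keys with no counterpart in the segmentation backbone:
print([k for k in state_dict if k.startswith('fc.')])
# ['fc.weight', 'fc.bias'] -- matches the warning above
```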

Name of parameter - Initialization information

backbone.stem.0.weight - torch.Size([32, 3, 3, 3]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.stem.1.weight - torch.Size([32]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.stem.1.bias - torch.Size([32]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.stem.3.weight - torch.Size([32, 32, 3, 3]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.stem.4.weight - torch.Size([32]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.stem.4.bias - torch.Size([32]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.stem.6.weight - torch.Size([64, 32, 3, 3]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.stem.7.weight - torch.Size([64]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.stem.7.bias - torch.Size([64]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.0.conv1.weight - torch.Size([64, 64, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.0.bn1.weight - torch.Size([64]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.0.bn1.bias - torch.Size([64]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.0.conv2.weight - torch.Size([64, 64, 3, 3]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.0.bn2.weight - torch.Size([64]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.0.bn2.bias - torch.Size([64]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.0.conv3.weight - torch.Size([256, 64, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.0.bn3.weight - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.0.bn3.bias - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.0.downsample.0.weight - torch.Size([256, 64, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.0.downsample.1.weight - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.0.downsample.1.bias - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.1.conv1.weight - torch.Size([64, 256, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.1.bn1.weight - torch.Size([64]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.1.bn1.bias - torch.Size([64]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.1.conv2.weight - torch.Size([64, 64, 3, 3]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.1.bn2.weight - torch.Size([64]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.1.bn2.bias - torch.Size([64]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.1.conv3.weight - torch.Size([256, 64, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.1.bn3.weight - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.1.bn3.bias - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.2.conv1.weight - torch.Size([64, 256, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.2.bn1.weight - torch.Size([64]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.2.bn1.bias - torch.Size([64]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.2.conv2.weight - torch.Size([64, 64, 3, 3]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.2.bn2.weight - torch.Size([64]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.2.bn2.bias - torch.Size([64]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.2.conv3.weight - torch.Size([256, 64, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.2.bn3.weight - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer1.2.bn3.bias - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.0.conv1.weight - torch.Size([128, 256, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.0.bn1.weight - torch.Size([128]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.0.bn1.bias - torch.Size([128]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.0.conv2.weight - torch.Size([128, 128, 3, 3]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.0.bn2.weight - torch.Size([128]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.0.bn2.bias - torch.Size([128]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.0.conv3.weight - torch.Size([512, 128, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.0.bn3.weight - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.0.bn3.bias - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.0.downsample.0.weight - torch.Size([512, 256, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.0.downsample.1.weight - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.0.downsample.1.bias - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.1.conv1.weight - torch.Size([128, 512, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.1.bn1.weight - torch.Size([128]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.1.bn1.bias - torch.Size([128]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.1.conv2.weight - torch.Size([128, 128, 3, 3]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.1.bn2.weight - torch.Size([128]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.1.bn2.bias - torch.Size([128]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.1.conv3.weight - torch.Size([512, 128, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.1.bn3.weight - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.1.bn3.bias - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.2.conv1.weight - torch.Size([128, 512, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.2.bn1.weight - torch.Size([128]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.2.bn1.bias - torch.Size([128]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.2.conv2.weight - torch.Size([128, 128, 3, 3]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.2.bn2.weight - torch.Size([128]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.2.bn2.bias - torch.Size([128]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.2.conv3.weight - torch.Size([512, 128, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.2.bn3.weight - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.2.bn3.bias - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.3.conv1.weight - torch.Size([128, 512, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.3.bn1.weight - torch.Size([128]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.3.bn1.bias - torch.Size([128]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.3.conv2.weight - torch.Size([128, 128, 3, 3]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.3.bn2.weight - torch.Size([128]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.3.bn2.bias - torch.Size([128]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.3.conv3.weight - torch.Size([512, 128, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.3.bn3.weight - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer2.3.bn3.bias - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.0.conv1.weight - torch.Size([256, 512, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.0.bn1.weight - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.0.bn1.bias - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.0.conv2.weight - torch.Size([256, 256, 3, 3]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.0.bn2.weight - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.0.bn2.bias - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.0.conv3.weight - torch.Size([1024, 256, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.0.bn3.weight - torch.Size([1024]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.0.bn3.bias - torch.Size([1024]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.0.downsample.0.weight - torch.Size([1024, 512, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.0.downsample.1.weight - torch.Size([1024]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.0.downsample.1.bias - torch.Size([1024]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.1.conv1.weight - torch.Size([256, 1024, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.1.bn1.weight - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.1.bn1.bias - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.1.conv2.weight - torch.Size([256, 256, 3, 3]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.1.bn2.weight - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.1.bn2.bias - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.1.conv3.weight - torch.Size([1024, 256, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.1.bn3.weight - torch.Size([1024]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.1.bn3.bias - torch.Size([1024]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.2.conv1.weight - torch.Size([256, 1024, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.2.bn1.weight - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.2.bn1.bias - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.2.conv2.weight - torch.Size([256, 256, 3, 3]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.2.bn2.weight - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.2.bn2.bias - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.2.conv3.weight - torch.Size([1024, 256, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.2.bn3.weight - torch.Size([1024]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.2.bn3.bias - torch.Size([1024]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.3.conv1.weight - torch.Size([256, 1024, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.3.bn1.weight - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.3.bn1.bias - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.3.conv2.weight - torch.Size([256, 256, 3, 3]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.3.bn2.weight - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.3.bn2.bias - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.3.conv3.weight - torch.Size([1024, 256, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.3.bn3.weight - torch.Size([1024]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.3.bn3.bias - torch.Size([1024]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.4.conv1.weight - torch.Size([256, 1024, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.4.bn1.weight - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.4.bn1.bias - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.4.conv2.weight - torch.Size([256, 256, 3, 3]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.4.bn2.weight - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.4.bn2.bias - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.4.conv3.weight - torch.Size([1024, 256, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.4.bn3.weight - torch.Size([1024]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.4.bn3.bias - torch.Size([1024]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.5.conv1.weight - torch.Size([256, 1024, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.5.bn1.weight - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.5.bn1.bias - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.5.conv2.weight - torch.Size([256, 256, 3, 3]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.5.bn2.weight - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.5.bn2.bias - torch.Size([256]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.5.conv3.weight - torch.Size([1024, 256, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.5.bn3.weight - torch.Size([1024]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer3.5.bn3.bias - torch.Size([1024]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.0.conv1.weight - torch.Size([512, 1024, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.0.bn1.weight - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.0.bn1.bias - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.0.conv2.weight - torch.Size([512, 512, 3, 3]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.0.bn2.weight - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.0.bn2.bias - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.0.conv3.weight - torch.Size([2048, 512, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.0.bn3.weight - torch.Size([2048]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.0.bn3.bias - torch.Size([2048]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.0.downsample.0.weight - torch.Size([2048, 1024, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.0.downsample.1.weight - torch.Size([2048]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.0.downsample.1.bias - torch.Size([2048]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.1.conv1.weight - torch.Size([512, 2048, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.1.bn1.weight - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.1.bn1.bias - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.1.conv2.weight - torch.Size([512, 512, 3, 3]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.1.bn2.weight - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.1.bn2.bias - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.1.conv3.weight - torch.Size([2048, 512, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.1.bn3.weight - torch.Size([2048]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.1.bn3.bias - torch.Size([2048]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.2.conv1.weight - torch.Size([512, 2048, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.2.bn1.weight - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.2.bn1.bias - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.2.conv2.weight - torch.Size([512, 512, 3, 3]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.2.bn2.weight - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.2.bn2.bias - torch.Size([512]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.2.conv3.weight - torch.Size([2048, 512, 1, 1]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.2.bn3.weight - torch.Size([2048]): PretrainedInit: load from open-mmlab://resnet50_v1c

backbone.layer4.2.bn3.bias - torch.Size([2048]): PretrainedInit: load from open-mmlab://resnet50_v1c

decode_head.conv_seg.weight - torch.Size([21, 512, 1, 1]): NormalInit: mean=0, std=0.01, bias=0

decode_head.conv_seg.bias - torch.Size([21]): NormalInit: mean=0, std=0.01, bias=0

decode_head.image_pool.1.conv.weight - torch.Size([512, 2048, 1, 1]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.image_pool.1.bn.weight - torch.Size([512]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.image_pool.1.bn.bias - torch.Size([512]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.0.conv.weight - torch.Size([512, 2048, 1, 1]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.0.bn.weight - torch.Size([512]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.0.bn.bias - torch.Size([512]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.1.depthwise_conv.conv.weight - torch.Size([2048, 1, 3, 3]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.1.depthwise_conv.bn.weight - torch.Size([2048]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.1.depthwise_conv.bn.bias - torch.Size([2048]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.1.pointwise_conv.conv.weight - torch.Size([512, 2048, 1, 1]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.1.pointwise_conv.bn.weight - torch.Size([512]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.1.pointwise_conv.bn.bias - torch.Size([512]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.2.depthwise_conv.conv.weight - torch.Size([2048, 1, 3, 3]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.2.depthwise_conv.bn.weight - torch.Size([2048]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.2.depthwise_conv.bn.bias - torch.Size([2048]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.2.pointwise_conv.conv.weight - torch.Size([512, 2048, 1, 1]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.2.pointwise_conv.bn.weight - torch.Size([512]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.2.pointwise_conv.bn.bias - torch.Size([512]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.3.depthwise_conv.conv.weight - torch.Size([2048, 1, 3, 3]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.3.depthwise_conv.bn.weight - torch.Size([2048]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.3.depthwise_conv.bn.bias - torch.Size([2048]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.3.pointwise_conv.conv.weight - torch.Size([512, 2048, 1, 1]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.3.pointwise_conv.bn.weight - torch.Size([512]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.aspp_modules.3.pointwise_conv.bn.bias - torch.Size([512]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.bottleneck.conv.weight - torch.Size([512, 2560, 3, 3]): Initialized by user-defined init_weights in ConvModule

decode_head.bottleneck.bn.weight - torch.Size([512]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.bottleneck.bn.bias - torch.Size([512]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.c1_bottleneck.conv.weight - torch.Size([48, 256, 1, 1]): Initialized by user-defined init_weights in ConvModule

decode_head.c1_bottleneck.bn.weight - torch.Size([48]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.c1_bottleneck.bn.bias - torch.Size([48]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.sep_bottleneck.0.depthwise_conv.conv.weight - torch.Size([560, 1, 3, 3]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.sep_bottleneck.0.depthwise_conv.bn.weight - torch.Size([560]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.sep_bottleneck.0.depthwise_conv.bn.bias - torch.Size([560]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.sep_bottleneck.0.pointwise_conv.conv.weight - torch.Size([512, 560, 1, 1]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.sep_bottleneck.0.pointwise_conv.bn.weight - torch.Size([512]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.sep_bottleneck.0.pointwise_conv.bn.bias - torch.Size([512]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.sep_bottleneck.1.depthwise_conv.conv.weight - torch.Size([512, 1, 3, 3]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.sep_bottleneck.1.depthwise_conv.bn.weight - torch.Size([512]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.sep_bottleneck.1.depthwise_conv.bn.bias - torch.Size([512]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.sep_bottleneck.1.pointwise_conv.conv.weight - torch.Size([512, 512, 1, 1]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.sep_bottleneck.1.pointwise_conv.bn.weight - torch.Size([512]): The value is the same before and after calling init_weights of EncoderDecoder

decode_head.sep_bottleneck.1.pointwise_conv.bn.bias - torch.Size([512]): The value is the same before and after calling init_weights of EncoderDecoder

auxiliary_head.conv_seg.weight - torch.Size([21, 256, 1, 1]): NormalInit: mean=0, std=0.01, bias=0

auxiliary_head.conv_seg.bias - torch.Size([21]): NormalInit: mean=0, std=0.01, bias=0

auxiliary_head.convs.0.conv.weight - torch.Size([256, 1024, 3, 3]): The value is the same before and after calling init_weights of EncoderDecoder

auxiliary_head.convs.0.bn.weight - torch.Size([256]): The value is the same before and after calling init_weights of EncoderDecoder

auxiliary_head.convs.0.bn.bias - torch.Size([256]): The value is the same before and after calling init_weights of EncoderDecoder
2023/08/03 21:27:24 - mmengine - WARNING - "FileClient" will be deprecated in future. Please use io functions in https://mmengine.readthedocs.io/en/latest/api/fileio.html#file-io
2023/08/03 21:27:24 - mmengine - WARNING - "HardDiskBackend" is the alias of "LocalBackend" and the former will be deprecated in future.
2023/08/03 21:27:24 - mmengine - INFO - Checkpoints will be saved to W:\PythonWork\mmseg\myProj\voc2012\TrainResult\deeplabv3\ep50-1dice0.4ce+0.4ce.
2023/08/03 21:33:25 - mmengine - INFO - Exp name: cxm-deeplabv3+-r50-voc2012-test_20230803_212720
2023/08/03 21:33:25 - mmengine - INFO - Epoch(train) [1][366/366] lr: 1.0000e-02 eta: 4:54:51 time: 0.9716 data_time: 0.0019 memory: 8370 loss: 1.5828 decode.loss_ce: 0.7918 decode.loss_dice: 0.2687 decode.acc_seg: 76.8677 aux.loss_ce: 0.5223 aux.acc_seg: 72.6645
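Note how the printed `loss` relates to the per-head terms: the logged components appear to already include the `loss_weight`s from the config (0.4 CE + 1.0 Dice on the decode head, 0.4 CE on the auxiliary head), so the total is simply their sum:

```python
# Components from the epoch-1 line above; the logged total is their plain sum.
decode_loss_ce = 0.7918    # decode head CrossEntropyLoss (loss_weight=0.4)
decode_loss_dice = 0.2687  # decode head DiceLoss (loss_weight=1.0)
aux_loss_ce = 0.5223       # auxiliary head CrossEntropyLoss (loss_weight=0.4)

print(f"{decode_loss_ce + decode_loss_dice + aux_loss_ce:.4f}")  # 1.5828
```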

2023/08/04 04:06:06 - mmengine - INFO - Epoch(train) [50][366/366] lr: 9.9782e-03 eta: 0:00:00 time: 0.9127 data_time: 0.0014 memory: 7889 loss: 0.6733 decode.loss_ce: 0.2024 decode.loss_dice: 0.1679 decode.acc_seg: 66.7279 aux.loss_ce: 0.3029 aux.acc_seg: 66.5489
2023/08/04 04:06:06 - mmengine - INFO - Saving checkpoint at 50 epochs
2023/08/04 04:06:55 - mmengine - INFO - Epoch(val) [50][ 500/1449] eta: 0:01:28 time: 0.1021 data_time: 0.0006 memory: 1442
2023/08/04 04:07:41 - mmengine - INFO - Epoch(val) [50][1000/1449] eta: 0:00:42 time: 0.0918 data_time: 0.0004 memory: 1138
2023/08/04 04:08:23 - mmengine - INFO - per class results:
2023/08/04 04:08:23 - mmengine - INFO -
+-------------+-------+-------+--------+-----------+--------+
|    Class    |  IoU  |  Acc  | Fscore | Precision | Recall |
+-------------+-------+-------+--------+-----------+--------+
|  background | 77.8  | 82.7  | 87.51  |   92.91   |  82.7  |
|  aeroplane  | 54.26 | 57.91 | 70.35  |   89.59   | 57.91  |
|   bicycle   |  0.0  |  0.0  |  nan   |    nan    |  0.0   |
|     bird    | 23.6  | 48.49 | 38.19  |   31.5    | 48.49  |
|     boat    | 28.83 | 53.01 | 44.76  |   38.73   | 53.01  |
|    bottle   | 28.6  | 48.18 | 44.47  |   41.3    | 48.18  |
|     bus     | 49.62 | 53.28 | 66.33  |   87.85   | 53.28  |
|     car     | 58.71 | 65.32 | 73.98  |   85.3    | 65.32  |
|     cat     | 44.17 | 70.17 | 61.28  |   54.38   | 70.17  |
|    chair    | 8.68  | 17.73 | 15.97  |   14.54   | 17.73  |
|     cow     | 32.0  | 37.19 | 48.48  |   69.61   | 37.19  |
| diningtable | 30.69 | 43.03 | 46.97  |   51.69   | 43.03  |
|     dog     | 25.3  | 36.53 | 40.39  |   45.16   | 36.53  |
|    horse    | 34.13 | 74.4  | 50.89  |   38.68   | 74.4   |
|  motorbike  | 48.17 | 81.68 | 65.02  |   54.0    | 81.68  |
|    person   | 50.94 | 58.68 | 67.5   |   79.43   | 58.68  |
| pottedplant | 12.64 | 63.87 | 22.44  |   13.61   | 63.87  |
|    sheep    | 21.37 | 87.08 | 35.22  |   22.07   | 87.08  |
|     sofa    | 16.65 | 63.26 | 28.55  |   18.44   | 63.26  |
|    train    | 46.01 | 83.43 | 63.02  |   50.64   | 83.43  |
|  tvmonitor  | 32.78 | 61.53 | 49.38  |   41.23   | 61.53  |
+-------------+-------+-------+--------+-----------+--------+
2023/08/04 04:08:23 - mmengine - INFO - Epoch(val) [50][1449/1449] aAcc: 76.0500 mIoU: 34.5200 mAcc: 56.5500 mFscore: 51.0400 mPrecision: 51.0300 mRecall: 56.5500 data_time: 0.0005 time: 0.0933
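As a sanity check, the reported mIoU is the unweighted mean of the per-class IoU column above (the 0.0 for bicycle drags it down noticeably):

```python
# Per-class IoU values copied from the table above, in order.
per_class_iou = [
    77.8, 54.26, 0.0, 23.6, 28.83, 28.6, 49.62, 58.71, 44.17, 8.68, 32.0,
    30.69, 25.3, 34.13, 48.17, 50.94, 12.64, 21.37, 16.65, 46.01, 32.78,
]
print(f"mIoU: {sum(per_class_iou) / len(per_class_iou):.2f}")  # mIoU: 34.52
```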
