
[Bug] AttributeError: 'DataSample' object has no attribute 'gt_label'

Open · baiguosummer opened this issue 1 year ago • 3 comments

Branch

main branch (mmpretrain version)

Describe the bug

I created my own Yolov5Data dataset in the COCO format used by mmyolo, then used it to train a ResNet classification model, and got this error: AttributeError: 'DataSample' object has no attribute 'gt_label'
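For context, mmpretrain's classification pipeline expects every dataset item to carry an image-level gt_label, which PackInputs then attaches to the packed DataSample, whereas a COCO detection annotation only provides per-instance boxes and category ids. A rough sketch of the expected flow, assuming the mmpretrain 1.x PackInputs/DataSample behaviour (the image and label values below are made up):

import numpy as np
from mmpretrain.datasets.transforms import PackInputs

# A classification-style item: one image plus one image-level class index.
item = dict(
    img=np.zeros((224, 224, 3), dtype=np.uint8),  # dummy image
    gt_label=2,                                   # image-level label the head will train on
)
packed = PackInputs()(item)
print(packed['data_samples'].gt_label)  # roughly tensor([2]); this is the field missing in the error below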

(torch) panda@amd:/media/panda/nvme2T/pycharm/openMMlab$ bash mmpretrain/sh/4model_train.sh

08/04 16:01:48 - mmengine - INFO -
System environment:
    sys.platform: linux
    Python: 3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]
    CUDA available: True
    numpy_random_seed: 1590878708
    GPU 0: NVIDIA GeForce RTX 3080
    CUDA_HOME: /usr/local/cuda-11.3
    NVCC: Cuda compilation tools, release 11.3, V11.3.58
    GCC: gcc (Ubuntu 9.5.0-1ubuntu1~22.04) 9.5.0
    PyTorch: 1.10.1
    PyTorch compiling details: PyTorch built with:

  • GCC 7.3

  • C++ Version: 201402

  • Intel(R) oneAPI Math Kernel Library Version 2023.1-Product Build 20230303 for Intel(R) 64 architecture applications

  • Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)

  • OpenMP 201511 (a.k.a. OpenMP 4.5)

  • LAPACK is enabled (usually provided by MKL)

  • NNPACK is enabled

  • CPU capability usage: AVX2

  • CUDA Runtime 11.3

  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37

  • CuDNN 8.2

  • Magma 2.5.2

  • Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

    TorchVision: 0.11.2
    OpenCV: 4.6.0
    MMEngine: 0.8.0

Runtime environment:
    cudnn_benchmark: False
    mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
    dist_cfg: {'backend': 'nccl'}
    seed: 1590878708
    deterministic: False
    Distributed launcher: none
    Distributed training: False
    GPU number: 1

08/04 16:01:48 - mmengine - INFO - Config:
model = dict( type='ImageClassifier', backbone=dict( type='ResNet', depth=50, num_stages=4, out_indices=(3, ), style='pytorch', frozen_stages=-1), neck=dict(type='GlobalAveragePooling'), head=dict( type='LinearClsHead', num_classes=6, in_channels=2048, loss=dict(type='CrossEntropyLoss', loss_weight=1.0), topk=( 1, 5, )), init_cfg=dict( type='Pretrained', checkpoint= 'mmpretrain/checkpoints/resnet50_8xb32_in1k_20210831-ea4938fc.pth'))
dataset_type = 'mmdet.CocoDataset'
data_preprocessor = dict( num_classes=6, mean=[ 123.675, 116.28, 103.53, ], std=[ 58.395, 57.12, 57.375, ], to_rgb=True)
train_pipeline = [ dict(type='LoadImageFromFile'), dict(type='RandomResizedCrop', scale=224), dict(type='RandomFlip', prob=0.5, direction='horizontal'), dict(type='PackInputs'), ]
test_pipeline = [ dict(type='LoadImageFromFile'), dict(type='ResizeEdge', scale=256, edge='short'), dict(type='CenterCrop', crop_size=224), dict(type='PackInputs'), ]
train_dataloader = dict( pin_memory=True, persistent_workers=True, collate_fn=dict(type='default_collate'), batch_size=8, num_workers=8, dataset=dict( type='mmdet.CocoDataset', data_root='data/mmyolo/Yolov5Data/', metainfo=dict( classes=[ 'bulldozer', 'car', 'excavator', 'hole', 'person', 'truck', ], palette=[ ( 20, 220, 20, ), ( 20, 20, 240, ), ( 220, 20, 20, ), ( 40, 100, 150, ), ( 200, 50, 120, ), ( 200, 150, 150, ), ]), ann_file='annotations/trainval.json', data_prefix=dict(img='images/'), pipeline=[ dict(type='LoadImageFromFile'), dict(type='RandomResizedCrop', scale=224), dict(type='RandomFlip', prob=0.5, direction='horizontal'), dict(type='PackInputs'), ]), sampler=dict(type='DefaultSampler', shuffle=True))
val_dataloader = dict( pin_memory=True, persistent_workers=True, collate_fn=dict(type='default_collate'), batch_size=32, num_workers=5, dataset=dict( type='mmdet.CocoDataset', metainfo=dict( classes=[ 'bulldozer', 'car', 'excavator', 'hole', 'person', 'truck', ], palette=[ ( 20, 220, 20, ), ( 20, 20, 240, ), ( 220, 20, 20, ), ( 40, 100, 150, ), ( 200, 50, 120, ), ( 200, 150, 150, ), ]), data_root='data/mmyolo/Yolov5Data/', ann_file='annotations/test.json', data_prefix=dict(img='images/'), pipeline=[ dict(type='LoadImageFromFile'), dict(type='ResizeEdge', scale=256, edge='short'), dict(type='CenterCrop', crop_size=224), dict(type='PackInputs'), ]), sampler=dict(type='DefaultSampler', shuffle=False))
val_evaluator = dict( type='Accuracy', topk=( 1, 5, ))
test_dataloader = dict( pin_memory=True, persistent_workers=True, collate_fn=dict(type='default_collate'), batch_size=32, num_workers=5, dataset=dict( type='mmdet.CocoDataset', metainfo=dict( classes=[ 'bulldozer', 'car', 'excavator', 'hole', 'person', 'truck', ], palette=[ ( 20, 220, 20, ), ( 20, 20, 240, ), ( 220, 20, 20, ), ( 40, 100, 150, ), ( 200, 50, 120, ), ( 200, 150, 150, ), ]), data_root='data/mmyolo/Yolov5Data/', ann_file='annotations/test.json', data_prefix=dict(img='images/'), pipeline=[ dict(type='LoadImageFromFile'), dict(type='ResizeEdge', scale=256, edge='short'), dict(type='CenterCrop', crop_size=224), dict(type='PackInputs'), ]), sampler=dict(type='DefaultSampler', shuffle=False))
test_evaluator = dict( type='Accuracy', topk=( 1, 5, ))
optim_wrapper = dict( optimizer=dict(type='SGD', lr=0.1, momentum=0.9, weight_decay=0.0001))
param_scheduler = dict( type='MultiStepLR', by_epoch=True, milestones=[ 30, 60, 90, ], gamma=0.1)
train_cfg = dict(by_epoch=True, max_epochs=100, val_interval=2, val_begin=4)
val_cfg = dict()
test_cfg = dict()
auto_scale_lr = dict(base_batch_size=256)
default_scope = 'mmpretrain'
default_hooks = dict( timer=dict(type='IterTimerHook'), logger=dict(type='LoggerHook', interval=10), param_scheduler=dict(type='ParamSchedulerHook'), checkpoint=dict( type='CheckpointHook', interval=2, max_keep_ckpts=4, save_best='auto'), sampler_seed=dict(type='DistSamplerSeedHook'), visualization=dict(type='VisualizationHook', enable=False))
env_cfg = dict( cudnn_benchmark=False, mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0), dist_cfg=dict(backend='nccl'))
vis_backends = [ dict(type='LocalVisBackend'), ]
visualizer = dict( type='UniversalVisualizer', vis_backends=[ dict(type='LocalVisBackend'), dict(type='TensorboardVisBackend'), ])
log_level = 'INFO'
load_from = 'mmpretrain/checkpoints/resnet50_8xb32_in1k_20210831-ea4938fc.pth'
resume = False
randomness = dict(seed=None, deterministic=False)
max_epochs = 100
data_root = 'data/mmyolo/Yolov5Data/'
work_dir = 'mmpretrain/work_train_dir/resnet50_in1k_c6_Yolov5Data'
train_batch_size_per_gpu = 8
train_num_workers = 8
save_epoch_intervals = 2
class_name = [ 'bulldozer', 'car', 'excavator', 'hole', 'person', 'truck', ]
num_classes = 6
metainfo = dict( classes=[ 'bulldozer', 'car', 'excavator', 'hole', 'person', 'truck', ], palette=[ ( 20, 220, 20, ), ( 20, 20, 240, ), ( 220, 20, 20, ), ( 40, 100, 150, ), ( 200, 50, 120, ), ( 200, 150, 150, ), ])
launcher = 'none'

08/04 16:01:50 - mmengine - INFO - Distributed training is not used, all SyncBatchNorm (SyncBN) layers in the model will be automatically reverted to BatchNormXd layers if they are used.
08/04 16:01:50 - mmengine - INFO - Hooks will be executed in the following order:
before_run: (VERY_HIGH ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook

before_train: (VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(VERY_LOW ) CheckpointHook

before_train_epoch: (VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(NORMAL ) DistSamplerSeedHook

before_train_iter: (VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook

after_train_iter: (VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook

after_train_epoch: (NORMAL ) IterTimerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook

before_val_epoch: (NORMAL ) IterTimerHook

before_val_iter: (NORMAL ) IterTimerHook

after_val_iter: (NORMAL ) IterTimerHook
(NORMAL ) VisualizationHook
(BELOW_NORMAL) LoggerHook

after_val_epoch: (VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook

after_train: (VERY_LOW ) CheckpointHook

before_test_epoch: (NORMAL ) IterTimerHook

before_test_iter: (NORMAL ) IterTimerHook

after_test_iter: (NORMAL ) IterTimerHook
(NORMAL ) VisualizationHook
(BELOW_NORMAL) LoggerHook

after_test_epoch: (VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook

after_run: (BELOW_NORMAL) LoggerHook

loading annotations into memory...
Done (t=0.27s)
creating index...
index created!
loading annotations into memory...
Done (t=0.04s)
creating index...
index created!
08/04 16:01:52 - mmengine - INFO - load model from: mmpretrain/checkpoints/resnet50_8xb32_in1k_20210831-ea4938fc.pth
08/04 16:01:52 - mmengine - INFO - Loads checkpoint by local backend from path: mmpretrain/checkpoints/resnet50_8xb32_in1k_20210831-ea4938fc.pth
08/04 16:01:52 - mmengine - WARNING - The model and loaded state dict do not match exactly

size mismatch for head.fc.weight: copying a param with shape torch.Size([1000, 2048]) from checkpoint, the shape in current model is torch.Size([6, 2048]).
size mismatch for head.fc.bias: copying a param with shape torch.Size([1000]) from checkpoint, the shape in current model is torch.Size([6]).
Loads checkpoint by local backend from path: mmpretrain/checkpoints/resnet50_8xb32_in1k_20210831-ea4938fc.pth
The model and loaded state dict do not match exactly

size mismatch for head.fc.weight: copying a param with shape torch.Size([1000, 2048]) from checkpoint, the shape in current model is torch.Size([6, 2048]).
size mismatch for head.fc.bias: copying a param with shape torch.Size([1000]) from checkpoint, the shape in current model is torch.Size([6]).
08/04 16:01:52 - mmengine - INFO - Load checkpoint from mmpretrain/checkpoints/resnet50_8xb32_in1k_20210831-ea4938fc.pth
08/04 16:01:52 - mmengine - WARNING - "FileClient" will be deprecated in future. Please use io functions in https://mmengine.readthedocs.io/en/latest/api/fileio.html#file-io
08/04 16:01:52 - mmengine - WARNING - "HardDiskBackend" is the alias of "LocalBackend" and the former will be deprecated in future.
08/04 16:01:52 - mmengine - INFO - Checkpoints will be saved to /media/panda/nvme2T/pycharm/openMMlab/mmpretrain/work_train_dir/resnet50_in1k_c6_Yolov5Data.
Traceback (most recent call last):
  File "mmpretrain/tools/train.py", line 166, in <module>
    main()
  File "mmpretrain/tools/train.py", line 162, in main
    runner.train()
  File "/home/panda/anaconda3/envs/torch/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1735, in train
    model = self.train_loop.run() # type: ignore
  File "/home/panda/anaconda3/envs/torch/lib/python3.8/site-packages/mmengine/runner/loops.py", line 96, in run
    self.run_epoch()
  File "/home/panda/anaconda3/envs/torch/lib/python3.8/site-packages/mmengine/runner/loops.py", line 112, in run_epoch
    self.run_iter(idx, data_batch)
  File "/home/panda/anaconda3/envs/torch/lib/python3.8/site-packages/mmengine/runner/loops.py", line 128, in run_iter
    outputs = self.runner.model.train_step(
  File "/home/panda/anaconda3/envs/torch/lib/python3.8/site-packages/mmengine/model/base_model/base_model.py", line 114, in train_step
    losses = self._run_forward(data, mode='loss') # type: ignore
  File "/home/panda/anaconda3/envs/torch/lib/python3.8/site-packages/mmengine/model/base_model/base_model.py", line 340, in _run_forward
    results = self(**data, mode=mode)
  File "/home/panda/anaconda3/envs/torch/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/panda/anaconda3/envs/torch/lib/python3.8/site-packages/mmpretrain/models/classifiers/image.py", line 122, in forward
    return self.loss(inputs, data_samples)
  File "/home/panda/anaconda3/envs/torch/lib/python3.8/site-packages/mmpretrain/models/classifiers/image.py", line 232, in loss
    return self.head.loss(feats, data_samples)
  File "/home/panda/anaconda3/envs/torch/lib/python3.8/site-packages/mmpretrain/models/heads/cls_head.py", line 80, in loss
    losses = self._get_loss(cls_score, data_samples, **kwargs)
  File "/home/panda/anaconda3/envs/torch/lib/python3.8/site-packages/mmpretrain/models/heads/cls_head.py", line 91, in _get_loss
    target = torch.cat([i.gt_label for i in data_samples])
  File "/home/panda/anaconda3/envs/torch/lib/python3.8/site-packages/mmpretrain/models/heads/cls_head.py", line 91, in <listcomp>
    target = torch.cat([i.gt_label for i in data_samples])
AttributeError: 'DataSample' object has no attribute 'gt_label'
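The last frames show what the classification head needs: ClsHead._get_loss concatenates i.gt_label over the batch, so every DataSample must already carry that field, and samples coming from a detection-style dataset do not. A minimal sketch that reproduces the mismatch in isolation, assuming the mmpretrain 1.x DataSample API:

import torch
from mmpretrain.structures import DataSample

empty = DataSample()                       # a sample without an image-level label
labelled = DataSample().set_gt_label(2)    # a sample the way the head expects it

print(hasattr(empty, 'gt_label'))          # False -> torch.cat([...]) raises the AttributeError above
print(torch.cat([labelled.gt_label]))      # tensor([2]) -- the target the loss is computed against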

Environment

This is my model config:

(torch) panda@amd:/media/panda/nvme2T/pycharm/openMMlab$ bash mmpretrain/sh/print_config.sh
model = dict( type='ImageClassifier', backbone=dict( type='ResNet', depth=50, num_stages=4, out_indices=(3, ), style='pytorch', frozen_stages=-1), neck=dict(type='GlobalAveragePooling'), head=dict( type='LinearClsHead', num_classes=6, in_channels=2048, loss=dict(type='CrossEntropyLoss', loss_weight=1.0), topk=( 1, 5, )), init_cfg=dict( type='Pretrained', checkpoint= 'mmpretrain/checkpoints/resnet50_8xb32_in1k_20210831-ea4938fc.pth'))
dataset_type = 'mmdet.CocoDataset'
data_preprocessor = dict( num_classes=6, mean=[ 123.675, 116.28, 103.53, ], std=[ 58.395, 57.12, 57.375, ], to_rgb=True)
train_pipeline = [ dict(type='LoadImageFromFile'), dict(type='RandomResizedCrop', scale=224), dict(type='RandomFlip', prob=0.5, direction='horizontal'), dict(type='PackInputs'), ]
test_pipeline = [ dict(type='LoadImageFromFile'), dict(type='ResizeEdge', scale=256, edge='short'), dict(type='CenterCrop', crop_size=224), dict(type='PackInputs'), ]
train_dataloader = dict( batch_size=8, num_workers=8, dataset=dict( type='mmdet.CocoDataset', data_root='data/mmyolo/Yolov5Data/', metainfo=dict( classes=[ 'bulldozer', 'car', 'excavator', 'hole', 'person', 'truck', ], palette=[ ( 20, 220, 20, ), ( 20, 20, 240, ), ( 220, 20, 20, ), ( 40, 100, 150, ), ( 200, 50, 120, ), ( 200, 150, 150, ), ]), ann_file='annotations/trainval.json', data_prefix=dict(img='images/'), pipeline=[ dict(type='LoadImageFromFile'), dict(type='RandomResizedCrop', scale=224), dict(type='RandomFlip', prob=0.5, direction='horizontal'), dict(type='PackInputs'), ]), sampler=dict(type='DefaultSampler', shuffle=True))
val_dataloader = dict( batch_size=32, num_workers=5, dataset=dict( type='mmdet.CocoDataset', metainfo=dict( classes=[ 'bulldozer', 'car', 'excavator', 'hole', 'person', 'truck', ], palette=[ ( 20, 220, 20, ), ( 20, 20, 240, ), ( 220, 20, 20, ), ( 40, 100, 150, ), ( 200, 50, 120, ), ( 200, 150, 150, ), ]), data_root='data/mmyolo/Yolov5Data/', ann_file='annotations/test.json', data_prefix=dict(img='images/'), pipeline=[ dict(type='LoadImageFromFile'), dict(type='ResizeEdge', scale=256, edge='short'), dict(type='CenterCrop', crop_size=224), dict(type='PackInputs'), ]), sampler=dict(type='DefaultSampler', shuffle=False))
val_evaluator = dict( type='Accuracy', topk=( 1, 5, ))
test_dataloader = dict( batch_size=32, num_workers=5, dataset=dict( type='mmdet.CocoDataset', metainfo=dict( classes=[ 'bulldozer', 'car', 'excavator', 'hole', 'person', 'truck', ], palette=[ ( 20, 220, 20, ), ( 20, 20, 240, ), ( 220, 20, 20, ), ( 40, 100, 150, ), ( 200, 50, 120, ), ( 200, 150, 150, ), ]), data_root='data/mmyolo/Yolov5Data/', ann_file='annotations/test.json', data_prefix=dict(img='images/'), pipeline=[ dict(type='LoadImageFromFile'), dict(type='ResizeEdge', scale=256, edge='short'), dict(type='CenterCrop', crop_size=224), dict(type='PackInputs'), ]), sampler=dict(type='DefaultSampler', shuffle=False))
test_evaluator = dict( type='Accuracy', topk=( 1, 5, ))
optim_wrapper = dict( optimizer=dict(type='SGD', lr=0.1, momentum=0.9, weight_decay=0.0001))
param_scheduler = dict( type='MultiStepLR', by_epoch=True, milestones=[ 30, 60, 90, ], gamma=0.1)
train_cfg = dict(by_epoch=True, max_epochs=100, val_interval=2, val_begin=4)
val_cfg = dict()
test_cfg = dict()
auto_scale_lr = dict(base_batch_size=256)
default_scope = 'mmpretrain'
default_hooks = dict( timer=dict(type='IterTimerHook'), logger=dict(type='LoggerHook', interval=10), param_scheduler=dict(type='ParamSchedulerHook'), checkpoint=dict( type='CheckpointHook', interval=2, max_keep_ckpts=4, save_best='auto'), sampler_seed=dict(type='DistSamplerSeedHook'), visualization=dict(type='VisualizationHook', enable=False))
env_cfg = dict( cudnn_benchmark=False, mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0), dist_cfg=dict(backend='nccl'))
vis_backends = [ dict(type='LocalVisBackend'), ]
visualizer = dict( type='UniversalVisualizer', vis_backends=[ dict(type='LocalVisBackend'), dict(type='TensorboardVisBackend'), ])
log_level = 'INFO'
load_from = 'mmpretrain/checkpoints/resnet50_8xb32_in1k_20210831-ea4938fc.pth'
resume = False
randomness = dict(seed=None, deterministic=False)
max_epochs = 100
data_root = 'data/mmyolo/Yolov5Data/'
work_dir = 'mmpretrain/work_train_dir/resnet50_in1k_c6_Yolov5Data'
train_batch_size_per_gpu = 8
train_num_workers = 8
save_epoch_intervals = 2
class_name = [ 'bulldozer', 'car', 'excavator', 'hole', 'person', 'truck', ]
num_classes = 6
metainfo = dict( classes=[ 'bulldozer', 'car', 'excavator', 'hole', 'person', 'truck', ], palette=[ ( 20, 220, 20, ), ( 20, 20, 240, ), ( 220, 20, 20, ), ( 40, 100, 150, ), ( 200, 50, 120, ), ( 200, 150, 150, ), ])

Other information

No response

baiguosummer · Aug 04 '23

I got the same problem. If you are doing a supervised task, try setting "with_label=True" in your config file; that worked for me. Good luck!

LlemoningL · Nov 08 '23
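Note: with_label appears to be an option of mmpretrain's own classification datasets (e.g. CustomDataset) rather than of mmdet.CocoDataset, so applying the suggestion above to the config in this issue would likely also mean switching the dataset type. A rough sketch of what that dataset block might look like; the annotation file name and its "relative/path label_index" line format are illustrative assumptions, not taken from this issue:

train_dataloader = dict(
    batch_size=8,
    num_workers=8,
    dataset=dict(
        type='CustomDataset',                  # mmpretrain classification dataset
        data_root='data/mmyolo/Yolov5Data/',
        ann_file='annotations/train_cls.txt',  # hypothetical image-level label list
        data_prefix='images/',
        with_label=True,                       # the flag suggested above
        pipeline=train_pipeline),
    sampler=dict(type='DefaultSampler', shuffle=True))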

> I got the same problem. If you are doing a supervised task, try setting "with_label=True" in your config file; that worked for me. Good luck!

Problem solved, thanks.

BARBERUM · Jan 30 '24

I have the same problem; however, this is an unsupervised task. Could someone please assist me? Thanks!

pKYZ · Aug 07 '24