mmselfsup
AttributeError: 'NoneType' object has no attribute 'keys'
my dataset config:
val_pipeline = [
    dict(type='Resize', size=224),
    dict(type='ToTensor'),
    dict(type='Normalize', **img_norm_cfg),
]
data = dict(
    samples_per_gpu=16,  # 256*16(gpu)=4096
    workers_per_gpu=4,
    train=dict(
        type=dataset_type,
        data_source=dict(
            type=data_source,
            data_prefix='data/imagenet/train',
            ann_file='data/imagenet/meta/train.txt',
        ),
        num_views=[1, 1],
        pipelines=[train_pipeline1, train_pipeline2],
        prefetch=prefetch,
    ),
    val=dict(
        type='SingleViewDataset',
        data_source=dict(
            type=data_source,
            data_prefix='data/imagenet/val',
            ann_file='data/imagenet/meta/val.txt',
        ),
        pipeline=val_pipeline,
        prefetch=prefetch,
    ))
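For context, the config above references several names that are defined elsewhere in the file and not shown in this post. A minimal sketch of what those definitions typically look like (the values here are illustrative assumptions, not taken from the original config):

```python
# Hypothetical supporting definitions referenced by the data dict above;
# the real values live earlier in the user's config file.
dataset_type = 'MultiViewDataset'
data_source = 'ImageNet'
prefetch = False
img_norm_cfg = dict(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
train_pipeline1 = [dict(type='RandomResizedCrop', size=224)]
train_pipeline2 = [dict(type='RandomResizedCrop', size=224)]
```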
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> ] 9/11, 13.3 task/s, elapsed: 1s, ETA: 0s
    runner.run(data_loaders, cfg.workflow)
  File "/home/jony/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 130, in run
    epoch_runner(data_loaders[i], **kwargs)
  File "/home/jony/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/epoch_based_runner.py", line 56, in train
    self.call_hook('after_train_epoch')
  File "/home/jony/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/base_runner.py", line 309, in call_hook
    getattr(hook, fn_name)(self)
  File "/home/jony/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/hooks/evaluation.py", line 267, in after_train_epoch
    self._do_evaluate(runner)
  File "/home/jony/anaconda3/envs/open-mmlab/lib/python3.7/site-packages/mmcv/runner/hooks/evaluation.py", line 503, in _do_evaluate
    gpu_collect=self.gpu_collect)
  File "/home/jony/workspace/Project/mmselfsup/mmselfsup/utils/test_helper.py", line 28, in multi_gpu_test
    len(data_loader.dataset))
  File "/home/jony/workspace/Project/mmselfsup/mmselfsup/utils/collect.py", line 71, in dist_forward_collect
    for k in results[0].keys():
AttributeError: 'NoneType' object has no attribute 'keys'
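The failing line in collect.py assumes every batch's forward pass returned a dict of results; when a pre-training model has no test-mode output, the entries are None and calling `.keys()` on them raises. A minimal standalone sketch of that failure mode (function names here are illustrative, not mmselfsup's actual code):

```python
def forward_collect(func, n):
    """Mimics the collection loop in dist_forward_collect: run the model
    on each batch, then read the keys of the first result."""
    results = [func(i) for i in range(n)]
    return list(results[0].keys())  # raises AttributeError if result is None


def pretrain_forward(batch):
    # A pre-training model in test mode has nothing meaningful to return,
    # so the evaluation hook ends up collecting None instead of a dict.
    return None


try:
    forward_collect(pretrain_forward, 2)
    err = None
except AttributeError as e:
    err = str(e)

print(err)  # 'NoneType' object has no attribute 'keys'
```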
Please provide more information to help the discussion: the running command, the version you are using, the dataset, and your environment. Thanks.
running command: bash tools/dist_train.sh configs/selfsup/org/mocov3.py 2 --work_dir work_dirs/mocov3/
dataset:
Can you share an example of how to enable evaluation when training MoCo v3?
I added this line to the original MoCo config file: evaluation = dict(interval=1, save_best="auto", metric='accuracy', metric_options={'topk': (1, )})
Then I got an error: AttributeError: 'ConfigDict' object has no attribute 'val'
So I added:
val_pipeline = [
    dict(type='Resize', size=224),
    dict(type='ToTensor'),
    dict(type='Normalize', **img_norm_cfg),
]
val=dict(
    type='SingleViewDataset',
    data_source=dict(
        type=data_source,
        data_prefix='data/imagenet/val',
        ann_file='data/imagenet/meta/val.txt',
    ),
    pipeline=val_pipeline,
    prefetch=prefetch)
and then got the error shown above.
Thanks for your report. Actually, we don't support running evaluation during pre-training, because a self-supervised learning task doesn't output accuracy the way a classification task does.
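To illustrate the point in the answer above: a classification head produces per-class scores that an accuracy metric can compare against labels, while a contrastive pre-training step only produces a loss, so an accuracy metric has nothing to consume. A toy sketch (not mmselfsup's actual API; both functions are hypothetical):

```python
import math


def classification_forward(features):
    # Hypothetical classifier output: one score per class, so a
    # prediction (argmax) and hence accuracy can be computed.
    return [0.1, 0.7, 0.2]


def contrastive_forward(query, key):
    # Toy InfoNCE-style loss for a single positive pair; the output
    # is a scalar loss dict with no class prediction to score.
    sim = sum(q * k for q, k in zip(query, key))
    loss = -math.log(math.exp(sim) / (math.exp(sim) + 1.0))
    return {'loss': loss}


scores = classification_forward([0.0, 1.0])
pred = scores.index(max(scores))  # accuracy is computable from this
out = contrastive_forward([1.0, 0.0], [0.9, 0.1])  # only {'loss': ...}
```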
Okay. Can you provide a usage example for tools/test.py? I can't find one in the documentation.
python tools/test.py configs/selfsup/org/mocov3.py work_dirs/mocov3/latest.pth --gpu-id 0
I used this command and got the same error.
Sorry for the inconvenience. Currently, test.py only supports models after fine-tuning or linear evaluation on downstream tasks; pre-training configs and models are not supported. We will improve this in a future version.