ValueError: need at least one array to concatenate
I edit the code in PyCharm and push it to the server over SFTP for training, and I have run into a problem. Executing the sh file to train on the Windows system works fine, but executing the same sh file on the Ubuntu system reports an error; however, training through PyCharm's remote interpreter works without any error. The sh file contents and the error message are below.
python tools/train.py \
configs/my/ssd300_coco.py \
--work-dir runs \
--resume \
--amp
#--resume \
------------------------------------------------------------
System environment:
sys.platform: linux
Python: 3.9.0 (default, Nov 15 2020, 14:28:56) [GCC 7.3.0]
CUDA available: True
numpy_random_seed: 746404957
GPU 0: NVIDIA GeForce RTX 2070 SUPER
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.7, V11.7.64
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
PyTorch: 2.0.0+cu117
PyTorch compiling details: PyTorch built with:
- GCC 9.3
- C++ Version: 201703
- Intel(R) oneAPI Math Kernel Library Version 2022.2-Product Build 20220804 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.7
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
- CuDNN 8.9 (built against CUDA 11.8)
- Built with CuDNN 8.5
- Magma 2.6.1
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=1, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
TorchVision: 0.15.1+cu117
OpenCV: 4.7.0
MMEngine: 0.8.4
Runtime environment:
cudnn_benchmark: False
mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
dist_cfg: {'backend': 'nccl'}
seed: 746404957
Distributed launcher: none
Distributed training: False
GPU number: 1
------------------------------------------------------------
09/12 15:44:09 - mmengine - INFO - Config:
METAINFO = dict(
classes=(
'bulk cargo carrier',
'ore carrier',
'fishing boat',
'general cargo ship',
'container ship',
'passenger ship',
),
palette=[
(
220,
20,
60,
),
(
119,
11,
32,
),
(
0,
0,
142,
),
(
0,
0,
230,
),
(
106,
0,
228,
),
(
0,
60,
100,
),
])
auto_scale_lr = dict(base_batch_size=16, enable=False)
backend_args = None
cudnn_benchmark = True
custom_hooks = [
dict(type='NumClassCheckHook'),
dict(interval=50, priority='VERY_LOW', type='CheckInvalidLossHook'),
]
data_root = '/srv/samba/dingwenchao/SeaShips/COCO/'
dataset_type = 'CocoDataset'
default_hooks = dict(
checkpoint=dict(interval=1, type='CheckpointHook'),
logger=dict(interval=50, type='LoggerHook'),
param_scheduler=dict(type='ParamSchedulerHook'),
sampler_seed=dict(type='DistSamplerSeedHook'),
timer=dict(type='IterTimerHook'),
visualization=dict(type='DetVisualizationHook'))
default_scope = 'mmdet'
env_cfg = dict(
cudnn_benchmark=False,
dist_cfg=dict(backend='nccl'),
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0))
input_size = 300
launcher = 'none'
load_from = None
log_level = 'INFO'
log_processor = dict(by_epoch=True, type='LogProcessor', window_size=50)
max_epochs = 200
model = dict(
backbone=dict(
ceil_mode=True,
depth=16,
init_cfg=dict(
checkpoint='open-mmlab://vgg16_caffe', type='Pretrained'),
out_feature_indices=(
22,
34,
),
out_indices=(
3,
4,
),
type='SSDVGG',
with_last_pool=False),
bbox_head=dict(
anchor_generator=dict(
basesize_ratio_range=(
0.15,
0.9,
),
input_size=300,
ratios=[
[
2,
],
[
2,
3,
],
[
2,
3,
],
[
2,
3,
],
[
2,
],
[
2,
],
],
scale_major=False,
strides=[
8,
16,
32,
64,
100,
300,
],
type='SSDAnchorGenerator'),
bbox_coder=dict(
target_means=[
0.0,
0.0,
0.0,
0.0,
],
target_stds=[
0.1,
0.1,
0.2,
0.2,
],
type='DeltaXYWHBBoxCoder'),
in_channels=(
512,
1024,
512,
256,
256,
256,
),
num_classes=6,
type='SSDHead'),
data_preprocessor=dict(
bgr_to_rgb=True,
mean=[
123.675,
116.28,
103.53,
],
pad_size_divisor=1,
std=[
1,
1,
1,
],
type='DetDataPreprocessor'),
neck=dict(
in_channels=(
512,
1024,
),
l2_norm_scale=20,
level_paddings=(
1,
1,
0,
0,
),
level_strides=(
2,
2,
1,
1,
),
out_channels=(
512,
1024,
512,
256,
256,
256,
),
type='SSDNeck'),
test_cfg=dict(
max_per_img=200,
min_bbox_size=0,
nms=dict(iou_threshold=0.45, type='nms'),
nms_pre=1000,
score_thr=0.02),
train_cfg=dict(
allowed_border=-1,
assigner=dict(
gt_max_assign_all=False,
ignore_iof_thr=-1,
min_pos_iou=0.0,
neg_iou_thr=0.5,
pos_iou_thr=0.5,
type='MaxIoUAssigner'),
debug=False,
neg_pos_ratio=3,
pos_weight=-1,
sampler=dict(type='PseudoSampler'),
smoothl1_beta=1.0),
type='SingleStageDetector')
optim_wrapper = dict(
optimizer=dict(lr=0.002, momentum=0.9, type='SGD', weight_decay=0.0005),
type='OptimWrapper')
param_scheduler = [
dict(
begin=0, by_epoch=False, end=500, start_factor=0.001, type='LinearLR'),
dict(
begin=0,
by_epoch=True,
end=24,
gamma=0.1,
milestones=[
16,
22,
],
type='MultiStepLR'),
]
resume = False
test_cfg = dict(type='TestLoop')
test_dataloader = dict(
batch_size=1,
dataset=dict(
ann_file='/srv/samba/dingwenchao/SeaShips/COCO/annotations/test.json',
backend_args=None,
data_prefix=dict(img='test/'),
data_root='/srv/samba/dingwenchao/SeaShips/COCO/',
pipeline=[
dict(backend_args=None, type='LoadImageFromFile'),
dict(keep_ratio=False, scale=(
300,
300,
), type='Resize'),
dict(type='LoadAnnotations', with_bbox=True),
dict(
meta_keys=(
'img_id',
'img_path',
'ori_shape',
'img_shape',
'scale_factor',
),
type='PackDetInputs'),
],
test_mode=True,
type='CocoDataset'),
drop_last=False,
num_workers=2,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
test_evaluator = dict(
ann_file='/srv/samba/dingwenchao/SeaShips/COCO/annotations/val.json',
backend_args=None,
format_only=False,
metric='bbox',
type='CocoMetric')
test_pipeline = [
dict(backend_args=None, type='LoadImageFromFile'),
dict(keep_ratio=False, scale=(
300,
300,
), type='Resize'),
dict(type='LoadAnnotations', with_bbox=True),
dict(
meta_keys=(
'img_id',
'img_path',
'ori_shape',
'img_shape',
'scale_factor',
),
type='PackDetInputs'),
]
train_cfg = dict(max_epochs=200, type='EpochBasedTrainLoop', val_interval=1)
train_dataloader = dict(
batch_sampler=None,
batch_size=16,
dataset=dict(
dataset=dict(
ann_file=
'/srv/samba/dingwenchao/SeaShips/COCO/annotations/train.json',
backend_args=None,
data_prefix=dict(img='train/'),
data_root='/srv/samba/dingwenchao/SeaShips/COCO/',
filter_cfg=dict(filter_empty_gt=True, min_size=32),
pipeline=[
dict(backend_args=None, type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True),
dict(
mean=[
123.675,
116.28,
103.53,
],
ratio_range=(
1,
4,
),
to_rgb=True,
type='Expand'),
dict(
min_crop_size=0.3,
min_ious=(
0.1,
0.3,
0.5,
0.7,
0.9,
),
type='MinIoURandomCrop'),
dict(keep_ratio=False, scale=(
300,
300,
), type='Resize'),
dict(prob=0.5, type='RandomFlip'),
dict(
brightness_delta=32,
contrast_range=(
0.5,
1.5,
),
hue_delta=18,
saturation_range=(
0.5,
1.5,
),
type='PhotoMetricDistortion'),
dict(type='PackDetInputs'),
],
type='CocoDataset'),
times=5,
type='RepeatDataset'),
num_workers=8,
persistent_workers=True,
sampler=dict(shuffle=True, type='DefaultSampler'))
train_pipeline = [
dict(backend_args=None, type='LoadImageFromFile'),
dict(type='LoadAnnotations', with_bbox=True),
dict(
mean=[
123.675,
116.28,
103.53,
],
ratio_range=(
1,
4,
),
to_rgb=True,
type='Expand'),
dict(
min_crop_size=0.3,
min_ious=(
0.1,
0.3,
0.5,
0.7,
0.9,
),
type='MinIoURandomCrop'),
dict(keep_ratio=False, scale=(
300,
300,
), type='Resize'),
dict(prob=0.5, type='RandomFlip'),
dict(
brightness_delta=32,
contrast_range=(
0.5,
1.5,
),
hue_delta=18,
saturation_range=(
0.5,
1.5,
),
type='PhotoMetricDistortion'),
dict(type='PackDetInputs'),
]
val_cfg = dict(type='ValLoop')
val_dataloader = dict(
batch_size=16,
dataset=dict(
ann_file='/srv/samba/dingwenchao/SeaShips/COCO/annotations/val.json',
backend_args=None,
data_prefix=dict(img='val/'),
data_root='/srv/samba/dingwenchao/SeaShips/COCO/',
pipeline=[
dict(backend_args=None, type='LoadImageFromFile'),
dict(keep_ratio=False, scale=(
300,
300,
), type='Resize'),
dict(type='LoadAnnotations', with_bbox=True),
dict(
meta_keys=(
'img_id',
'img_path',
'ori_shape',
'img_shape',
'scale_factor',
),
type='PackDetInputs'),
],
test_mode=True,
type='CocoDataset'),
drop_last=False,
num_workers=8,
persistent_workers=True,
sampler=dict(shuffle=False, type='DefaultSampler'))
val_evaluator = dict(
ann_file='/srv/samba/dingwenchao/SeaShips/COCO/annotations/val.json',
backend_args=None,
format_only=False,
metric='bbox',
type='CocoMetric')
vis_backends = [
dict(type='LocalVisBackend'),
]
visualizer = dict(
name='visualizer',
type='DetLocalVisualizer',
vis_backends=[
dict(type='LocalVisBackend'),
])
work_dir = './work_dirs/ssd300_coco'
09/12 15:44:10 - mmengine - INFO - Distributed training is not used, all SyncBatchNorm (SyncBN) layers in the model will be automatically reverted to BatchNormXd layers if they are used.
09/12 15:44:10 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook
--------------------
before_train:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(VERY_LOW ) CheckpointHook
--------------------
before_train_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(NORMAL ) DistSamplerSeedHook
(NORMAL ) NumClassCheckHook
--------------------
before_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
--------------------
after_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
(VERY_LOW ) CheckInvalidLossHook
--------------------
after_train_epoch:
(NORMAL ) IterTimerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
before_val:
(VERY_HIGH ) RuntimeInfoHook
--------------------
before_val_epoch:
(NORMAL ) IterTimerHook
(NORMAL ) NumClassCheckHook
--------------------
before_val_iter:
(NORMAL ) IterTimerHook
--------------------
after_val_iter:
(NORMAL ) IterTimerHook
(NORMAL ) DetVisualizationHook
(BELOW_NORMAL) LoggerHook
--------------------
after_val_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
after_val:
(VERY_HIGH ) RuntimeInfoHook
--------------------
after_train:
(VERY_HIGH ) RuntimeInfoHook
(VERY_LOW ) CheckpointHook
--------------------
before_test:
(VERY_HIGH ) RuntimeInfoHook
--------------------
before_test_epoch:
(NORMAL ) IterTimerHook
--------------------
before_test_iter:
(NORMAL ) IterTimerHook
--------------------
after_test_iter:
(NORMAL ) IterTimerHook
(NORMAL ) DetVisualizationHook
(BELOW_NORMAL) LoggerHook
--------------------
after_test_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_test:
(VERY_HIGH ) RuntimeInfoHook
--------------------
after_run:
(BELOW_NORMAL) LoggerHook
--------------------
loading annotations into memory...
Done (t=0.03s)
creating index...
index created!
Traceback (most recent call last):
File "/srv/samba/dingwenchao/mmdetection/tools/train.py", line 106, in <module>
main()
File "/srv/samba/dingwenchao/mmdetection/tools/train.py", line 102, in main
runner.train()
File "/home/ding/.conda/envs/pt/lib/python3.9/site-packages/mmengine/runner/runner.py", line 1703, in train
self._train_loop = self.build_train_loop(
File "/home/ding/.conda/envs/pt/lib/python3.9/site-packages/mmengine/runner/runner.py", line 1495, in build_train_loop
loop = LOOPS.build(
File "/home/ding/.conda/envs/pt/lib/python3.9/site-packages/mmengine/registry/registry.py", line 570, in build
return self.build_func(cfg, *args, **kwargs, registry=self)
File "/home/ding/.conda/envs/pt/lib/python3.9/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
obj = obj_cls(**args) # type: ignore
File "/home/ding/.conda/envs/pt/lib/python3.9/site-packages/mmengine/runner/loops.py", line 44, in __init__
super().__init__(runner, dataloader)
File "/home/ding/.conda/envs/pt/lib/python3.9/site-packages/mmengine/runner/base_loop.py", line 26, in __init__
self.dataloader = runner.build_dataloader(
File "/home/ding/.conda/envs/pt/lib/python3.9/site-packages/mmengine/runner/runner.py", line 1353, in build_dataloader
dataset = DATASETS.build(dataset_cfg)
File "/home/ding/.conda/envs/pt/lib/python3.9/site-packages/mmengine/registry/registry.py", line 570, in build
return self.build_func(cfg, *args, **kwargs, registry=self)
File "/home/ding/.conda/envs/pt/lib/python3.9/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
obj = obj_cls(**args) # type: ignore
File "/home/ding/.conda/envs/pt/lib/python3.9/site-packages/mmengine/dataset/dataset_wrapper.py", line 211, in __init__
self.dataset = DATASETS.build(dataset)
File "/home/ding/.conda/envs/pt/lib/python3.9/site-packages/mmengine/registry/registry.py", line 570, in build
return self.build_func(cfg, *args, **kwargs, registry=self)
File "/home/ding/.conda/envs/pt/lib/python3.9/site-packages/mmengine/registry/build_functions.py", line 121, in build_from_cfg
obj = obj_cls(**args) # type: ignore
File "/home/ding/.conda/envs/pt/lib/python3.9/site-packages/mmdet/datasets/base_det_dataset.py", line 44, in __init__
super().__init__(*args, **kwargs)
File "/home/ding/.conda/envs/pt/lib/python3.9/site-packages/mmengine/dataset/base_dataset.py", line 245, in __init__
self.full_init()
File "/home/ding/.conda/envs/pt/lib/python3.9/site-packages/mmdet/datasets/base_det_dataset.py", line 82, in full_init
self.data_bytes, self.data_address = self._serialize_data()
File "/home/ding/.conda/envs/pt/lib/python3.9/site-packages/mmengine/dataset/base_dataset.py", line 765, in _serialize_data
data_bytes = np.concatenate(data_list)
File "<__array_function__ internals>", line 200, in concatenate
ValueError: need at least one array to concatenate
In my case this problem appeared because I had changed the dataset classes; re-run python setup.py install to rebuild, and it goes away.
File "/home/ding/.conda/envs/pt/lib/python3.9/site-packages/mmengine/dataset/base_dataset.py", line 765, in _serialize_data data_bytes = np.concatenate(data_list) 很可能是上面的中data_list为空所有致,data_list不能为空。可以通过调试一下data_list相关的初始化流程来进一步定位问题。
>>> import numpy as np
>>> np.concatenate([])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<__array_function__ internals>", line 6, in concatenate
ValueError: need at least one array to concatenate
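Following that suggestion, here is a minimal debugging sketch (assuming the mmdet 3.x / mmengine API and the config path from the sh file above) that builds just the inner CocoDataset from the training config and reports how many images actually match the configured classes, which is what decides whether data_list ends up empty:

from mmengine.config import Config
from mmengine.registry import init_default_scope
from mmdet.registry import DATASETS

cfg = Config.fromfile('configs/my/ssd300_coco.py')
init_default_scope('mmdet')

# Build only the inner CocoDataset (skipping the RepeatDataset wrapper) with
# lazy_init=True so we can call load_data_list() ourselves and inspect it.
ds_cfg = cfg.train_dataloader.dataset.dataset
ds_cfg['lazy_init'] = True
dataset = DATASETS.build(ds_cfg)

data_list = dataset.load_data_list()
with_gt = [d for d in data_list if len(d.get('instances', [])) > 0]
print('images parsed from annotations      :', len(data_list))
print('images with at least one matching gt:', len(with_gt))
# With filter_empty_gt=True, only the second group survives filtering; if it
# is empty, _serialize_data() receives an empty list and np.concatenate raises
# "need at least one array to concatenate".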
Hello, about the setup.py suggestion above: do you mean re-running python setup.py install in the top-level mmdetection directory?
Yes, right in the mmdetection directory. You can also search for a tutorial on changing the dataset classes in mmdetection; it covers this step.
I faced the same problem. I fixed metainfo.classes in the config file, where I had made a spelling error. The class names must match the category names in the dataset annotations exactly; then it trains.
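For reference, a minimal sketch of that check, using the annotation path and class names from the config dump above; plain json is enough, no mmdet import needed:

import json

# Paths and names taken from the config dump above.
ann_file = '/srv/samba/dingwenchao/SeaShips/COCO/annotations/train.json'
classes = (
    'bulk cargo carrier', 'ore carrier', 'fishing boat',
    'general cargo ship', 'container ship', 'passenger ship',
)

with open(ann_file) as f:
    coco = json.load(f)

cat_names = {c['name'] for c in coco['categories']}
print('categories in annotation file:', sorted(cat_names))
print('classes in config            :', sorted(classes))
print('in config but not in file    :', sorted(set(classes) - cat_names))
# Any class listed in the config but missing from the annotation categories
# (a typo, different case, an extra space, ...) means CocoDataset matches no
# annotations, data_list stays empty, and training fails with the
# concatenate error.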
Oh, OK, thank you very much. One more question: my dataset is in COCO format; do I also need to recompile? The materials I found are all about VOC datasets.
Yes, you do.
I had the same problem: the run failed on the server with the "need at least one array to concatenate" error. I compared my local environment with the server environment and found that the mmcv version should be >=2.0.0, <2.1.0. The installed mmdet package appeared in the traceback when the error was reported, but I did not have mmdet installed locally, so I uninstalled the mmdet package, pinned mmcv back to 2.0.1, and the problem was solved. Hope this helps.
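A quick way to see which versions (and which mmdet copy) the environment running the sh file actually picks up; this is only a small sketch, and mim list or pip show give the same information:

import mmcv
import mmdet
import mmengine

print('mmengine:', mmengine.__version__)
print('mmcv    :', mmcv.__version__)   # mmdet 3.x expects >=2.0.0, <2.1.0
print('mmdet   :', mmdet.__version__)
# Shows whether the site-packages copy or the local repo is being imported,
# which can differ between the shell and the PyCharm remote interpreter.
print('mmdet imported from:', mmdet.__file__)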
Fixing metainfo.classes in the config solved the problem for me too!