P3Former
save results
I am trying to run your code with the trained weights you provided. (By the way, I am not sure what the difference is between 'semantickitti_test_65.3.pth' and 'semantickitti_val_62.6.pth': were both trained on the training set, and one simply scored higher?) test.py runs, but no .label files are generated, and I could not work out where or how the network's results are saved.
Hi. 'semantickitti_test_65.3.pth' was trained on the trainval set and achieves the reported score on the test set. 'semantickitti_val_62.6.pth' was trained on the training set and achieves the reported score on the validation set. If you want test-set predictions, run this config. If you want validation-set predictions, run the same config but change line 78 to ann_file='semantickitti_infos_val.pkl'.
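For reference, the change boils down to overriding the ann_file of the test dataloader. A minimal sketch of the relevant config fragment (exact line numbers may differ between versions of the repo):

```python
# Fragment of p3former_8xb2_3x_semantickitti_submit.py; only ann_file
# needs to change to switch between test- and validation-set prediction.
test_dataloader = dict(
    dataset=dict(          # outer RepeatDataset wrapper
        dataset=dict(
            # 'semantickitti_infos_test.pkl' for the test-set submission,
            # 'semantickitti_infos_val.pkl' for validation-set prediction.
            ann_file='semantickitti_infos_val.pkl')))
```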
Thank you for the fast reply.
I ran test.py with the following flags:
--config
/media/nirit/mugiwara/code/P3Former-main/configs/p3former/p3former_8xb2_3x_semantickitti_submit.py
--checkpoint
/media/nirit/mugiwara/code/P3Former-main/weights/semantickitti_val_62.6.pth
--work-dir
/media/nirit/mugiwara/code/P3Former-main/out
where in p3former_8xb2_3x_semantickitti_submit.py I changed line 78 as you suggested so that it runs on the validation data,
and also changed line 57 to batch_size = 1, since I am running on a single GPU (RTX 3080 Ti),
but I receive the following error. Can you help me with that?
There is a warning regarding Numba (I installed 0.57.1) and an error regarding "lidar_path":
`/media/nirit/mugiwara/code/P3Former-main/venv/bin/python /media/nirit/mugiwara/code/P3Former-main/test.py --config /media/nirit/mugiwara/code/P3Former-main/configs/p3former/p3former_8xb2_3x_semantickitti_submit.py --checkpoint /media/nirit/mugiwara/code/P3Former-main/weights/semantickitti_val_62.6.pth --work-dir /media/nirit/mugiwara/code/P3Former-main/out
/media/nirit/mugiwara/code/P3Former-main/venv/lib/python3.8/site-packages/mmdet3d/evaluation/functional/kitti_utils/eval.py:10: NumbaDeprecationWarning: The 'nopython' keyword argument was not supplied to the 'numba.jit' decorator. The implicit default value for this argument is currently False, but it will be changed to True in Numba 0.59.0. See https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-object-mode-fall-back-behaviour-when-using-jit for details.
def get_thresholds(scores: np.ndarray, num_gt, num_sample_pts=41):
07/25 12:17:02 - mmengine - INFO -
------------------------------------------------------------
System environment:
sys.platform: linux
Python: 3.8.17 (default, Jun 6 2023, 20:10:50) [GCC 11.3.0]
CUDA available: True
numpy_random_seed: 1496329390
GPU 0: NVIDIA GeForce RTX 3080 Ti
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.7, V11.7.64
GCC: x86_64-linux-gnu-gcc (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0
PyTorch: 1.10.1+cu111
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.1
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
- CuDNN 8.0.5
- Magma 2.5.2
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.1, CUDNN_VERSION=8.0.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,
TorchVision: 0.11.2+cu111
OpenCV: 4.8.0
MMEngine: 0.7.4
Runtime environment:
cudnn_benchmark: False
mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
dist_cfg: {'backend': 'nccl'}
seed: 1496329390
Distributed launcher: none
Distributed training: False
GPU number: 1
------------------------------------------------------------
07/25 12:17:04 - mmengine - INFO - Config:
dataset_type = '_SemanticKittiDataset'
data_root = '/media/nirit/mugiwara/datasets/SemanticKitti/'
class_names = [
'car', 'bicycle', 'motorcycle', 'truck', 'bus', 'person', 'bicyclist',
'motorcyclist', 'road', 'parking', 'sidewalk', 'other-ground', 'building',
'fence', 'vegetation', 'trunck', 'terrian', 'pole', 'traffic-sign'
]
labels_map = dict({
0: 19,
1: 19,
10: 0,
11: 1,
13: 4,
15: 2,
16: 4,
18: 3,
20: 4,
30: 5,
31: 6,
32: 7,
40: 8,
44: 9,
48: 10,
49: 11,
50: 12,
51: 13,
52: 19,
60: 8,
70: 14,
71: 15,
72: 16,
80: 17,
81: 18,
99: 19,
252: 0,
253: 6,
254: 5,
255: 7,
256: 4,
257: 4,
258: 3,
259: 4
})
learning_map_inv = dict({
0: 10,
1: 11,
2: 15,
3: 18,
4: 20,
5: 30,
6: 31,
7: 32,
8: 40,
9: 44,
10: 48,
11: 49,
12: 50,
13: 51,
14: 70,
15: 71,
16: 72,
17: 80,
18: 81,
19: 0
})
metainfo = dict(
classes=[
'car', 'bicycle', 'motorcycle', 'truck', 'bus', 'person', 'bicyclist',
'motorcyclist', 'road', 'parking', 'sidewalk', 'other-ground',
'building', 'fence', 'vegetation', 'trunck', 'terrian', 'pole',
'traffic-sign'
],
seg_label_mapping=dict({
0: 19,
1: 19,
10: 0,
11: 1,
13: 4,
15: 2,
16: 4,
18: 3,
20: 4,
30: 5,
31: 6,
32: 7,
40: 8,
44: 9,
48: 10,
49: 11,
50: 12,
51: 13,
52: 19,
60: 8,
70: 14,
71: 15,
72: 16,
80: 17,
81: 18,
99: 19,
252: 0,
253: 6,
254: 5,
255: 7,
256: 4,
257: 4,
258: 3,
259: 4
}),
max_label=259)
input_modality = dict(use_lidar=True, use_camera=False)
backend_args = None
pre_transform = [
dict(
type='LoadPointsFromFile',
coord_type='LIDAR',
load_dim=4,
use_dim=4,
backend_args=None),
dict(
type='_LoadAnnotations3D',
with_bbox_3d=False,
with_label_3d=False,
with_panoptic_3d=True,
seg_3d_dtype='np.int32',
seg_offset=65536,
dataset_type='semantickitti',
backend_args=None),
dict(type='PointSegClassMapping')
]
train_pipeline = [
dict(
type='LoadPointsFromFile',
coord_type='LIDAR',
load_dim=4,
use_dim=4,
backend_args=None),
dict(
type='_LoadAnnotations3D',
with_bbox_3d=False,
with_label_3d=False,
with_panoptic_3d=True,
seg_3d_dtype='np.int32',
seg_offset=65536,
dataset_type='semantickitti',
backend_args=None),
dict(type='PointSegClassMapping'),
dict(
type='RandomChoice',
transforms=[[{
'type':
'_LaserMix',
'num_areas': [3, 4, 5, 6],
'pitch_angles': [-25, 3],
'pre_transform': [{
'type': 'LoadPointsFromFile',
'coord_type': 'LIDAR',
'load_dim': 4,
'use_dim': 4
}, {
'type': '_LoadAnnotations3D',
'with_bbox_3d': False,
'with_label_3d': False,
'with_panoptic_3d': True,
'seg_3d_dtype': 'np.int32',
'seg_offset': 65536,
'dataset_type': 'semantickitti'
}, {
'type': 'PointSegClassMapping'
}],
'prob':
0.5
}],
[{
'type':
'_PolarMix',
'instance_classes': [0, 1, 2, 3, 4, 5, 6, 7],
'swap_ratio':
0.5,
'rotate_paste_ratio':
1.0,
'pre_transform': [{
'type': 'LoadPointsFromFile',
'coord_type': 'LIDAR',
'load_dim': 4,
'use_dim': 4
}, {
'type': '_LoadAnnotations3D',
'with_bbox_3d': False,
'with_label_3d': False,
'with_panoptic_3d': True,
'seg_3d_dtype': 'np.int32',
'seg_offset': 65536,
'dataset_type': 'semantickitti'
}, {
'type': 'PointSegClassMapping'
}],
'prob':
0.5
}]],
prob=[0.2, 0.8]),
dict(
type='RandomFlip3D',
sync_2d=False,
flip_ratio_bev_horizontal=0.5,
flip_ratio_bev_vertical=0.5),
dict(
type='GlobalRotScaleTrans',
rot_range=[-0.78539816, 0.78539816],
scale_ratio_range=[0.95, 1.05],
translation_std=[0.1, 0.1, 0.1]),
dict(
type='Pack3DDetInputs',
keys=['points', 'pts_semantic_mask', 'pts_instance_mask'])
]
test_pipeline = [
dict(
type='LoadPointsFromFile',
coord_type='LIDAR',
load_dim=4,
use_dim=4,
backend_args=None),
dict(type='Pack3DDetInputs', keys=['points', 'lidar_path'])
]
train_dataloader = dict(
batch_size=1,
num_workers=1,
sampler=dict(type='DefaultSampler', shuffle=True),
dataset=dict(
type='RepeatDataset',
times=1,
dataset=dict(
type='_SemanticKittiDataset',
data_root='/media/nirit/mugiwara/datasets/SemanticKitti/',
data_prefix=dict(
pts='',
img='',
pts_instance_mask='',
pts_semantic_mask='',
pts_panoptic_mask=''),
ann_file='semantickitti_infos_train.pkl',
pipeline=[
dict(
type='LoadPointsFromFile',
coord_type='LIDAR',
load_dim=4,
use_dim=4,
backend_args=None),
dict(
type='_LoadAnnotations3D',
with_bbox_3d=False,
with_label_3d=False,
with_panoptic_3d=True,
seg_3d_dtype='np.int32',
seg_offset=65536,
dataset_type='semantickitti',
backend_args=None),
dict(type='PointSegClassMapping'),
dict(
type='RandomChoice',
transforms=[[{
'type':
'_LaserMix',
'num_areas': [3, 4, 5, 6],
'pitch_angles': [-25, 3],
'pre_transform': [{
'type': 'LoadPointsFromFile',
'coord_type': 'LIDAR',
'load_dim': 4,
'use_dim': 4
}, {
'type': '_LoadAnnotations3D',
'with_bbox_3d': False,
'with_label_3d': False,
'with_panoptic_3d': True,
'seg_3d_dtype': 'np.int32',
'seg_offset': 65536,
'dataset_type': 'semantickitti'
}, {
'type': 'PointSegClassMapping'
}],
'prob':
0.5
}],
[{
'type':
'_PolarMix',
'instance_classes':
[0, 1, 2, 3, 4, 5, 6, 7],
'swap_ratio':
0.5,
'rotate_paste_ratio':
1.0,
'pre_transform': [{
'type': 'LoadPointsFromFile',
'coord_type': 'LIDAR',
'load_dim': 4,
'use_dim': 4
}, {
'type':
'_LoadAnnotations3D',
'with_bbox_3d':
False,
'with_label_3d':
False,
'with_panoptic_3d':
True,
'seg_3d_dtype':
'np.int32',
'seg_offset':
65536,
'dataset_type':
'semantickitti'
}, {
'type':
'PointSegClassMapping'
}],
'prob':
0.5
}]],
prob=[0.2, 0.8]),
dict(
type='RandomFlip3D',
sync_2d=False,
flip_ratio_bev_horizontal=0.5,
flip_ratio_bev_vertical=0.5),
dict(
type='GlobalRotScaleTrans',
rot_range=[-0.78539816, 0.78539816],
scale_ratio_range=[0.95, 1.05],
translation_std=[0.1, 0.1, 0.1]),
dict(
type='Pack3DDetInputs',
keys=['points', 'pts_semantic_mask', 'pts_instance_mask'])
],
metainfo=dict(
classes=[
'car', 'bicycle', 'motorcycle', 'truck', 'bus', 'person',
'bicyclist', 'motorcyclist', 'road', 'parking', 'sidewalk',
'other-ground', 'building', 'fence', 'vegetation',
'trunck', 'terrian', 'pole', 'traffic-sign'
],
seg_label_mapping=dict({
0: 19,
1: 19,
10: 0,
11: 1,
13: 4,
15: 2,
16: 4,
18: 3,
20: 4,
30: 5,
31: 6,
32: 7,
40: 8,
44: 9,
48: 10,
49: 11,
50: 12,
51: 13,
52: 19,
60: 8,
70: 14,
71: 15,
72: 16,
80: 17,
81: 18,
99: 19,
252: 0,
253: 6,
254: 5,
255: 7,
256: 4,
257: 4,
258: 3,
259: 4
}),
max_label=259),
modality=dict(use_lidar=True, use_camera=False),
ignore_index=19,
backend_args=None)))
test_dataloader = dict(
batch_size=1,
num_workers=1,
sampler=dict(type='DefaultSampler', shuffle=False),
dataset=dict(
type='RepeatDataset',
times=1,
dataset=dict(
type='_SemanticKittiDataset',
data_root='/media/nirit/mugiwara/datasets/SemanticKitti/',
data_prefix=dict(
pts='',
img='',
pts_instance_mask='',
pts_semantic_mask='',
pts_panoptic_mask=''),
ann_file='semantickitti_infos_val.pkl',
pipeline=[
dict(
type='LoadPointsFromFile',
coord_type='LIDAR',
load_dim=4,
use_dim=4,
backend_args=None),
dict(type='Pack3DDetInputs', keys=['points', 'lidar_path'])
],
metainfo=dict(
classes=[
'car', 'bicycle', 'motorcycle', 'truck', 'bus', 'person',
'bicyclist', 'motorcyclist', 'road', 'parking', 'sidewalk',
'other-ground', 'building', 'fence', 'vegetation',
'trunck', 'terrian', 'pole', 'traffic-sign'
],
seg_label_mapping=dict({
0: 19,
1: 19,
10: 0,
11: 1,
13: 4,
15: 2,
16: 4,
18: 3,
20: 4,
30: 5,
31: 6,
32: 7,
40: 8,
44: 9,
48: 10,
49: 11,
50: 12,
51: 13,
52: 19,
60: 8,
70: 14,
71: 15,
72: 16,
80: 17,
81: 18,
99: 19,
252: 0,
253: 6,
254: 5,
255: 7,
256: 4,
257: 4,
258: 3,
259: 4
}),
max_label=259),
modality=dict(use_lidar=True, use_camera=False),
ignore_index=19,
test_mode=True,
backend_args=None)))
val_dataloader = dict(
batch_size=1,
num_workers=1,
sampler=dict(type='DefaultSampler', shuffle=False),
dataset=dict(
type='RepeatDataset',
times=1,
dataset=dict(
type='_SemanticKittiDataset',
data_root='/media/nirit/mugiwara/datasets/SemanticKitti/',
data_prefix=dict(
pts='',
img='',
pts_instance_mask='',
pts_semantic_mask='',
pts_panoptic_mask=''),
ann_file='semantickitti_infos_val.pkl',
pipeline=[
dict(
type='LoadPointsFromFile',
coord_type='LIDAR',
load_dim=4,
use_dim=4,
backend_args=None),
dict(
type='_LoadAnnotations3D',
with_bbox_3d=False,
with_label_3d=False,
with_panoptic_3d=True,
seg_3d_dtype='np.int32',
seg_offset=65536,
dataset_type='semantickitti',
backend_args=None),
dict(type='PointSegClassMapping'),
dict(
type='Pack3DDetInputs',
keys=['points', 'pts_semantic_mask', 'pts_instance_mask'])
],
metainfo=dict(
classes=[
'car', 'bicycle', 'motorcycle', 'truck', 'bus', 'person',
'bicyclist', 'motorcyclist', 'road', 'parking', 'sidewalk',
'other-ground', 'building', 'fence', 'vegetation',
'trunck', 'terrian', 'pole', 'traffic-sign'
],
seg_label_mapping=dict({
0: 19,
1: 19,
10: 0,
11: 1,
13: 4,
15: 2,
16: 4,
18: 3,
20: 4,
30: 5,
31: 6,
32: 7,
40: 8,
44: 9,
48: 10,
49: 11,
50: 12,
51: 13,
52: 19,
60: 8,
70: 14,
71: 15,
72: 16,
80: 17,
81: 18,
99: 19,
252: 0,
253: 6,
254: 5,
255: 7,
256: 4,
257: 4,
258: 3,
259: 4
}),
max_label=259),
modality=dict(use_lidar=True, use_camera=False),
ignore_index=19,
test_mode=True,
backend_args=None)))
val_evaluator = dict(
type='_PanopticSegMetric',
thing_class_inds=[0, 1, 2, 3, 4, 5, 6, 7],
stuff_class_inds=[8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18],
min_num_points=50,
id_offset=65536,
dataset_type='semantickitti',
learning_map_inv=dict({
0: 10,
1: 11,
2: 15,
3: 18,
4: 20,
5: 30,
6: 31,
7: 32,
8: 40,
9: 44,
10: 48,
11: 49,
12: 50,
13: 51,
14: 70,
15: 71,
16: 72,
17: 80,
18: 81,
19: 0
}))
test_evaluator = dict(
type='_PanopticSegMetric',
thing_class_inds=[0, 1, 2, 3, 4, 5, 6, 7],
stuff_class_inds=[8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18],
min_num_points=50,
id_offset=65536,
dataset_type='semantickitti',
learning_map_inv=dict({
0: 10,
1: 11,
2: 15,
3: 18,
4: 20,
5: 30,
6: 31,
7: 32,
8: 40,
9: 44,
10: 48,
11: 49,
12: 50,
13: 51,
14: 70,
15: 71,
16: 72,
17: 80,
18: 81,
19: 0
}),
submission_prefix='semantickitti_submission')
vis_backends = [dict(type='LocalVisBackend')]
visualizer = dict(
type='Det3DLocalVisualizer',
vis_backends=[dict(type='LocalVisBackend')],
name='visualizer')
grid_shape = [480, 360, 32]
model = dict(
type='_P3Former',
data_preprocessor=dict(
type='_Det3DDataPreprocessor',
voxel=True,
voxel_type='cylindrical',
voxel_layer=dict(
grid_shape=[480, 360, 32],
point_cloud_range=[0, -3.14159265359, -4, 50, 3.14159265359, 2],
max_num_points=-1,
max_voxels=-1)),
voxel_encoder=dict(
type='SegVFE',
feat_channels=[64, 128, 256, 256],
in_channels=6,
with_voxel_center=True,
feat_compression=16,
return_point_feats=False),
backbone=dict(
type='_Asymm3DSpconv',
grid_size=[480, 360, 32],
input_channels=16,
base_channels=32,
norm_cfg=dict(type='BN1d', eps=1e-05, momentum=0.1),
more_conv=True,
out_channels=256),
decode_head=dict(
type='_P3FormerHead',
num_classes=20,
num_queries=128,
embed_dims=256,
point_cloud_range=[0, -3.14159265359, -4, 50, 3.14159265359, 2],
assigner_zero_layer_cfg=dict(
type='mmdet.HungarianAssigner',
match_costs=[
dict(
type='mmdet.FocalLossCost',
weight=1.0,
binary_input=True,
gamma=2.0,
alpha=0.25),
dict(type='mmdet.DiceCost', weight=2.0, pred_act=True)
]),
assigner_cfg=dict(
type='mmdet.HungarianAssigner',
match_costs=[
dict(
type='mmdet.FocalLossCost',
gamma=4.0,
alpha=0.25,
weight=1.0),
dict(
type='mmdet.FocalLossCost',
weight=1.0,
binary_input=True,
gamma=2.0,
alpha=0.25),
dict(type='mmdet.DiceCost', weight=2.0, pred_act=True)
]),
sampler_cfg=dict(type='_MaskPseudoSampler'),
loss_mask=dict(
type='mmdet.FocalLoss',
use_sigmoid=True,
gamma=2.0,
alpha=0.25,
reduction='mean',
loss_weight=1.0),
loss_dice=dict(type='mmdet.DiceLoss', loss_weight=2.0),
loss_cls=dict(
type='mmdet.FocalLoss',
use_sigmoid=True,
gamma=4.0,
alpha=0.25,
loss_weight=1.0),
num_decoder_layers=6,
cls_channels=(256, 256, 20),
mask_channels=(256, 256, 256, 256, 256),
thing_class=[0, 1, 2, 3, 4, 5, 6, 7],
stuff_class=[8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18],
ignore_index=19),
train_cfg=None,
test_cfg=dict(mode='whole'))
default_scope = 'mmdet3d'
default_hooks = dict(
timer=dict(type='IterTimerHook'),
logger=dict(type='LoggerHook', interval=50),
param_scheduler=dict(type='ParamSchedulerHook'),
checkpoint=dict(type='CheckpointHook', interval=5),
sampler_seed=dict(type='DistSamplerSeedHook'),
visualization=dict(type='Det3DVisualizationHook'))
env_cfg = dict(
cudnn_benchmark=False,
mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
dist_cfg=dict(backend='nccl'))
log_processor = dict(type='LogProcessor', window_size=50, by_epoch=True)
log_level = 'INFO'
load_from = '/media/nirit/mugiwara/code/P3Former-main/weights/semantickitti_val_62.6.pth'
resume = False
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=36, val_interval=1)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')
lr = 0.0008
optim_wrapper = dict(
type='OptimWrapper',
optimizer=dict(type='AdamW', lr=0.0008, weight_decay=0.01))
param_scheduler = [
dict(
type='MultiStepLR',
begin=0,
end=36,
by_epoch=True,
milestones=[24, 32],
gamma=0.2)
]
custom_imports = dict(
imports=[
'p3former.backbones.cylinder3d',
'p3former.data_preprocessors.data_preprocessor',
'p3former.decode_heads.p3former_head', 'p3former.segmentors.p3former',
'p3former.task_modules.samplers.mask_pseduo_sampler',
'evaluation.metrics.panoptic_seg_metric',
'datasets.semantickitti_dataset', 'datasets.transforms.loading',
'datasets.transforms.transforms_3d'
],
allow_failed_imports=False)
launcher = 'none'
work_dir = '/media/nirit/mugiwara/code/P3Former-main/out'
07/25 12:17:08 - mmengine - INFO - Distributed training is not used, all SyncBatchNorm (SyncBN) layers in the model will be automatically reverted to BatchNormXd layers if they are used.
07/25 12:17:08 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook
--------------------
before_train:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(VERY_LOW ) CheckpointHook
--------------------
before_train_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(NORMAL ) DistSamplerSeedHook
--------------------
before_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
--------------------
after_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
after_train_epoch:
(NORMAL ) IterTimerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
before_val_epoch:
(NORMAL ) IterTimerHook
--------------------
before_val_iter:
(NORMAL ) IterTimerHook
--------------------
after_val_iter:
(NORMAL ) IterTimerHook
(NORMAL ) Det3DVisualizationHook
(BELOW_NORMAL) LoggerHook
--------------------
after_val_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
--------------------
after_train:
(VERY_LOW ) CheckpointHook
--------------------
before_test_epoch:
(NORMAL ) IterTimerHook
--------------------
before_test_iter:
(NORMAL ) IterTimerHook
--------------------
after_test_iter:
(NORMAL ) IterTimerHook
(NORMAL ) Det3DVisualizationHook
(BELOW_NORMAL) LoggerHook
--------------------
after_test_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
--------------------
after_run:
(BELOW_NORMAL) LoggerHook
--------------------
07/25 12:17:10 - mmengine - WARNING - The prefix is not set in metric class _PanopticSegMetric.
Loads checkpoint by local backend from path: /media/nirit/mugiwara/code/P3Former-main/weights/semantickitti_val_62.6.pth
07/25 12:17:11 - mmengine - INFO - Load checkpoint from /media/nirit/mugiwara/code/P3Former-main/weights/semantickitti_val_62.6.pth
Traceback (most recent call last):
File "/media/nirit/mugiwara/code/P3Former-main/test.py", line 126, in <module>
main()
File "/media/nirit/mugiwara/code/P3Former-main/test.py", line 122, in main
runner.test()
File "/media/nirit/mugiwara/code/P3Former-main/venv/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1767, in test
metrics = self.test_loop.run() # type: ignore
File "/media/nirit/mugiwara/code/P3Former-main/venv/lib/python3.8/site-packages/mmengine/runner/loops.py", line 434, in run
for idx, data_batch in enumerate(self.dataloader):
File "/media/nirit/mugiwara/code/P3Former-main/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/media/nirit/mugiwara/code/P3Former-main/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/media/nirit/mugiwara/code/P3Former-main/venv/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/media/nirit/mugiwara/code/P3Former-main/venv/lib/python3.8/site-packages/torch/_utils.py", line 434, in reraise
raise exception
NotImplementedError: Caught NotImplementedError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/media/nirit/mugiwara/code/P3Former-main/venv/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/media/nirit/mugiwara/code/P3Former-main/venv/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/media/nirit/mugiwara/code/P3Former-main/venv/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/media/nirit/mugiwara/code/P3Former-main/venv/lib/python3.8/site-packages/mmengine/dataset/dataset_wrapper.py", line 278, in __getitem__
return self.dataset[sample_idx]
File "/media/nirit/mugiwara/code/P3Former-main/venv/lib/python3.8/site-packages/mmengine/dataset/base_dataset.py", line 401, in __getitem__
data = self.prepare_data(idx)
File "/media/nirit/mugiwara/code/P3Former-main/venv/lib/python3.8/site-packages/mmdet3d/datasets/seg3d_dataset.py", line 304, in prepare_data
return super().prepare_data(idx)
File "/media/nirit/mugiwara/code/P3Former-main/venv/lib/python3.8/site-packages/mmengine/dataset/base_dataset.py", line 790, in prepare_data
return self.pipeline(data_info)
File "/media/nirit/mugiwara/code/P3Former-main/venv/lib/python3.8/site-packages/mmengine/dataset/base_dataset.py", line 58, in __call__
data = t(data)
File "/media/nirit/mugiwara/code/P3Former-main/venv/lib/python3.8/site-packages/mmcv/transforms/base.py", line 12, in __call__
return self.transform(results)
File "/media/nirit/mugiwara/code/P3Former-main/venv/lib/python3.8/site-packages/mmdet3d/datasets/transforms/formating.py", line 119, in transform
return self.pack_single_results(results)
File "/media/nirit/mugiwara/code/P3Former-main/venv/lib/python3.8/site-packages/mmdet3d/datasets/transforms/formating.py", line 219, in pack_single_results
raise NotImplementedError(f'Please modified '
NotImplementedError: Please modified `Pack3DDetInputs` to put lidar_path to corresponding field
Process finished with exit code 1
`
Sorry for the late reply. It seems to be a mistake on my part; I have fixed it. Please pull the new version and try again. Note that once the prediction is done, it will raise the error "'NoneType' object has no attribute 'keys'" due to a small incompatibility with mmdet3d. That is expected.
Thank you. It runs now, but the results are still not saved. I saw this. Is it related, or do I need to do something else to save the results?
The results should be saved in the path specified in the config under "submission_prefix". Have you checked that?
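For anyone inspecting those outputs: SemanticKITTI panoptic predictions are .label files of uint32 values, where (matching the id_offset / seg_offset of 65536 in the config above) the lower 16 bits hold the semantic label and the upper 16 bits the instance id. A sketch of a reader, where read_panoptic_labels is a hypothetical helper name, not part of the repo:

```python
import numpy as np

def read_panoptic_labels(path):
    """Decode a SemanticKITTI .label file.

    Each point is stored as a uint32: the lower 16 bits hold the
    semantic class id, the upper 16 bits the instance id (this matches
    the id_offset / seg_offset of 65536 used in the config).
    """
    labels = np.fromfile(path, dtype=np.uint32)
    semantic = labels & 0xFFFF   # lower 16 bits: semantic class id
    instance = labels >> 16      # upper 16 bits: instance id
    return semantic, instance
```

Note that the saved semantic ids are the original SemanticKITTI ids (mapped back through learning_map_inv), not the 0-19 training ids.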
Hi @xizaoqu, I encountered the problem you mentioned: it reports AttributeError: 'NoneType' object has no attribute 'keys', and I pulled the latest code last week.
The command is as follows: $ python test.py configs/p3former/p3former_8xb2_3x_semantickitti_submit.py semantickitti_test_65.3.pth
The log file is as follows: 20240328_154619.log
The config file is as follows (renamed from .py to .log because of the upload file-type limitation): config.log
```
03/28 15:46:31 - mmengine - WARNING - The prefix is not set in metric class _PanopticSegMetric.
Loads checkpoint by local backend from path: semantickitti_test_65.3.pth
03/28 15:46:35 - mmengine - INFO - Load checkpoint from semantickitti_test_65.3.pth
03/28 15:46:59 - mmengine - INFO - Epoch(test) [50/50]  eta: 0:00:00  time: 0.4807  data_time: 0.0023  memory: 929
Traceback (most recent call last):
  File "test.py", line 131, in <module>
    main()
  File "test.py", line 117, in main
    if 'runner_type' not in cfg:
  File "/home/zhenghu/anaconda3/envs/p3former1/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1767, in test
    metrics = self.test_loop.run()  # type: ignore
  File "/home/zhenghu/anaconda3/envs/p3former1/lib/python3.8/site-packages/mmengine/runner/loops.py", line 438, in run
    metrics = self.evaluator.evaluate(len(self.dataloader.dataset))
  File "/home/zhenghu/anaconda3/envs/p3former1/lib/python3.8/site-packages/mmengine/evaluator/evaluator.py", line 84, in evaluate
    for name in _results.keys():
AttributeError: 'NoneType' object has no attribute 'keys'
```
Do you have any suggestions? Thank you!
Another question: I do not have an A100. What suggestions or tricks do you have for GPUs with less memory? Thank you!
Hi, you can also use a V100 with ~32 GB of memory and a smaller batch size.
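In config terms, the advice above amounts to lowering batch_size, optionally with gradient accumulation during training so the effective batch size stays the same. A sketch of hypothetical overrides, assuming a standard mmengine OptimWrapper:

```python
# Smaller per-GPU batch to fit in less memory.
train_dataloader = dict(batch_size=1)
test_dataloader = dict(batch_size=1)

# Optional (training only): accumulate gradients over 2 iterations so
# the effective batch size matches the original despite the smaller batch.
optim_wrapper = dict(
    type='OptimWrapper',
    accumulative_counts=2,
    optimizer=dict(type='AdamW', lr=0.0008, weight_decay=0.01))
```

The same batch-size override can also be passed on the command line via --cfg-options instead of editing the config file.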
AttributeError: 'NoneType' object has no attribute 'keys'
Do you have any suggestions? Thank you!
Have the results been saved? If so, you can ignore this error.
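A quick way to check is to count the prediction files written under the submission_prefix directory from the config ('semantickitti_submission' in the dump above). count_predictions below is a hypothetical helper, not part of the repo:

```python
import glob
import os

def count_predictions(submission_dir):
    """Count the .label files written under submission_dir.

    The directory name comes from the submission_prefix entry of
    test_evaluator in the config.
    """
    pattern = os.path.join(submission_dir, '**', '*.label')
    return len(glob.glob(pattern, recursive=True))
```

If this returns a nonzero count after test.py finishes, the predictions were saved and the trailing AttributeError can be ignored.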