MVX-Net low performance on pedestrian AOS
Prerequisite
- [x] I have searched Issues and Discussions but cannot get the expected help.
- [x] I have read the FAQ documentation but cannot get the expected help.
- [x] The bug has not been fixed in the latest version (dev-1.x) or latest version (dev-1.0).
Task
I'm using the official example scripts/configs for the officially supported tasks/models/datasets.
Branch
main branch https://github.com/open-mmlab/mmdetection3d
Environment
sys.platform: linux Python: 3.10.12 (main, Nov 6 2024, 20:22:13) [GCC 11.4.0] CUDA available: True MUSA available: False numpy_random_seed: 2147483648 GPU 0: Orin CUDA_HOME: /usr/local/cuda NVCC: Cuda compilation tools, release 12.6, V12.6.77 GCC: aarch64-linux-gnu-gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0 PyTorch: 2.5.0 PyTorch compiling details: PyTorch built with:
- GCC 11.4
- C++ Version: 201703
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: NO AVX
- CUDA Runtime 12.6
- NVCC architecture flags: -gencode;arch=compute_87,code=sm_87
- CuDNN 90.4
- Build settings: BLAS_INFO=open, BUILD_TYPE=Release, CUDA_VERSION=12.6, CUDNN_VERSION=9.4.0, CXX_COMPILER=/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=1 -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -Wno-missing-braces -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=open, TORCH_VERSION=2.5.0, USE_CUDA=ON, USE_CUDNN=ON, USE_CUSPARSELT=OFF, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_GLOO=1, USE_MKL=OFF, USE_MKLDNN=OFF, USE_MPI=0, USE_NCCL=1, USE_NNPACK=1, USE_OPENMP=ON, USE_ROCM=OFF, USE_ROCM_KERNEL_ASSERT=OFF,
TorchVision: 0.20.0 OpenCV: 4.10.0-dev MMEngine: 0.10.5 MMDetection: 3.2.0 MMDetection3D: 1.4.0+962f093 spconv2.0: False
Reproduces the problem - code sample
# This schedule is mainly used by models with dynamic voxelization
# optimizer
lr = 0.003  # max learning rate
optim_wrapper = dict(
    type='OptimWrapper',
    optimizer=dict(
        type='AdamW', lr=lr, weight_decay=0.001, betas=(0.95, 0.99)),
    clip_grad=dict(max_norm=10, norm_type=2),
)

param_scheduler = [
    dict(type='LinearLR', start_factor=0.1, by_epoch=False, begin=0, end=1000),
    dict(
        type='CosineAnnealingLR',
        begin=0,
        T_max=60,
        end=60,
        by_epoch=True,
        eta_min=1e-5)
]

# training schedule for 1x
train_cfg = dict(type='EpochBasedTrainLoop', max_epochs=60, val_interval=1)
val_cfg = dict(type='ValLoop')
test_cfg = dict(type='TestLoop')

# Default setting for scaling LR automatically
#   - `enable` means enable scaling LR automatically or not by default.
#   - `base_batch_size` = (8 GPUs) x (2 samples per GPU).
auto_scale_lr = dict(enable=False, base_batch_size=2)
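The auto_scale_lr block above ties the learning rate to a reference batch of 8 GPUs x 2 samples. As a minimal sketch of the linear scaling rule it refers to (the single-GPU batch size of 2 is an assumed example value for illustration, not part of the original schedule):

```python
# Illustrative sketch of the linear LR scaling rule behind `auto_scale_lr`.
# reference_batch_size follows the "(8 GPUs) x (2 samples per GPU)" comment
# above; actual_batch_size = 2 (one GPU) is an assumed example value.
reference_batch_size = 8 * 2
actual_batch_size = 1 * 2
scaled_lr = 0.003 * actual_batch_size / reference_batch_size
print(scaled_lr)  # 0.000375

# MMEngine can apply the same rescaling automatically when auto scaling is
# enabled and base_batch_size holds the reference batch, e.g.
# auto_scale_lr = dict(enable=True, base_batch_size=16)
```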
_base_ = ['../_base_/schedules/cosine.py', '../_base_/default_runtime.py']
model settings
voxel_size = [0.05, 0.05, 0.1]
point_cloud_range = [0, -40, -3, 70.4, 40, 1]
model = dict( type='DynamicMVXFasterRCNN', data_preprocessor=dict( type='Det3DDataPreprocessor', voxel=True, voxel_type='dynamic', voxel_layer=dict( max_num_points=-1, point_cloud_range=point_cloud_range, voxel_size=voxel_size, max_voxels=(-1, -1)), mean=[102.9801, 115.9465, 122.7717], std=[1.0, 1.0, 1.0], bgr_to_rgb=False, pad_size_divisor=32), img_backbone=dict( type='mmdet.ResNet', depth=50, num_stages=4, out_indices=(0, 1, 2, 3), frozen_stages=1, norm_cfg=dict(type='BN', requires_grad=False), norm_eval=True, style='caffe'), img_neck=dict( type='mmdet.FPN', in_channels=[256, 512, 1024, 2048], out_channels=256, # make the image features more stable numerically to avoid loss nan norm_cfg=dict(type='BN', requires_grad=False), num_outs=5), pts_voxel_encoder=dict( type='DynamicVFE', in_channels=4, feat_channels=[64, 64], with_distance=False, voxel_size=voxel_size, with_cluster_center=True, with_voxel_center=True, point_cloud_range=point_cloud_range, fusion_layer=dict( type='PointFusion', img_channels=256, pts_channels=64, mid_channels=128, out_channels=128, img_levels=[0, 1, 2, 3, 4], align_corners=False, activate_out=True, fuse_out=False)), pts_middle_encoder=dict( type='SparseEncoder', in_channels=128, sparse_shape=[41, 1600, 1408], order=('conv', 'norm', 'act')), pts_backbone=dict( type='SECOND', in_channels=256, layer_nums=[5, 5], layer_strides=[1, 2], out_channels=[128, 256]), pts_neck=dict( type='SECONDFPN', in_channels=[128, 256], upsample_strides=[1, 2], out_channels=[256, 256]), pts_bbox_head=dict( type='Anchor3DHead', num_classes=3, in_channels=512, feat_channels=512, use_direction_classifier=True, anchor_generator=dict( type='Anchor3DRangeGenerator', ranges=[ [0, -40.0, -0.6, 70.4, 40.0, -0.6], [0, -40.0, -0.6, 70.4, 40.0, -0.6], [0, -40.0, -1.78, 70.4, 40.0, -1.78], ], sizes=[[0.8, 0.6, 1.73], [1.76, 0.6, 1.73], [3.9, 1.6, 1.56]], rotations=[0, 1.57], reshape_out=False), assigner_per_size=True, diff_rad_by_sin=True, assign_per_class=True, bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), loss_cls=dict( type='mmdet.FocalLoss', use_sigmoid=True, gamma=2.0, alpha=0.25, loss_weight=1.0), loss_bbox=dict( type='mmdet.SmoothL1Loss', beta=1.0 / 9.0, loss_weight=2.0), loss_dir=dict( type='mmdet.CrossEntropyLoss', use_sigmoid=False, loss_weight=0.2)), # model training and testing settings train_cfg=dict( type='EpochBasedTrainLoop', pts=dict( assigner=[ dict( # for Pedestrian type='Max3DIoUAssigner', iou_calculator=dict(type='BboxOverlapsNearest3D'), pos_iou_thr=0.35, neg_iou_thr=0.2, min_pos_iou=0.2, ignore_iof_thr=-1), dict( # for Cyclist type='Max3DIoUAssigner', iou_calculator=dict(type='BboxOverlapsNearest3D'), pos_iou_thr=0.35, neg_iou_thr=0.2, min_pos_iou=0.2, ignore_iof_thr=-1), dict( # for Car type='Max3DIoUAssigner', iou_calculator=dict(type='BboxOverlapsNearest3D'), pos_iou_thr=0.6, neg_iou_thr=0.45, min_pos_iou=0.45, ignore_iof_thr=-1), ], allowed_border=0, pos_weight=-1, debug=False)), test_cfg=dict( pts=dict( use_rotate_nms=True, nms_across_levels=False, nms_thr=0.01, score_thr=0.1, min_bbox_size=0, nms_pre=100, max_num=50)))
dataset settings
dataset_type = 'KittiDataset' data_root = 'data/kitti/' class_names = ['Pedestrian', 'Cyclist', 'Car'] metainfo = dict(classes=class_names) input_modality = dict(use_lidar=True, use_camera=True) backend_args = None train_pipeline = [ dict( type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4, backend_args=backend_args), dict(type='LoadImageFromFile', backend_args=backend_args), dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), dict( type='RandomResize', scale=[(640, 192), (2560, 768)], keep_ratio=True), dict( type='GlobalRotScaleTrans', rot_range=[-0.78539816, 0.78539816], scale_ratio_range=[0.95, 1.05], translation_std=[0.2, 0.2, 0.2]), dict(type='RandomFlip3D', flip_ratio_bev_horizontal=0.5), dict(type='PointsRangeFilter', point_cloud_range=point_cloud_range), dict(type='ObjectRangeFilter', point_cloud_range=point_cloud_range), dict(type='PointShuffle'), dict( type='Pack3DDetInputs', keys=[ 'points', 'img', 'gt_bboxes_3d', 'gt_labels_3d', 'gt_bboxes', 'gt_labels' ]) ] test_pipeline = [ dict( type='LoadPointsFromFile', coord_type='LIDAR', load_dim=4, use_dim=4, backend_args=backend_args), dict(type='LoadImageFromFile', backend_args=backend_args), dict( type='MultiScaleFlipAug3D', img_scale=(1280, 384), pts_scale_ratio=1, flip=False, transforms=[ # Temporary solution, fix this after refactor the augtest dict(type='Resize', scale=0, keep_ratio=True), dict( type='GlobalRotScaleTrans', rot_range=[0, 0], scale_ratio_range=[1., 1.], translation_std=[0, 0, 0]), dict(type='RandomFlip3D'), dict( type='PointsRangeFilter', point_cloud_range=point_cloud_range), ]), dict(type='Pack3DDetInputs', keys=['points', 'img']) ] modality = dict(use_lidar=True, use_camera=True) train_dataloader = dict( batch_size=2, num_workers=2, sampler=dict(type='DefaultSampler', shuffle=True), dataset=dict( type='RepeatDataset', times=2, dataset=dict( type=dataset_type, data_root=data_root, modality=modality, ann_file='kitti_infos_train.pkl', data_prefix=dict( pts='training/velodyne_reduced', img='training/image_2'), pipeline=train_pipeline, filter_empty_gt=False, metainfo=metainfo, # we use box_type_3d='LiDAR' in kitti and nuscenes dataset # and box_type_3d='Depth' in sunrgbd and scannet dataset. box_type_3d='LiDAR', backend_args=backend_args)))
val_dataloader = dict( batch_size=1, num_workers=1, sampler=dict(type='DefaultSampler', shuffle=False), dataset=dict( type=dataset_type, data_root=data_root, modality=modality, ann_file='kitti_infos_val.pkl', data_prefix=dict( pts='training/velodyne_reduced', img='training/image_2'), pipeline=test_pipeline, metainfo=metainfo, test_mode=True, box_type_3d='LiDAR', backend_args=backend_args)) test_dataloader = dict( batch_size=1, num_workers=1, sampler=dict(type='DefaultSampler', shuffle=False), dataset=dict( type=dataset_type, data_root=data_root, ann_file='kitti_infos_val.pkl', modality=modality, data_prefix=dict( pts='training/velodyne_reduced', img='training/image_2'), pipeline=test_pipeline, metainfo=metainfo, test_mode=True, box_type_3d='LiDAR', backend_args=backend_args))
This optim_wrapper overrides the one defined in cosine.py
optim_wrapper = dict(
    optimizer=dict(weight_decay=0.01),
    clip_grad=dict(max_norm=35, norm_type=2),
)
val_evaluator = dict(
    type='KittiMetric', ann_file='data/kitti/kitti_infos_val.pkl')
test_evaluator = val_evaluator
You may need to download the model first if the network is unstable
load_from = 'https://download.openmmlab.com/mmdetection3d/pretrain_models/mvx_faster_rcnn_detectron2-caffe_20e_coco-pretrain_gt-sample_kitti-3-class_moderate-79.3_20200207-a4a6a3c7.pth' # noqa
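As a quick worked check on the geometry in the config above, the SparseEncoder's sparse_shape follows from point_cloud_range and voxel_size; a small sketch (the +1 on the z dimension mirrors the SECOND-style convention used by the reference configs and is an assumption, not library code):

```python
# Relationship between point_cloud_range, voxel_size and sparse_shape
# as used in the config above (values copied from it).
point_cloud_range = [0, -40, -3, 70.4, 40, 1]
voxel_size = [0.05, 0.05, 0.1]

grid_x = round((point_cloud_range[3] - point_cloud_range[0]) / voxel_size[0])  # 1408
grid_y = round((point_cloud_range[4] - point_cloud_range[1]) / voxel_size[1])  # 1600
grid_z = round((point_cloud_range[5] - point_cloud_range[2]) / voxel_size[2])  # 40

sparse_shape = [grid_z + 1, grid_y, grid_x]
print(sparse_shape)  # [41, 1600, 1408], matching pts_middle_encoder above
```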
Reproduces the problem - command or script
python3 tools/train.py work_dirs/mvxnet_fpn_dv_second_secfpn_8xb2-80e_kitti-3d-3class/mvxnet_fpn_dv_second_secfpn_8xb2-80e_kitti-3d-3class.py
Reproduces the problem - error message
EPOCH 40
EPOCH 49
Additional information
First, I trained MVX-Net for 40 epochs using the original config file (I only changed the batch size to 2 for 1 GPU). The Pedestrian AOS AP results at epoch 40 were quite low, under 50%, which I think is very poor. I tried resuming the training for 10 more epochs to improve the results, but they didn't seem to get better (on the contrary, they worsened).
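For reference on what this metric penalizes: KITTI's AOS weights detection recall by how well the predicted heading matches the ground truth, so pedestrian boxes that are localized correctly but have a flipped yaw still drag the score down. A minimal sketch of the per-detection orientation-similarity term (my own illustration of the KITTI definition, not mmdetection3d code):

```python
import math

def orientation_similarity(pred_yaw: float, gt_yaw: float) -> float:
    """KITTI-style orientation similarity for one matched detection:
    s = (1 + cos(delta)) / 2, i.e. 1.0 for a perfect heading and
    0.0 when the prediction is rotated by 180 degrees."""
    return (1.0 + math.cos(pred_yaw - gt_yaw)) / 2.0

print(orientation_similarity(0.1, 0.1))            # 1.0
print(orientation_similarity(0.1, 0.1 + math.pi))  # ~0.0
```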
I'm using the KITTI dataset with 3769 training and 3712 validation samples. I got the ImageSets from this model: VirConv ImageSets
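As a side note on the split sizes mentioned above, the generated info files can be counted directly to confirm what the dataloaders actually see; a minimal sketch assuming the mmdet3d 1.x info layout with a data_list key:

```python
import pickle

# Count samples in the generated KITTI info files.
# The 'data_list' key is the mmdet3d 1.x info format; older list-style
# pkl files would need len(infos) instead.
for split in ('train', 'val'):
    with open(f'data/kitti/kitti_infos_{split}.pkl', 'rb') as f:
        infos = pickle.load(f)
    print(split, len(infos['data_list']))
```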
I would like to know if anyone has achieved better AOS results for pedestrians with MVX-Net and, if so, what changes I should make to the config file. I don't have much experience with model training, so I welcome any advice and support. Thank you!
@AinaraC I have a problem: all AP values are 0. What should I do?
@Vish19-code Have you changed anything from the original config file? Are you using the pretrained model, or have you trained the model yourself?
Hello,
Thank you for the reply.
Yes, I already trained with the same config file, but the output doesn't change; all AP values are zero.
What should I do?
Have you seen the training logs? Anything strange?
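For example, the scalar log that MMEngine writes can be scanned for the loss trend; a rough sketch (the vis_data/scalars.json path and the 'loss' key are assumptions about the usual layout under work_dirs, adjust to your run):

```python
import json

# Rough sketch: read the training loss trend from MMEngine's scalar log.
# The path (work_dirs/<exp>/<timestamp>/vis_data/scalars.json) and the
# 'loss' key are assumptions about the usual layout; adjust for your run.
log_path = ('work_dirs/mvxnet_fpn_dv_second_secfpn_8xb2-80e_kitti-3d-3class/'
            '<timestamp>/vis_data/scalars.json')

losses = []
with open(log_path) as f:
    for line in f:
        record = json.loads(line)
        if 'loss' in record:
            losses.append(record['loss'])

print(f'{len(losses)} logged steps, first={losses[0]:.3f}, last={losses[-1]:.3f}')
```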
auto_scale_lr = dict(base_batch_size=2, enable=False) backend_args = None class_names = [ 'Pedestrian', 'Cyclist', 'Car', ] data_root = 'data/kitti/' dataset_type = 'KittiDataset' default_hooks = dict( checkpoint=dict(interval=-1, type='CheckpointHook'), logger=dict(interval=50, type='LoggerHook'), param_scheduler=dict(type='ParamSchedulerHook'), sampler_seed=dict(type='DistSamplerSeedHook'), timer=dict(type='IterTimerHook'), visualization=dict(type='Det3DVisualizationHook')) default_scope = 'mmdet3d' env_cfg = dict( cudnn_benchmark=False, dist_cfg=dict(backend='nccl'), mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0)) input_modality = dict(use_camera=True, use_lidar=True) launcher = 'none' load_from = 'mvx_faster_rcnn_detectron2-caffe_20e_coco-pretrain_gt-sample_kitti-3-class_moderate-79.3_20200207-a4a6a3c7.pth' log_level = 'INFO' log_processor = dict(by_epoch=True, type='LogProcessor', window_size=50) lr = 0.003 metainfo = dict(classes=[ 'Pedestrian', 'Cyclist', 'Car', ]) modality = dict(use_camera=True, use_lidar=True) model = dict( data_preprocessor=dict( bgr_to_rgb=False, mean=[ 102.9801, 115.9465, 122.7717, ], pad_size_divisor=32, std=[ 1.0, 1.0, 1.0, ], type='Det3DDataPreprocessor', voxel=True, voxel_layer=dict( max_num_points=-1, max_voxels=( -1, -1, ), point_cloud_range=[ 0, -40, -3, 70.4, 40, 2, ], voxel_size=[ 0.05, 0.05, 0.1, ]), voxel_type='dynamic'), img_backbone=dict( depth=50, frozen_stages=1, norm_cfg=dict(requires_grad=False, type='BN'), norm_eval=True, num_stages=4, out_indices=( 0, 1, 2, 3, ), style='caffe', type='mmdet.ResNet'), img_neck=dict( in_channels=[ 256, 512, 1024, 2048, ], num_outs=5, out_channels=256, type='mmdet.FPN'), pts_backbone=dict( in_channels=256, layer_nums=[ 5, 5, ], layer_strides=[ 1, 2, ], out_channels=[ 128, 256, ], type='SECOND'), pts_bbox_head=dict( anchor_generator=dict( ranges=[ [ 0, -40.0, -0.6, 70.4, 40.0, -0.6, ], [ 0, -40.0, -0.6, 70.4, 40.0, -0.6, ], [ 0, -40.0, -2.78, 70.4, 40.0, -3.78, ], ], reshape_out=False, rotations=[ 0, 1.57, ], sizes=[ [ 0.8, 0.6, 1.73, ], [ 1.76, 0.6, 1.73, ], [ 3.9, 1.6, 3.56, ], ], type='Anchor3DRangeGenerator'), assign_per_class=True, assigner_per_size=True, bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), diff_rad_by_sin=True, feat_channels=512, in_channels=512, loss_bbox=dict( beta=0.1111111111111111, loss_weight=2.0, type='mmdet.SmoothL1Loss'), loss_cls=dict( alpha=0.25, gamma=2.0, loss_weight=1.0, type='mmdet.FocalLoss', use_sigmoid=True), loss_dir=dict( loss_weight=0.2, type='mmdet.CrossEntropyLoss', use_sigmoid=False), num_classes=3, type='Anchor3DHead', use_direction_classifier=True), pts_middle_encoder=dict( in_channels=128, order=( 'conv', 'norm', 'act', ), sparse_shape=[ 41, 1600, 1408, ], type='SparseEncoder'), pts_neck=dict( in_channels=[ 128, 256, ], out_channels=[ 256, 256, ], type='SECONDFPN', upsample_strides=[ 1, 2, ]), pts_voxel_encoder=dict( feat_channels=[ 64, 64, ], fusion_layer=dict( activate_out=True, align_corners=False, fuse_out=False, img_channels=256, img_levels=[ 0, 1, 2, 3, 4, ], mid_channels=128, out_channels=128, pts_channels=64, type='PointFusion'), in_channels=4, point_cloud_range=[ 0, -40, -3, 70.4, 40, 2, ], type='DynamicVFE', voxel_size=[ 0.05, 0.05, 0.1, ], with_cluster_center=True, with_distance=False, with_voxel_center=True), test_cfg=dict( pts=dict( max_num=50, min_bbox_size=0, nms_across_levels=False, nms_pre=100, nms_thr=0.01, score_thr=0.1, use_rotate_nms=True)), train_cfg=dict( pts=dict( allowed_border=0, assigner=[ dict( ignore_iof_thr=-1, 
iou_calculator=dict(type='BboxOverlapsNearest3D'), min_pos_iou=0.2, neg_iou_thr=0.2, pos_iou_thr=0.35, type='Max3DIoUAssigner'), dict( ignore_iof_thr=-1, iou_calculator=dict(type='BboxOverlapsNearest3D'), min_pos_iou=0.2, neg_iou_thr=0.2, pos_iou_thr=0.35, type='Max3DIoUAssigner'), dict( ignore_iof_thr=-1, iou_calculator=dict(type='BboxOverlapsNearest3D'), min_pos_iou=0.45, neg_iou_thr=0.45, pos_iou_thr=0.6, type='Max3DIoUAssigner'), ], debug=False, pos_weight=-1)), type='DynamicMVXFasterRCNN') optim_wrapper = dict( clip_grad=dict(max_norm=35, norm_type=2), optimizer=dict( betas=( 0.95, 0.99, ), lr=0.003, type='AdamW', weight_decay=0.01), type='OptimWrapper') param_scheduler = [ dict(begin=0, by_epoch=False, end=1000, start_factor=0.1, type='LinearLR'), dict( T_max=40, begin=0, by_epoch=True, end=40, eta_min=1e-05, type='CosineAnnealingLR'), ] point_cloud_range = [ 0, -40, -3, 70.4, 40, 2, ] resume = False test_cfg = dict(type='TestLoop') test_dataloader = dict( batch_size=1, dataset=dict( ann_file='kitti_infos_val.pkl', backend_args=None, box_type_3d='LiDAR', data_prefix=dict( img='training/image_2', pts='training/velodyne_reduced'), data_root='data/kitti/', metainfo=dict(classes=[ 'Pedestrian', 'Cyclist', 'Car', ]), modality=dict(use_camera=True, use_lidar=True), pipeline=[ dict( backend_args=None, coord_type='LIDAR', load_dim=4, type='LoadPointsFromFile', use_dim=4), dict(backend_args=None, type='LoadImageFromFile'), dict( flip=False, img_scale=( 1280, 720, ), pts_scale_ratio=1, transforms=[ dict(keep_ratio=True, scale=0, type='Resize'), dict( rot_range=[ 0, 0, ], scale_ratio_range=[ 1.0, 1.0, ], translation_std=[ 0, 0, 0, ], type='GlobalRotScaleTrans'), dict(type='RandomFlip3D'), dict( point_cloud_range=[ 0, -40, -3, 70.4, 40, 2, ], type='PointsRangeFilter'), ], type='MultiScaleFlipAug3D'), dict(keys=[ 'points', 'img', ], type='Pack3DDetInputs'), ], test_mode=True, type='KittiDataset'), num_workers=1, sampler=dict(shuffle=False, type='DefaultSampler')) test_evaluator = dict( ann_file='data/kitti/kitti_infos_val.pkl', metric=[ '3d', 'bev', 'bbox', ], pklfile_prefix='work_dirs/kitti_val_preds', type='KittiMetric') test_pipeline = [ dict( backend_args=None, coord_type='LIDAR', load_dim=4, type='LoadPointsFromFile', use_dim=4), dict(backend_args=None, type='LoadImageFromFile'), dict( flip=False, img_scale=( 1280, 720, ), pts_scale_ratio=1, transforms=[ dict(keep_ratio=True, scale=0, type='Resize'), dict( rot_range=[ 0, 0, ], scale_ratio_range=[ 1.0, 1.0, ], translation_std=[ 0, 0, 0, ], type='GlobalRotScaleTrans'), dict(type='RandomFlip3D'), dict( point_cloud_range=[ 0, -40, -3, 70.4, 40, 2, ], type='PointsRangeFilter'), ], type='MultiScaleFlipAug3D'), dict(keys=[ 'points', 'img', ], type='Pack3DDetInputs'), ] train_cfg = dict(max_epochs=40, type='EpochBasedTrainLoop', val_interval=1) train_dataloader = dict( batch_size=1, dataset=dict( dataset=dict( ann_file='kitti_infos_train.pkl', backend_args=None, box_type_3d='LiDAR', data_prefix=dict( img='training/image_2', pts='training/velodyne_reduced'), data_root='data/kitti/', filter_empty_gt=False, metainfo=dict(classes=[ 'Pedestrian', 'Cyclist', 'Car', ]), modality=dict(use_camera=True, use_lidar=True), pipeline=[ dict( backend_args=None, coord_type='LIDAR', load_dim=4, type='LoadPointsFromFile', use_dim=4), dict(backend_args=None, type='LoadImageFromFile'), dict( type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), dict( keep_ratio=True, scale=[ ( 640, 192, ), ( 1280, 720, ), ], type='RandomResize'), dict( rot_range=[ 
-0.78539816, 0.78539816, ], scale_ratio_range=[ 0.95, 1.05, ], translation_std=[ 0.2, 0.2, 0.2, ], type='GlobalRotScaleTrans'), dict(flip_ratio_bev_horizontal=0.5, type='RandomFlip3D'), dict( point_cloud_range=[ 0, -40, -3, 70.4, 40, 2, ], type='PointsRangeFilter'), dict( point_cloud_range=[ 0, -40, -3, 70.4, 40, 2, ], type='ObjectRangeFilter'), dict(type='PointShuffle'), dict( keys=[ 'points', 'img', 'gt_bboxes_3d', 'gt_labels_3d', 'gt_bboxes', 'gt_labels', ], type='Pack3DDetInputs'), ], type='KittiDataset'), times=2, type='RepeatDataset'), num_workers=2, sampler=dict(shuffle=True, type='DefaultSampler')) train_pipeline = [ dict( backend_args=None, coord_type='LIDAR', load_dim=4, type='LoadPointsFromFile', use_dim=4), dict(backend_args=None, type='LoadImageFromFile'), dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), dict( keep_ratio=True, scale=[ ( 640, 192, ), ( 1280, 720, ), ], type='RandomResize'), dict( rot_range=[ -0.78539816, 0.78539816, ], scale_ratio_range=[ 0.95, 1.05, ], translation_std=[ 0.2, 0.2, 0.2, ], type='GlobalRotScaleTrans'), dict(flip_ratio_bev_horizontal=0.5, type='RandomFlip3D'), dict( point_cloud_range=[ 0, -40, -3, 70.4, 40, 2, ], type='PointsRangeFilter'), dict( point_cloud_range=[ 0, -40, -3, 70.4, 40, 2, ], type='ObjectRangeFilter'), dict(type='PointShuffle'), dict( keys=[ 'points', 'img', 'gt_bboxes_3d', 'gt_labels_3d', 'gt_bboxes', 'gt_labels', ], type='Pack3DDetInputs'), ] val_cfg = dict(type='ValLoop') val_dataloader = dict( batch_size=1, dataset=dict( ann_file='kitti_infos_val.pkl', backend_args=None, box_type_3d='LiDAR', data_prefix=dict( img='training/image_2', pts='training/velodyne_reduced'), data_root='data/kitti/', metainfo=dict(classes=[ 'Pedestrian', 'Cyclist', 'Car', ]), modality=dict(use_camera=True, use_lidar=True), pipeline=[ dict( backend_args=None, coord_type='LIDAR', load_dim=4, type='LoadPointsFromFile', use_dim=4), dict(backend_args=None, type='LoadImageFromFile'), dict( flip=False, img_scale=( 1280, 720, ), pts_scale_ratio=1, transforms=[ dict(keep_ratio=True, scale=0, type='Resize'), dict( rot_range=[ 0, 0, ], scale_ratio_range=[ 1.0, 1.0, ], translation_std=[ 0, 0, 0, ], type='GlobalRotScaleTrans'), dict(type='RandomFlip3D'), dict( point_cloud_range=[ 0, -40, -3, 70.4, 40, 2, ], type='PointsRangeFilter'), ], type='MultiScaleFlipAug3D'), dict(keys=[ 'points', 'img', ], type='Pack3DDetInputs'), ], test_mode=True, type='KittiDataset'), num_workers=1, sampler=dict(shuffle=False, type='DefaultSampler')) val_evaluator = dict( ann_file='data/kitti/kitti_infos_val.pkl', metric=[ '3d', 'bev', 'bbox', ], pklfile_prefix='work_dirs/kitti_val_preds', type='KittiMetric') vis_backends = [ dict(type='LocalVisBackend'), ] visualizer = dict( name='visualizer', type='Det3DLocalVisualizer', vis_backends=[ dict(type='LocalVisBackend'), ]) voxel_size = [ 0.05, 0.05, 0.1, ] work_dir = './work_dirs/mvxnet_fpn_dv_second_secfpn_8xb2-80e_kitti-3d-3class'
This is the config file; the full training log is below.
2025/04/25 16:02:05 - mmengine - INFO -
System environment: sys.platform: linux Python: 3.8.20 (default, Oct 3 2024, 15:24:27) [GCC 11.2.0] CUDA available: True MUSA available: False numpy_random_seed: 941759975 GPU 0: NVIDIA GeForce RTX 2080 CUDA_HOME: /usr NVCC: Cuda compilation tools, release 11.5, V11.5.119 GCC: x86_64-conda_cos7-linux-gnu-gcc (Anaconda gcc) 11.2.0 PyTorch: 1.10.0+cu113 PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.3
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
- CuDNN 8.2
- Magma 2.5.2
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,
TorchVision: 0.11.1+cu113 OpenCV: 4.11.0 MMEngine: 0.10.7
Runtime environment: cudnn_benchmark: False mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0} dist_cfg: {'backend': 'nccl'} seed: 941759975 Distributed launcher: none Distributed training: False GPU number: 1
2025/04/25 16:02:06 - mmengine - INFO - Config: auto_scale_lr = dict(base_batch_size=2, enable=False) backend_args = None class_names = [ 'Pedestrian', 'Cyclist', 'Car', ] data_root = 'data/kitti/' dataset_type = 'KittiDataset' default_hooks = dict( checkpoint=dict(interval=-1, type='CheckpointHook'), logger=dict(interval=50, type='LoggerHook'), param_scheduler=dict(type='ParamSchedulerHook'), sampler_seed=dict(type='DistSamplerSeedHook'), timer=dict(type='IterTimerHook'), visualization=dict(type='Det3DVisualizationHook')) default_scope = 'mmdet3d' env_cfg = dict( cudnn_benchmark=False, dist_cfg=dict(backend='nccl'), mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0)) input_modality = dict(use_camera=True, use_lidar=True) launcher = 'none' load_from = 'mvx_faster_rcnn_detectron2-caffe_20e_coco-pretrain_gt-sample_kitti-3-class_moderate-79.3_20200207-a4a6a3c7.pth' log_level = 'INFO' log_processor = dict(by_epoch=True, type='LogProcessor', window_size=50) lr = 0.003 metainfo = dict(classes=[ 'Pedestrian', 'Cyclist', 'Car', ]) modality = dict(use_camera=True, use_lidar=True) model = dict( data_preprocessor=dict( bgr_to_rgb=False, mean=[ 102.9801, 115.9465, 122.7717, ], pad_size_divisor=32, std=[ 1.0, 1.0, 1.0, ], type='Det3DDataPreprocessor', voxel=True, voxel_layer=dict( max_num_points=-1, max_voxels=( -1, -1, ), point_cloud_range=[ 0, -40, -3, 70.4, 40, 2, ], voxel_size=[ 0.05, 0.05, 0.1, ]), voxel_type='dynamic'), img_backbone=dict( depth=50, frozen_stages=1, norm_cfg=dict(requires_grad=False, type='BN'), norm_eval=True, num_stages=4, out_indices=( 0, 1, 2, 3, ), style='caffe', type='mmdet.ResNet'), img_neck=dict( in_channels=[ 256, 512, 1024, 2048, ], num_outs=5, out_channels=256, type='mmdet.FPN'), pts_backbone=dict( in_channels=256, layer_nums=[ 5, 5, ], layer_strides=[ 1, 2, ], out_channels=[ 128, 256, ], type='SECOND'), pts_bbox_head=dict( anchor_generator=dict( ranges=[ [ 0, -40.0, -0.6, 70.4, 40.0, -0.6, ], [ 0, -40.0, -0.6, 70.4, 40.0, -0.6, ], [ 0, -40.0, -2.78, 70.4, 40.0, -3.78, ], ], reshape_out=False, rotations=[ 0, 1.57, ], sizes=[ [ 0.8, 0.6, 1.73, ], [ 1.76, 0.6, 1.73, ], [ 3.9, 1.6, 3.56, ], ], type='Anchor3DRangeGenerator'), assign_per_class=True, assigner_per_size=True, bbox_coder=dict(type='DeltaXYZWLHRBBoxCoder'), diff_rad_by_sin=True, feat_channels=512, in_channels=512, loss_bbox=dict( beta=0.1111111111111111, loss_weight=2.0, type='mmdet.SmoothL1Loss'), loss_cls=dict( alpha=0.25, gamma=2.0, loss_weight=1.0, type='mmdet.FocalLoss', use_sigmoid=True), loss_dir=dict( loss_weight=0.2, type='mmdet.CrossEntropyLoss', use_sigmoid=False), num_classes=3, type='Anchor3DHead', use_direction_classifier=True), pts_middle_encoder=dict( in_channels=128, order=( 'conv', 'norm', 'act', ), sparse_shape=[ 41, 1600, 1408, ], type='SparseEncoder'), pts_neck=dict( in_channels=[ 128, 256, ], out_channels=[ 256, 256, ], type='SECONDFPN', upsample_strides=[ 1, 2, ]), pts_voxel_encoder=dict( feat_channels=[ 64, 64, ], fusion_layer=dict( activate_out=True, align_corners=False, fuse_out=False, img_channels=256, img_levels=[ 0, 1, 2, 3, 4, ], mid_channels=128, out_channels=128, pts_channels=64, type='PointFusion'), in_channels=4, point_cloud_range=[ 0, -40, -3, 70.4, 40, 2, ], type='DynamicVFE', voxel_size=[ 0.05, 0.05, 0.1, ], with_cluster_center=True, with_distance=False, with_voxel_center=True), test_cfg=dict( pts=dict( max_num=50, min_bbox_size=0, nms_across_levels=False, nms_pre=100, nms_thr=0.01, score_thr=0.1, use_rotate_nms=True)), train_cfg=dict( pts=dict( allowed_border=0, 
assigner=[ dict( ignore_iof_thr=-1, iou_calculator=dict(type='BboxOverlapsNearest3D'), min_pos_iou=0.2, neg_iou_thr=0.2, pos_iou_thr=0.35, type='Max3DIoUAssigner'), dict( ignore_iof_thr=-1, iou_calculator=dict(type='BboxOverlapsNearest3D'), min_pos_iou=0.2, neg_iou_thr=0.2, pos_iou_thr=0.35, type='Max3DIoUAssigner'), dict( ignore_iof_thr=-1, iou_calculator=dict(type='BboxOverlapsNearest3D'), min_pos_iou=0.45, neg_iou_thr=0.45, pos_iou_thr=0.6, type='Max3DIoUAssigner'), ], debug=False, pos_weight=-1)), type='DynamicMVXFasterRCNN') optim_wrapper = dict( clip_grad=dict(max_norm=35, norm_type=2), optimizer=dict( betas=( 0.95, 0.99, ), lr=0.003, type='AdamW', weight_decay=0.01), type='OptimWrapper') param_scheduler = [ dict(begin=0, by_epoch=False, end=1000, start_factor=0.1, type='LinearLR'), dict( T_max=40, begin=0, by_epoch=True, end=40, eta_min=1e-05, type='CosineAnnealingLR'), ] point_cloud_range = [ 0, -40, -3, 70.4, 40, 2, ] resume = False test_cfg = dict(type='TestLoop') test_dataloader = dict( batch_size=1, dataset=dict( ann_file='kitti_infos_val.pkl', backend_args=None, box_type_3d='LiDAR', data_prefix=dict( img='training/image_2', pts='training/velodyne_reduced'), data_root='data/kitti/', metainfo=dict(classes=[ 'Pedestrian', 'Cyclist', 'Car', ]), modality=dict(use_camera=True, use_lidar=True), pipeline=[ dict( backend_args=None, coord_type='LIDAR', load_dim=4, type='LoadPointsFromFile', use_dim=4), dict(backend_args=None, type='LoadImageFromFile'), dict( flip=False, img_scale=( 1280, 720, ), pts_scale_ratio=1, transforms=[ dict(keep_ratio=True, scale=0, type='Resize'), dict( rot_range=[ 0, 0, ], scale_ratio_range=[ 1.0, 1.0, ], translation_std=[ 0, 0, 0, ], type='GlobalRotScaleTrans'), dict(type='RandomFlip3D'), dict( point_cloud_range=[ 0, -40, -3, 70.4, 40, 2, ], type='PointsRangeFilter'), ], type='MultiScaleFlipAug3D'), dict(keys=[ 'points', 'img', ], type='Pack3DDetInputs'), ], test_mode=True, type='KittiDataset'), num_workers=1, sampler=dict(shuffle=False, type='DefaultSampler')) test_evaluator = dict( ann_file='data/kitti/kitti_infos_val.pkl', metric=[ '3d', 'bev', 'bbox', ], pklfile_prefix='work_dirs/kitti_val_preds', type='KittiMetric') test_pipeline = [ dict( backend_args=None, coord_type='LIDAR', load_dim=4, type='LoadPointsFromFile', use_dim=4), dict(backend_args=None, type='LoadImageFromFile'), dict( flip=False, img_scale=( 1280, 720, ), pts_scale_ratio=1, transforms=[ dict(keep_ratio=True, scale=0, type='Resize'), dict( rot_range=[ 0, 0, ], scale_ratio_range=[ 1.0, 1.0, ], translation_std=[ 0, 0, 0, ], type='GlobalRotScaleTrans'), dict(type='RandomFlip3D'), dict( point_cloud_range=[ 0, -40, -3, 70.4, 40, 2, ], type='PointsRangeFilter'), ], type='MultiScaleFlipAug3D'), dict(keys=[ 'points', 'img', ], type='Pack3DDetInputs'), ] train_cfg = dict(max_epochs=40, type='EpochBasedTrainLoop', val_interval=1) train_dataloader = dict( batch_size=1, dataset=dict( dataset=dict( ann_file='kitti_infos_train.pkl', backend_args=None, box_type_3d='LiDAR', data_prefix=dict( img='training/image_2', pts='training/velodyne_reduced'), data_root='data/kitti/', filter_empty_gt=False, metainfo=dict(classes=[ 'Pedestrian', 'Cyclist', 'Car', ]), modality=dict(use_camera=True, use_lidar=True), pipeline=[ dict( backend_args=None, coord_type='LIDAR', load_dim=4, type='LoadPointsFromFile', use_dim=4), dict(backend_args=None, type='LoadImageFromFile'), dict( type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), dict( keep_ratio=True, scale=[ ( 640, 192, ), ( 1280, 720, ), ], 
type='RandomResize'), dict( rot_range=[ -0.78539816, 0.78539816, ], scale_ratio_range=[ 0.95, 1.05, ], translation_std=[ 0.2, 0.2, 0.2, ], type='GlobalRotScaleTrans'), dict(flip_ratio_bev_horizontal=0.5, type='RandomFlip3D'), dict( point_cloud_range=[ 0, -40, -3, 70.4, 40, 2, ], type='PointsRangeFilter'), dict( point_cloud_range=[ 0, -40, -3, 70.4, 40, 2, ], type='ObjectRangeFilter'), dict(type='PointShuffle'), dict( keys=[ 'points', 'img', 'gt_bboxes_3d', 'gt_labels_3d', 'gt_bboxes', 'gt_labels', ], type='Pack3DDetInputs'), ], type='KittiDataset'), times=2, type='RepeatDataset'), num_workers=2, sampler=dict(shuffle=True, type='DefaultSampler')) train_pipeline = [ dict( backend_args=None, coord_type='LIDAR', load_dim=4, type='LoadPointsFromFile', use_dim=4), dict(backend_args=None, type='LoadImageFromFile'), dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True), dict( keep_ratio=True, scale=[ ( 640, 192, ), ( 1280, 720, ), ], type='RandomResize'), dict( rot_range=[ -0.78539816, 0.78539816, ], scale_ratio_range=[ 0.95, 1.05, ], translation_std=[ 0.2, 0.2, 0.2, ], type='GlobalRotScaleTrans'), dict(flip_ratio_bev_horizontal=0.5, type='RandomFlip3D'), dict( point_cloud_range=[ 0, -40, -3, 70.4, 40, 2, ], type='PointsRangeFilter'), dict( point_cloud_range=[ 0, -40, -3, 70.4, 40, 2, ], type='ObjectRangeFilter'), dict(type='PointShuffle'), dict( keys=[ 'points', 'img', 'gt_bboxes_3d', 'gt_labels_3d', 'gt_bboxes', 'gt_labels', ], type='Pack3DDetInputs'), ] val_cfg = dict(type='ValLoop') val_dataloader = dict( batch_size=1, dataset=dict( ann_file='kitti_infos_val.pkl', backend_args=None, box_type_3d='LiDAR', data_prefix=dict( img='training/image_2', pts='training/velodyne_reduced'), data_root='data/kitti/', metainfo=dict(classes=[ 'Pedestrian', 'Cyclist', 'Car', ]), modality=dict(use_camera=True, use_lidar=True), pipeline=[ dict( backend_args=None, coord_type='LIDAR', load_dim=4, type='LoadPointsFromFile', use_dim=4), dict(backend_args=None, type='LoadImageFromFile'), dict( flip=False, img_scale=( 1280, 720, ), pts_scale_ratio=1, transforms=[ dict(keep_ratio=True, scale=0, type='Resize'), dict( rot_range=[ 0, 0, ], scale_ratio_range=[ 1.0, 1.0, ], translation_std=[ 0, 0, 0, ], type='GlobalRotScaleTrans'), dict(type='RandomFlip3D'), dict( point_cloud_range=[ 0, -40, -3, 70.4, 40, 2, ], type='PointsRangeFilter'), ], type='MultiScaleFlipAug3D'), dict(keys=[ 'points', 'img', ], type='Pack3DDetInputs'), ], test_mode=True, type='KittiDataset'), num_workers=1, sampler=dict(shuffle=False, type='DefaultSampler')) val_evaluator = dict( ann_file='data/kitti/kitti_infos_val.pkl', metric=[ '3d', 'bev', 'bbox', ], pklfile_prefix='work_dirs/kitti_val_preds', type='KittiMetric') vis_backends = [ dict(type='LocalVisBackend'), ] visualizer = dict( name='visualizer', type='Det3DLocalVisualizer', vis_backends=[ dict(type='LocalVisBackend'), ]) voxel_size = [ 0.05, 0.05, 0.1, ] work_dir = './work_dirs/mvxnet_fpn_dv_second_secfpn_8xb2-80e_kitti-3d-3class'
2025/04/25 16:02:11 - mmengine - INFO - Distributed training is not used, all SyncBatchNorm (SyncBN) layers in the model will be automatically reverted to BatchNormXd layers if they are used.
2025/04/25 16:02:11 - mmengine - INFO - Hooks will be executed in the following order:
before_run:
(VERY_HIGH ) RuntimeInfoHook
(BELOW_NORMAL) LoggerHook
before_train:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(VERY_LOW ) CheckpointHook
before_train_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(NORMAL ) DistSamplerSeedHook
before_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
after_train_iter:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
after_train_epoch:
(NORMAL ) IterTimerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
before_val: (VERY_HIGH ) RuntimeInfoHook
before_val_epoch: (NORMAL ) IterTimerHook
before_val_iter: (NORMAL ) IterTimerHook
after_val_iter:
(NORMAL ) IterTimerHook
(NORMAL ) Det3DVisualizationHook
(BELOW_NORMAL) LoggerHook
after_val_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
(LOW ) ParamSchedulerHook
(VERY_LOW ) CheckpointHook
after_val: (VERY_HIGH ) RuntimeInfoHook
after_train:
(VERY_HIGH ) RuntimeInfoHook
(VERY_LOW ) CheckpointHook
before_test: (VERY_HIGH ) RuntimeInfoHook
before_test_epoch: (NORMAL ) IterTimerHook
before_test_iter: (NORMAL ) IterTimerHook
after_test_iter:
(NORMAL ) IterTimerHook
(NORMAL ) Det3DVisualizationHook
(BELOW_NORMAL) LoggerHook
after_test_epoch:
(VERY_HIGH ) RuntimeInfoHook
(NORMAL ) IterTimerHook
(BELOW_NORMAL) LoggerHook
after_test: (VERY_HIGH ) RuntimeInfoHook
after_run: (BELOW_NORMAL) LoggerHook
2025/04/25 16:02:12 - mmengine - INFO - ------------------------------
2025/04/25 16:02:12 - mmengine - INFO - The length of the dataset: 21
2025/04/25 16:02:12 - mmengine - INFO - The number of instances per category in the dataset:
+------------+--------+
| category   | number |
+------------+--------+
| Pedestrian | 10     |
| Cyclist    | 3      |
| Car        | 47     |
+------------+--------+
2025/04/25 16:02:12 - mmengine - INFO - ------------------------------
2025/04/25 16:02:12 - mmengine - INFO - The length of the dataset: 6
2025/04/25 16:02:12 - mmengine - INFO - The number of instances per category in the dataset:
+------------+--------+
| category   | number |
+------------+--------+
| Pedestrian | 1      |
| Cyclist    | 2      |
| Car        | 15     |
+------------+--------+
Name of parameter - Initialization information
pts_voxel_encoder.vfe_layers.0.0.weight - torch.Size([64, 10]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.vfe_layers.0.1.weight - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.vfe_layers.0.1.bias - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.vfe_layers.1.0.weight - torch.Size([64, 128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.vfe_layers.1.1.weight - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.vfe_layers.1.1.bias - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.fusion_layer.lateral_convs.0.conv.weight - torch.Size([128, 256, 3, 3]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.fusion_layer.lateral_convs.0.conv.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.fusion_layer.lateral_convs.1.conv.weight - torch.Size([128, 256, 3, 3]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.fusion_layer.lateral_convs.1.conv.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.fusion_layer.lateral_convs.2.conv.weight - torch.Size([128, 256, 3, 3]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.fusion_layer.lateral_convs.2.conv.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.fusion_layer.lateral_convs.3.conv.weight - torch.Size([128, 256, 3, 3]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.fusion_layer.lateral_convs.3.conv.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.fusion_layer.lateral_convs.4.conv.weight - torch.Size([128, 256, 3, 3]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.fusion_layer.lateral_convs.4.conv.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.fusion_layer.img_transform.0.weight - torch.Size([128, 640]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.fusion_layer.img_transform.0.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.fusion_layer.img_transform.1.weight - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.fusion_layer.img_transform.1.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.fusion_layer.pts_transform.0.weight - torch.Size([128, 64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.fusion_layer.pts_transform.0.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.fusion_layer.pts_transform.1.weight - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_voxel_encoder.fusion_layer.pts_transform.1.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.conv_input.0.weight - torch.Size([16, 3, 3, 3, 128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.conv_input.1.weight - torch.Size([16]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.conv_input.1.bias - torch.Size([16]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer1.0.0.weight - torch.Size([16, 3, 3, 3, 16]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer1.0.1.weight - torch.Size([16]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer1.0.1.bias - torch.Size([16]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer2.0.0.weight - torch.Size([32, 3, 3, 3, 16]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer2.0.1.weight - torch.Size([32]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer2.0.1.bias - torch.Size([32]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer2.1.0.weight - torch.Size([32, 3, 3, 3, 32]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer2.1.1.weight - torch.Size([32]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer2.1.1.bias - torch.Size([32]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer2.2.0.weight - torch.Size([32, 3, 3, 3, 32]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer2.2.1.weight - torch.Size([32]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer2.2.1.bias - torch.Size([32]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer3.0.0.weight - torch.Size([64, 3, 3, 3, 32]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer3.0.1.weight - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer3.0.1.bias - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer3.1.0.weight - torch.Size([64, 3, 3, 3, 64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer3.1.1.weight - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer3.1.1.bias - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer3.2.0.weight - torch.Size([64, 3, 3, 3, 64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer3.2.1.weight - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer3.2.1.bias - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer4.0.0.weight - torch.Size([64, 3, 3, 3, 64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer4.0.1.weight - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer4.0.1.bias - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer4.1.0.weight - torch.Size([64, 3, 3, 3, 64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer4.1.1.weight - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer4.1.1.bias - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer4.2.0.weight - torch.Size([64, 3, 3, 3, 64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer4.2.1.weight - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.encoder_layers.encoder_layer4.2.1.bias - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.conv_out.0.weight - torch.Size([128, 3, 1, 1, 64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.conv_out.1.weight - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_middle_encoder.conv_out.1.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.0.0.weight - torch.Size([128, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
pts_backbone.blocks.0.1.weight - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.0.1.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.0.3.weight - torch.Size([128, 128, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
pts_backbone.blocks.0.4.weight - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.0.4.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.0.6.weight - torch.Size([128, 128, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
pts_backbone.blocks.0.7.weight - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.0.7.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.0.9.weight - torch.Size([128, 128, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
pts_backbone.blocks.0.10.weight - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.0.10.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.0.12.weight - torch.Size([128, 128, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
pts_backbone.blocks.0.13.weight - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.0.13.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.0.15.weight - torch.Size([128, 128, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
pts_backbone.blocks.0.16.weight - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.0.16.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.1.0.weight - torch.Size([256, 128, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
pts_backbone.blocks.1.1.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.1.1.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.1.3.weight - torch.Size([256, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
pts_backbone.blocks.1.4.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.1.4.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.1.6.weight - torch.Size([256, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
pts_backbone.blocks.1.7.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.1.7.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.1.9.weight - torch.Size([256, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
pts_backbone.blocks.1.10.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.1.10.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.1.12.weight - torch.Size([256, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
pts_backbone.blocks.1.13.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.1.13.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.1.15.weight - torch.Size([256, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
pts_backbone.blocks.1.16.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_backbone.blocks.1.16.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_neck.deblocks.0.0.weight - torch.Size([128, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
pts_neck.deblocks.0.1.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_neck.deblocks.0.1.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_neck.deblocks.1.0.weight - torch.Size([256, 256, 2, 2]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
pts_neck.deblocks.1.1.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_neck.deblocks.1.1.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
pts_bbox_head.conv_cls.weight - torch.Size([18, 512, 1, 1]): NormalInit: mean=0, std=0.01, bias=-4.59511985013459
pts_bbox_head.conv_cls.bias - torch.Size([18]): NormalInit: mean=0, std=0.01, bias=-4.59511985013459
pts_bbox_head.conv_reg.weight - torch.Size([42, 512, 1, 1]): NormalInit: mean=0, std=0.01, bias=0
pts_bbox_head.conv_reg.bias - torch.Size([42]): NormalInit: mean=0, std=0.01, bias=0
pts_bbox_head.conv_dir_cls.weight - torch.Size([12, 512, 1, 1]): NormalInit: mean=0, std=0.01, bias=0
pts_bbox_head.conv_dir_cls.bias - torch.Size([12]): NormalInit: mean=0, std=0.01, bias=0
img_backbone.conv1.weight - torch.Size([64, 3, 7, 7]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.bn1.weight - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.bn1.bias - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer1.0.conv1.weight - torch.Size([64, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer1.0.bn1.weight - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer1.0.bn1.bias - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer1.0.conv2.weight - torch.Size([64, 64, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer1.0.bn2.weight - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer1.0.bn2.bias - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer1.0.conv3.weight - torch.Size([256, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer1.0.bn3.weight - torch.Size([256]): ConstantInit: val=0, bias=0
img_backbone.layer1.0.bn3.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer1.0.downsample.0.weight - torch.Size([256, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer1.0.downsample.1.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer1.0.downsample.1.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer1.1.conv1.weight - torch.Size([64, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer1.1.bn1.weight - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer1.1.bn1.bias - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer1.1.conv2.weight - torch.Size([64, 64, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer1.1.bn2.weight - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer1.1.bn2.bias - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer1.1.conv3.weight - torch.Size([256, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer1.1.bn3.weight - torch.Size([256]): ConstantInit: val=0, bias=0
img_backbone.layer1.1.bn3.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer1.2.conv1.weight - torch.Size([64, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer1.2.bn1.weight - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer1.2.bn1.bias - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer1.2.conv2.weight - torch.Size([64, 64, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer1.2.bn2.weight - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer1.2.bn2.bias - torch.Size([64]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer1.2.conv3.weight - torch.Size([256, 64, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer1.2.bn3.weight - torch.Size([256]): ConstantInit: val=0, bias=0
img_backbone.layer1.2.bn3.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.0.conv1.weight - torch.Size([128, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer2.0.bn1.weight - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.0.bn1.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.0.conv2.weight - torch.Size([128, 128, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer2.0.bn2.weight - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.0.bn2.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.0.conv3.weight - torch.Size([512, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer2.0.bn3.weight - torch.Size([512]): ConstantInit: val=0, bias=0
img_backbone.layer2.0.bn3.bias - torch.Size([512]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.0.downsample.0.weight - torch.Size([512, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer2.0.downsample.1.weight - torch.Size([512]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.0.downsample.1.bias - torch.Size([512]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.1.conv1.weight - torch.Size([128, 512, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer2.1.bn1.weight - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.1.bn1.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.1.conv2.weight - torch.Size([128, 128, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer2.1.bn2.weight - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.1.bn2.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.1.conv3.weight - torch.Size([512, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer2.1.bn3.weight - torch.Size([512]): ConstantInit: val=0, bias=0
img_backbone.layer2.1.bn3.bias - torch.Size([512]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.2.conv1.weight - torch.Size([128, 512, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer2.2.bn1.weight - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.2.bn1.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.2.conv2.weight - torch.Size([128, 128, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer2.2.bn2.weight - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.2.bn2.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.2.conv3.weight - torch.Size([512, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer2.2.bn3.weight - torch.Size([512]): ConstantInit: val=0, bias=0
img_backbone.layer2.2.bn3.bias - torch.Size([512]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.3.conv1.weight - torch.Size([128, 512, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer2.3.bn1.weight - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.3.bn1.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.3.conv2.weight - torch.Size([128, 128, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer2.3.bn2.weight - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.3.bn2.bias - torch.Size([128]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer2.3.conv3.weight - torch.Size([512, 128, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer2.3.bn3.weight - torch.Size([512]): ConstantInit: val=0, bias=0
img_backbone.layer2.3.bn3.bias - torch.Size([512]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.0.conv1.weight - torch.Size([256, 512, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer3.0.bn1.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.0.bn1.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.0.conv2.weight - torch.Size([256, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer3.0.bn2.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.0.bn2.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.0.conv3.weight - torch.Size([1024, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer3.0.bn3.weight - torch.Size([1024]): ConstantInit: val=0, bias=0
img_backbone.layer3.0.bn3.bias - torch.Size([1024]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.0.downsample.0.weight - torch.Size([1024, 512, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer3.0.downsample.1.weight - torch.Size([1024]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.0.downsample.1.bias - torch.Size([1024]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.1.conv1.weight - torch.Size([256, 1024, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer3.1.bn1.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.1.bn1.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.1.conv2.weight - torch.Size([256, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer3.1.bn2.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.1.bn2.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.1.conv3.weight - torch.Size([1024, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer3.1.bn3.weight - torch.Size([1024]): ConstantInit: val=0, bias=0
img_backbone.layer3.1.bn3.bias - torch.Size([1024]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.2.conv1.weight - torch.Size([256, 1024, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer3.2.bn1.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.2.bn1.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.2.conv2.weight - torch.Size([256, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer3.2.bn2.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.2.bn2.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.2.conv3.weight - torch.Size([1024, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer3.2.bn3.weight - torch.Size([1024]): ConstantInit: val=0, bias=0
img_backbone.layer3.2.bn3.bias - torch.Size([1024]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.3.conv1.weight - torch.Size([256, 1024, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer3.3.bn1.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.3.bn1.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.3.conv2.weight - torch.Size([256, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer3.3.bn2.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.3.bn2.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.3.conv3.weight - torch.Size([1024, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer3.3.bn3.weight - torch.Size([1024]): ConstantInit: val=0, bias=0
img_backbone.layer3.3.bn3.bias - torch.Size([1024]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.4.conv1.weight - torch.Size([256, 1024, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer3.4.bn1.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.4.bn1.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.4.conv2.weight - torch.Size([256, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer3.4.bn2.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.4.bn2.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.4.conv3.weight - torch.Size([1024, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer3.4.bn3.weight - torch.Size([1024]): ConstantInit: val=0, bias=0
img_backbone.layer3.4.bn3.bias - torch.Size([1024]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.5.conv1.weight - torch.Size([256, 1024, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer3.5.bn1.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.5.bn1.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.5.conv2.weight - torch.Size([256, 256, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer3.5.bn2.weight - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.5.bn2.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer3.5.conv3.weight - torch.Size([1024, 256, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer3.5.bn3.weight - torch.Size([1024]): ConstantInit: val=0, bias=0
img_backbone.layer3.5.bn3.bias - torch.Size([1024]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer4.0.conv1.weight - torch.Size([512, 1024, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer4.0.bn1.weight - torch.Size([512]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer4.0.bn1.bias - torch.Size([512]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer4.0.conv2.weight - torch.Size([512, 512, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer4.0.bn2.weight - torch.Size([512]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer4.0.bn2.bias - torch.Size([512]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer4.0.conv3.weight - torch.Size([2048, 512, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer4.0.bn3.weight - torch.Size([2048]): ConstantInit: val=0, bias=0
img_backbone.layer4.0.bn3.bias - torch.Size([2048]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer4.0.downsample.0.weight - torch.Size([2048, 1024, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer4.0.downsample.1.weight - torch.Size([2048]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer4.0.downsample.1.bias - torch.Size([2048]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer4.1.conv1.weight - torch.Size([512, 2048, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer4.1.bn1.weight - torch.Size([512]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer4.1.bn1.bias - torch.Size([512]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer4.1.conv2.weight - torch.Size([512, 512, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer4.1.bn2.weight - torch.Size([512]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer4.1.bn2.bias - torch.Size([512]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer4.1.conv3.weight - torch.Size([2048, 512, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer4.1.bn3.weight - torch.Size([2048]): ConstantInit: val=0, bias=0
img_backbone.layer4.1.bn3.bias - torch.Size([2048]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer4.2.conv1.weight - torch.Size([512, 2048, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer4.2.bn1.weight - torch.Size([512]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer4.2.bn1.bias - torch.Size([512]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer4.2.conv2.weight - torch.Size([512, 512, 3, 3]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer4.2.bn2.weight - torch.Size([512]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer4.2.bn2.bias - torch.Size([512]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_backbone.layer4.2.conv3.weight - torch.Size([2048, 512, 1, 1]): KaimingInit: a=0, mode=fan_out, nonlinearity=relu, distribution =normal, bias=0
img_backbone.layer4.2.bn3.weight - torch.Size([2048]): ConstantInit: val=0, bias=0
img_backbone.layer4.2.bn3.bias - torch.Size([2048]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_neck.lateral_convs.0.conv.weight - torch.Size([256, 256, 1, 1]): XavierInit: gain=1, distribution=uniform, bias=0
img_neck.lateral_convs.0.conv.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_neck.lateral_convs.1.conv.weight - torch.Size([256, 512, 1, 1]): XavierInit: gain=1, distribution=uniform, bias=0
img_neck.lateral_convs.1.conv.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_neck.lateral_convs.2.conv.weight - torch.Size([256, 1024, 1, 1]): XavierInit: gain=1, distribution=uniform, bias=0
img_neck.lateral_convs.2.conv.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_neck.lateral_convs.3.conv.weight - torch.Size([256, 2048, 1, 1]): XavierInit: gain=1, distribution=uniform, bias=0
img_neck.lateral_convs.3.conv.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_neck.fpn_convs.0.conv.weight - torch.Size([256, 256, 3, 3]): XavierInit: gain=1, distribution=uniform, bias=0
img_neck.fpn_convs.0.conv.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_neck.fpn_convs.1.conv.weight - torch.Size([256, 256, 3, 3]): XavierInit: gain=1, distribution=uniform, bias=0
img_neck.fpn_convs.1.conv.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_neck.fpn_convs.2.conv.weight - torch.Size([256, 256, 3, 3]): XavierInit: gain=1, distribution=uniform, bias=0
img_neck.fpn_convs.2.conv.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
img_neck.fpn_convs.3.conv.weight - torch.Size([256, 256, 3, 3]): XavierInit: gain=1, distribution=uniform, bias=0
img_neck.fpn_convs.3.conv.bias - torch.Size([256]):
The value is the same before and after calling init_weights of DynamicMVXFasterRCNN
2025/04/25 16:02:13 - mmengine - INFO - Load checkpoint from mvx_faster_rcnn_detectron2-caffe_20e_coco-pretrain_gt-sample_kitti-3-class_moderate-79.3_20200207-a4a6a3c7.pth
2025/04/25 16:02:13 - mmengine - WARNING - "FileClient" will be deprecated in future. Please use io functions in https://mmengine.readthedocs.io/en/latest/api/fileio.html#file-io
2025/04/25 16:02:13 - mmengine - WARNING - "HardDiskBackend" is the alias of "LocalBackend" and the former will be deprecated in future.
2025/04/25 16:02:13 - mmengine - INFO - Checkpoints will be saved to /home/vishva/egal5/test/version110/mmdetection3d-1.1.0/work_dirs/mvxnet_fpn_dv_second_secfpn_8xb2-80e_kitti-3d-3class.
2025/04/25 16:02:22 - mmengine - INFO - Exp name: mvxnet_fpn_dv_second_secfpn_8xb2-80e_kitti-3d-3class_20250425_160201
2025/04/25 16:02:22 - mmengine - INFO - Epoch(train) [1][42/42] lr: 4.1081e-04 eta: 0:05:54 time: 0.2163 data_time: 0.0056 memory: 2058 grad_norm: 129.2541 loss: 4.4913 loss_cls: 1.1495 loss_bbox: 3.2114 loss_dir: 0.1304
2025/04/25 16:02:28 - mmengine - INFO - Results of pred_instances_3d:
----------- AP11 Results ------------
Pedestrian AP11@0.50, 0.50, 0.50: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
Pedestrian AP11@0.25, 0.25, 0.25: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
Cyclist AP11@0.50, 0.50, 0.50: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
Cyclist AP11@0.25, 0.25, 0.25: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
Car AP11@0.70, 0.70, 0.70: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
Car AP11@0.50, 0.50, 0.50: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
Overall AP11@easy, moderate, hard: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
----------- AP40 Results ------------
Pedestrian AP40@0.50, 0.50, 0.50: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
Pedestrian AP40@0.25, 0.25, 0.25: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
Cyclist AP40@0.50, 0.50, 0.50: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
Cyclist AP40@0.25, 0.25, 0.25: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
Car AP40@0.70, 0.70, 0.70: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
Car AP40@0.50, 0.50, 0.50: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
Overall AP40@easy, moderate, hard: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
2025/04/25 16:02:28 - mmengine - INFO - Results of pred_instances_3d:
----------- AP11 Results ------------
Pedestrian AP11@0.50, 0.50, 0.50: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
Pedestrian AP11@0.25, 0.25, 0.25: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
Cyclist AP11@0.50, 0.50, 0.50: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
Cyclist AP11@0.25, 0.25, 0.25: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
Car AP11@0.70, 0.70, 0.70: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
Car AP11@0.50, 0.50, 0.50: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
Overall AP11@easy, moderate, hard: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
----------- AP40 Results ------------
Pedestrian AP40@0.50, 0.50, 0.50: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
Pedestrian AP40@0.25, 0.25, 0.25: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
Cyclist AP40@0.50, 0.50, 0.50: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
Cyclist AP40@0.25, 0.25, 0.25: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
Car AP40@0.70, 0.70, 0.70: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
Car AP40@0.50, 0.50, 0.50: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
Overall AP40@easy, moderate, hard: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
2025/04/25 16:02:29 - mmengine - INFO - Results of pred_instances_3d:
----------- AP11 Results ------------
Pedestrian AP11@0.50, 0.50, 0.50: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
Pedestrian AP11@0.25, 0.25, 0.25: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
Cyclist AP11@0.50, 0.50, 0.50: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
Cyclist AP11@0.25, 0.25, 0.25: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
Car AP11@0.70, 0.70, 0.70: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
Car AP11@0.50, 0.50, 0.50: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
Overall AP11@easy, moderate, hard: bbox AP11:0.0000, 0.0000, 0.0000 bev AP11:0.0000, 0.0000, 0.0000 3d AP11:0.0000, 0.0000, 0.0000
----------- AP40 Results ------------
Pedestrian AP40@0.50, 0.50, 0.50: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
Pedestrian AP40@0.25, 0.25, 0.25: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
Cyclist AP40@0.50, 0.50, 0.50: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
Cyclist AP40@0.25, 0.25, 0.25: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
Car AP40@0.70, 0.70, 0.70: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
Car AP40@0.50, 0.50, 0.50: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
Overall AP40@easy, moderate, hard: bbox AP40:0.0000, 0.0000, 0.0000 bev AP40:0.0000, 0.0000, 0.0000 3d AP40:0.0000, 0.0000, 0.0000
2025/04/25 16:02:29 - mmengine - INFO - Epoch(val) [1][6/6] Kitti metric/pred_instances_3d/KITTI/Pedestrian_3D_AP11_easy_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_BEV_AP11_easy_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_2D_AP11_easy_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_3D_AP11_moderate_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_BEV_AP11_moderate_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_2D_AP11_moderate_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_3D_AP11_hard_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_BEV_AP11_hard_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_2D_AP11_hard_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_3D_AP11_easy_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_BEV_AP11_easy_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_2D_AP11_easy_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_3D_AP11_moderate_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_BEV_AP11_moderate_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_2D_AP11_moderate_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_3D_AP11_hard_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_BEV_AP11_hard_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_2D_AP11_hard_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_3D_AP11_easy_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_BEV_AP11_easy_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_2D_AP11_easy_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_3D_AP11_moderate_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_BEV_AP11_moderate_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_2D_AP11_moderate_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_3D_AP11_hard_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_BEV_AP11_hard_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_2D_AP11_hard_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_3D_AP11_easy_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_BEV_AP11_easy_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_2D_AP11_easy_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_3D_AP11_moderate_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_BEV_AP11_moderate_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_2D_AP11_moderate_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_3D_AP11_hard_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_BEV_AP11_hard_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_2D_AP11_hard_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_3D_AP11_easy_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_BEV_AP11_easy_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_2D_AP11_easy_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_3D_AP11_moderate_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_BEV_AP11_moderate_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_2D_AP11_moderate_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_3D_AP11_hard_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_BEV_AP11_hard_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_2D_AP11_hard_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_3D_AP11_easy_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_BEV_AP11_easy_loose: 
0.0000 Kitti metric/pred_instances_3d/KITTI/Car_2D_AP11_easy_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_3D_AP11_moderate_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_BEV_AP11_moderate_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_2D_AP11_moderate_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_3D_AP11_hard_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_BEV_AP11_hard_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_2D_AP11_hard_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Overall_3D_AP11_easy: 0.0000 Kitti metric/pred_instances_3d/KITTI/Overall_BEV_AP11_easy: 0.0000 Kitti metric/pred_instances_3d/KITTI/Overall_2D_AP11_easy: 0.0000 Kitti metric/pred_instances_3d/KITTI/Overall_3D_AP11_moderate: 0.0000 Kitti metric/pred_instances_3d/KITTI/Overall_BEV_AP11_moderate: 0.0000 Kitti metric/pred_instances_3d/KITTI/Overall_2D_AP11_moderate: 0.0000 Kitti metric/pred_instances_3d/KITTI/Overall_3D_AP11_hard: 0.0000 Kitti metric/pred_instances_3d/KITTI/Overall_BEV_AP11_hard: 0.0000 Kitti metric/pred_instances_3d/KITTI/Overall_2D_AP11_hard: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_3D_AP40_easy_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_BEV_AP40_easy_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_2D_AP40_easy_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_3D_AP40_moderate_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_BEV_AP40_moderate_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_2D_AP40_moderate_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_3D_AP40_hard_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_BEV_AP40_hard_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_2D_AP40_hard_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_3D_AP40_easy_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_BEV_AP40_easy_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_2D_AP40_easy_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_3D_AP40_moderate_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_BEV_AP40_moderate_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_2D_AP40_moderate_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_3D_AP40_hard_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_BEV_AP40_hard_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Pedestrian_2D_AP40_hard_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_3D_AP40_easy_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_BEV_AP40_easy_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_2D_AP40_easy_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_3D_AP40_moderate_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_BEV_AP40_moderate_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_2D_AP40_moderate_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_3D_AP40_hard_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_BEV_AP40_hard_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_2D_AP40_hard_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_3D_AP40_easy_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_BEV_AP40_easy_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_2D_AP40_easy_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_3D_AP40_moderate_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_BEV_AP40_moderate_loose: 0.0000 Kitti 
metric/pred_instances_3d/KITTI/Cyclist_2D_AP40_moderate_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_3D_AP40_hard_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_BEV_AP40_hard_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Cyclist_2D_AP40_hard_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_3D_AP40_easy_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_BEV_AP40_easy_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_2D_AP40_easy_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_3D_AP40_moderate_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_BEV_AP40_moderate_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_2D_AP40_moderate_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_3D_AP40_hard_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_BEV_AP40_hard_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_2D_AP40_hard_strict: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_3D_AP40_easy_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_BEV_AP40_easy_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_2D_AP40_easy_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_3D_AP40_moderate_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_BEV_AP40_moderate_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_2D_AP40_moderate_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_3D_AP40_hard_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_BEV_AP40_hard_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Car_2D_AP40_hard_loose: 0.0000 Kitti metric/pred_instances_3d/KITTI/Overall_3D_AP40_easy: 0.0000 Kitti metric/pred_instances_3d/KITTI/Overall_BEV_AP40_easy: 0.0000 Kitti metric/pred_instances_3d/KITTI/Overall_2D_AP40_easy: 0.0000 Kitti metric/pred_instances_3d/KITTI/Overall_3D_AP40_moderate: 0.0000 Kitti metric/pred_instances_3d/KITTI/Overall_BEV_AP40_moderate: 0.0000 Kitti metric/pred_instances_3d/KITTI/Overall_2D_AP40_moderate: 0.0000 Kitti metric/pred_instances_3d/KITTI/Overall_3D_AP40_hard: 0.0000 Kitti metric/pred_instances_3d/KITTI/Overall_BEV_AP40_hard: 0.0000 Kitti metric/pred_instances_3d/KITTI/Overall_2D_AP40_hard: 0.0000 data_time: 0.0156 time: 0.0828
2025/04/25 16:02:36 - mmengine - INFO - Exp name: mvxnet_fpn_dv_second_secfpn_8xb2-80e_kitti-3d-3class_20250425_160201
2025/04/25 16:02:36 - mmengine - INFO - Epoch(train) [2][42/42] lr: 5.2354e-04 eta: 0:05:13 time: 0.1758 data_time: 0.0050 memory: 2033 grad_norm: 44.3818 loss: 3.1512 loss_cls: 1.0053 loss_bbox: 2.0313 loss_dir: 0.1145
2025/04/25 16:02:37 - mmengine - INFO - Results of pred_instances_3d:
This is the log file data.
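For reference, here is a minimal sketch (not part of the original report) of how the posted log can be scanned for the per-epoch training loss, to check whether the loss keeps decreasing while the APs stay at zero. The log path below is a placeholder; point it at wherever the run actually wrote its log.

```python
# Minimal helper to pull the per-epoch training loss out of an mmengine log
# like the one pasted above. The path is a placeholder, not a real file name.
import re

LOG_PATH = 'work_dirs/mvxnet_fpn_dv_second_secfpn_8xb2-80e_kitti-3d-3class/run.log'

# Matches lines such as:
#   ... Epoch(train) [1][42/42] lr: 4.1081e-04 ... loss: 4.4913 loss_cls: ...
pattern = re.compile(r'Epoch\(train\)\s*\[(\d+)\]\[\d+/\d+\].*?\bloss:\s*([\d.]+)')

with open(LOG_PATH) as f:
    for line in f:
        m = pattern.search(line)
        if m:
            # Print epoch number and the total loss reported for that epoch.
            print(f'epoch {int(m.group(1)):3d}: loss {float(m.group(2)):.4f}')
```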