Correct way of defining the voxel size for custom dataset
Hi,
I am trying to train PointPillar on a custom dataset. I created the data following the custom dataset page. However, during training I get the following error:
```
Exception has occurred: RuntimeError
  File "/home/aagr656/OpenPCDet/pcdet/utils/loss_utils.py", line 60, in forward
    pt = target * (1.0 - pred_sigmoid) + (1.0 - target) * pred_sigmoid
  File "/home/aagr656/OpenPCDet/pcdet/models/dense_heads/anchor_head_template.py", line 128, in get_cls_layer_loss
    cls_loss_src = self.cls_loss_func(cls_preds, one_hot_targets, weights=cls_weights)  # [N, M]
  File "/home/aagr656/OpenPCDet/pcdet/models/dense_heads/anchor_head_template.py", line 217, in get_loss
    cls_loss, tb_dict = self.get_cls_layer_loss()
  File "/home/aagr656/OpenPCDet/pcdet/models/detectors/pointpillar.py", line 27, in get_training_loss
    loss_rpn, tb_dict = self.dense_head.get_loss()
  File "/home/aagr656/OpenPCDet/pcdet/models/detectors/pointpillar.py", line 14, in forward
    loss, tb_dict, disp_dict = self.get_training_loss()
  File "/home/aagr656/OpenPCDet/pcdet/models/__init__.py", line 44, in model_func
    ret_dict, tb_dict, disp_dict = model(batch_dict)
  File "/home/aagr656/OpenPCDet/tools/train_utils/train_utils.py", line 56, in train_one_epoch
    loss, tb_dict, disp_dict = model_func(model, batch)
  File "/home/aagr656/OpenPCDet/tools/train_utils/train_utils.py", line 180, in train_model
    accumulated_iter = train_one_epoch(
  File "/home/aagr656/OpenPCDet/tools/train.py", line 176, in main
    train_model(
  File "/home/aagr656/OpenPCDet/tools/train.py", line 231, in <module>
    main()
RuntimeError: The size of tensor a (995400) must match the size of tensor b (1670400) at non-singleton dimension 1
```
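For what it's worth, the failing line boils down to elementwise math on two tensors whose anchor dimension disagrees. This minimal sketch (the 2-D shapes are just my assumption for illustration) reproduces the same error outside the training loop, which I read as the classification predictions and the anchor targets disagreeing on the number of anchors:

```python
import torch

# Hypothetical 2-D shapes for illustration; dimension 1 is the flattened
# anchor dimension, which is where the two sizes disagree in my run.
one_hot_targets = torch.rand(1, 995400)
pred_sigmoid = torch.rand(1, 1670400)
# Raises: RuntimeError: The size of tensor a (995400) must match the size
# of tensor b (1670400) at non-singleton dimension 1
pt = one_hot_targets * (1.0 - pred_sigmoid) + (1.0 - one_hot_targets) * pred_sigmoid
```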
I have read through the existing issues related to this, and the common suggestion is that the error can be fixed by adjusting the voxel size and the point cloud range. The point cloud range, as I understand it, is dictated by the input data, so there is not much room to change it; that leaves the voxel size.
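From reading the preprocessing code, my understanding is that the BEV grid the anchors are generated on is derived from the point cloud range and the voxel size roughly as follows (a minimal sketch of my mental model, not the exact pcdet code; the rounding is my assumption):

```python
# Sketch of my understanding: pcdet computes the voxel grid as
# (range_max - range_min) / voxel_size per axis; the dense head then
# builds anchors on a (possibly downsampled) version of this grid,
# so predictions and anchor targets must agree on its size.
def grid_size(point_cloud_range, voxel_size):
    return [
        round((point_cloud_range[i + 3] - point_cloud_range[i]) / voxel_size[i])
        for i in range(3)
    ]
```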
Here are the details in the config file I am using:

```yaml
DATA_CONFIG:
    _BASE_CONFIG_: tools/cfgs/dataset_configs/custom_dataset.yaml
    POINT_CLOUD_RANGE: [-172.8, -172.8, -2, 172.8, 172.8, 38]

    DATA_PROCESSOR:
        - NAME: mask_points_and_boxes_outside_range
          REMOVE_OUTSIDE_BOXES: True

        - NAME: shuffle_points
          SHUFFLE_ENABLED: {
              'train': True,
              'test': False
          }

        - NAME: transform_points_to_voxels
          VOXEL_SIZE: [1.44, 1.44, 40]
          MAX_POINTS_PER_VOXEL: 960
          MAX_NUMBER_OF_VOXELS: {
              'train': 32000,
              'test': 40000
          }
```
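To sanity-check these values, I ran them through the helper sketched above:

```python
pc_range = [-172.8, -172.8, -2.0, 172.8, 172.8, 38.0]
vox_size = [1.44, 1.44, 40.0]

gx, gy, gz = grid_size(pc_range, vox_size)
print(gx, gy, gz)          # 240 240 1
print(gx % 16, gy % 16)    # 0 0 -> the X/Y grid is a multiple of 16
# The z extent (38 - (-2) = 40) equals the z voxel size, so gz == 1,
# i.e. a single vertical bin, which is what PointPillars expects.
```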
As the quick check above shows, the voxel size satisfies both requirements I could identify from the existing issues: the point cloud range along the z axis (38 - (-2) = 40) matches the z voxel size, and the point cloud range along the X/Y axes divided by the voxel size (345.6 / 1.44 = 240) is a multiple of 16. Even so, the error persists, and I have tried several other combinations of point cloud range and voxel size that satisfy these two conditions. What is the correct way of defining the voxel size for a custom dataset? And is there anything else I can check to get rid of this error?
Thanks, Ajay