SoftGroup
Fine-grained recognition of point clouds
When I use indoor high-density point clouds, the recognition of small objects is not very good. How can I improve the final recognition results?
- Set `instance_voxel_cfg -> spatial_shape` smaller?
- Set `instance_voxel_cfg -> scale` bigger?
- Set `grouping_cfg -> radius` smaller?
- Set `grouping_cfg -> mean_active` bigger?
- Set `voxel_cfg -> scale` bigger?
- Set `voxel_cfg -> spatial_shape` smaller?
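For small objects, those knobs generally move together. A hedged sketch of one direction to try (the values are illustrative guesses based on the comments in the config, not settings verified by the authors):

```yaml
# Illustrative direction of change for finer, smaller instances
voxel_cfg:
  scale: 200              # bigger scale -> smaller voxels (5 mm per the config comment)
  spatial_shape: [512, 2048]  # guessed from the 50:[128,512] / 100:[256,1024] pattern
grouping_cfg:
  radius: 0.02            # smaller search radius for finer grouping
  mean_active: 800        # larger cap, since voxels are denser at a higher scale
instance_voxel_cfg:
  scale: 200
  spatial_shape: 20
```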
How about the per-class mIoU of semantic segmentation? Is the mIoU for the small-object classes good?
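Per-class IoU can be computed directly from the predicted and ground-truth semantic label arrays. This is a hedged sketch; `per_class_miou` is a hypothetical helper, not part of the SoftGroup codebase:

```python
import numpy as np

def per_class_miou(pred, gt, num_classes, ignore_label=-100):
    """Per-class IoU from flat semantic label arrays (illustrative helper)."""
    mask = gt != ignore_label           # drop points with the ignore label
    pred, gt = pred[mask], gt[mask]
    ious = []
    for c in range(num_classes):
        inter = np.sum((pred == c) & (gt == c))
        union = np.sum((pred == c) | (gt == c))
        ious.append(inter / union if union > 0 else float('nan'))
    return ious

# Toy example: 6 points, 3 classes
pred = np.array([0, 0, 1, 1, 2, 2])
gt   = np.array([0, 1, 1, 1, 2, 0])
print(per_class_miou(pred, gt, 3))
```

If the IoU of the small-object classes is already low at the semantic stage, tuning the instance grouping alone will not recover them.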
The panoptic segmentation results are good, but the instance segmentation results are clearly poor. I don't know if it is because the objects are too small.
```yaml
model:
  channels: 32
  num_blocks: 7
  semantic_classes: 6
  instance_classes: 6
  sem2ins_classes: []
  semantic_only: False  # True for pretraining
  ignore_label: -100
  grouping_cfg:
    score_thr: 0.2
    radius: 0.04  # k-NN search radius for grouping; smaller for finer detail, tied to scale
    mean_active: 500  # caps the total size after k-NN; larger for finer detail, tied to scale (50:300, 100:500)
    class_numpoint_mean: [221256, 290479, 303281, 7408, 22261, 18097]
    npoint_thr: 0.05  # absolute if class_numpoint == -1, relative if class_numpoint != -1
    ignore_classes: []
  instance_voxel_cfg:
    scale: 100  # 50:2cm 100:1cm 200:5mm
    spatial_shape: 20  # keep unchanged
  train_cfg:
    max_proposal_num: 200
    pos_iou_thr: 0.5
  test_cfg:
    x4_split: False  # split the scene into 4 parts at inference and merge the results; for very large point clouds
    cls_score_thr: 0.001
    mask_score_thr: -0.5
    min_npoint: 100  # min number of points for each instance
  fixed_modules: ['input_conv', 'unet', 'output_layer', 'semantic_linear', 'offset_linear']

data:
  train:
    type: 'house_dataset'
    data_root: 'train_data/house_dataset10'
    prefix: ''
    suffix: ''
    training: True
    repeat: 4
    voxel_cfg:
      scale: 100  # 50:2cm 100:1cm 200:5mm
      spatial_shape: [256, 1024]  # min and max spatial shape of the whole scene after random crop
      max_npoint: 5000000
      min_npoint: 100
  test:
    type: 'house_dataset'
    data_root: 'train_data/house_dataset10'
    prefix: ''
    suffix: ''
    training: False
    voxel_cfg:
      scale: 100
      spatial_shape: [256, 1024]  # 50:[128, 512] 100:[256, 1024]
      max_npoint: 5000000
      min_npoint: 100

dataloader:
  train:
    batch_size: 4  # 4
    num_workers: 4
  test:
    batch_size: 1
    num_workers: 1

optimizer:
  type: 'AdamW'
  lr: 0.004  # 0.004

save_cfg:
  semantic: True
  offset: True
  instance: True

fp16: True
epochs: 128
step_epoch: 50
save_freq: 4
pretrain: 'work_dirs/softgroup_house/last_epoch.pth'
work_dir: ''
```
can you show the GT also?
semantic_labels and instance_labels
The black points are invalid; should I add a prediction class for them?
You can ignore the predictions in the black-point area. Maybe you can increase the instance confidence score threshold:
cls_score_thr: 0.001
this setting.
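For example, raising that threshold would filter out the low-confidence instances cluttering the invalid area (0.2 is only an illustrative value to tune on a validation split):

```yaml
test_cfg:
  cls_score_thr: 0.2  # raised from 0.001; higher -> fewer low-confidence instances kept
```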
How can I improve the recognition of small facets in instance segmentation?
In general, the framework already adaptively segments instances of different sizes based on class_numpoint_mean.
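That adaptive behavior can be illustrated with the relative `npoint_thr` filter described in the config comments ("absolute if class_numpoint == -1, relative if class_numpoint != -1"). This is a sketch of the idea only; `keep_proposal` is a hypothetical name, not SoftGroup code:

```python
# Values taken from the config above
class_numpoint_mean = [221256, 290479, 303281, 7408, 22261, 18097]
npoint_thr = 0.05

def keep_proposal(num_points, cls):
    """Keep a proposal if its point count passes the per-class threshold."""
    mean = class_numpoint_mean[cls]
    if mean == -1:                       # absolute mode: npoint_thr is a raw count
        return num_points >= npoint_thr
    return num_points >= npoint_thr * mean  # relative mode: scaled by class size

# A 400-point proposal passes for small class 3 (mean 7408, threshold ~370)
# but fails for large class 0 (mean 221256, threshold ~11063).
print(keep_proposal(400, 3), keep_proposal(400, 0))
```

So small classes already get proportionally lower point thresholds; making class_numpoint_mean accurate for the small classes matters more than a global threshold.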
Will x4_split affect the prediction accuracy?
If the number of points per scan is not large (<1M), x4_split is not required.
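To illustrate what x4_split does conceptually, here is a sketch that partitions a scan into four xy-quadrants around the centroid. This only illustrates the idea; the SoftGroup implementation also merges the per-part predictions back into one result:

```python
import numpy as np

def split_4(points):
    """Partition an (N, 3) point array into four xy-quadrants (illustrative)."""
    cx, cy = points[:, 0].mean(), points[:, 1].mean()
    masks = [
        (points[:, 0] <  cx) & (points[:, 1] <  cy),
        (points[:, 0] <  cx) & (points[:, 1] >= cy),
        (points[:, 0] >= cx) & (points[:, 1] <  cy),
        (points[:, 0] >= cx) & (points[:, 1] >= cy),
    ]
    return [points[m] for m in masks]

np.random.seed(0)
pts = np.random.rand(1000, 3)
parts = split_4(pts)
# The quadrant masks are exhaustive and disjoint, so every point appears exactly once
print(sum(len(p) for p in parts))
```

Because each part is processed independently, peak GPU memory drops roughly with the per-part point count, at the cost of possible artifacts for instances straddling a split boundary.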
But I have some big scans; can the computation be offloaded to the CPU?
Hi @thangvubk @mcmingchang ,
According to https://github.com/thangvubk/SoftGroup/blob/11dcbfd74b7660a2b82ac6473af107849c7d545f/docs/config_explanation.md, we can see that instance_voxel_cfg -> spatial_shape
means the dimension of instance in terms of voxels, i.e., H, W, D of instance will be 20 voxels.
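As a rough illustration of what a fixed spatial_shape implies: instance coordinates are scaled into voxel units, and an instance larger than the grid has to be represented more coarsely to fit. This is a sketch under that assumption; `instance_voxel_indices` and its normalization step are hypothetical, not the repo's code:

```python
import numpy as np

def instance_voxel_indices(xyz, scale=100, spatial_shape=20):
    """Map an instance's (N, 3) coords into a fixed voxel grid (illustrative)."""
    coords = xyz * scale                      # metres -> voxel units (1 cm at scale=100)
    coords -= coords.min(axis=0)              # shift the instance to the origin
    extent = coords.max(axis=0).clip(min=1e-6)
    # Shrink instances that exceed the grid so H, W, D stay within spatial_shape
    coords *= min(1.0, (spatial_shape - 1) / extent.max())
    return np.floor(coords).astype(np.int64)

np.random.seed(0)
idx = instance_voxel_indices(np.random.rand(500, 3) * 2.0)  # a ~2 m instance
print(idx.shape, idx.min(), idx.max())
```

Under this reading, objects of different physical sizes all land in the same 20-voxel grid, so only their *relative* shape is preserved; the absolute size information comes from elsewhere in the pipeline.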
I would like to ask what we should do if a scene contains objects of many different sizes. For different object types, their H, W, and D may not all be 20 voxels.