Pointnet_Pointnet2_pytorch
S3DIS data loader: why is there no subtraction step for the z dimension?
```python
selected_points = points[selected_point_idxs, :]  # num_point * 6
current_points = np.zeros((self.num_point, 9))    # num_point * 9
current_points[:, 6] = selected_points[:, 0] / self.room_coord_max[room_idx][0]
current_points[:, 7] = selected_points[:, 1] / self.room_coord_max[room_idx][1]
current_points[:, 8] = selected_points[:, 2] / self.room_coord_max[room_idx][2]
selected_points[:, 0] = selected_points[:, 0] - center[0]
selected_points[:, 1] = selected_points[:, 1] - center[1]
# why is there no subtraction step for the z dimension???
selected_points[:, 3:6] /= 255.0
current_points[:, 0:6] = selected_points
```
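For reference, here is a minimal standalone sketch of what the snippet above computes, with the loader's `self.`/`room_idx` indexing flattened into plain function arguments (the function name `make_block_features` is made up for illustration). It shows that channels 6-8 are the room-normalized coordinates, x and y are centered on the sampled block, and z keeps its absolute height:

```python
import numpy as np

def make_block_features(selected_points, center, room_coord_max):
    """Build the 9-channel per-point features used for S3DIS blocks.

    selected_points: (N, 6) array of [x, y, z, r, g, b]
    center:          (x, y) center of the sampled block
    room_coord_max:  (x, y, z) maximum coordinates of the room
    """
    num_point = selected_points.shape[0]
    current_points = np.zeros((num_point, 9))
    # channels 6-8: coordinates normalized to [0, 1] by the room extent
    current_points[:, 6] = selected_points[:, 0] / room_coord_max[0]
    current_points[:, 7] = selected_points[:, 1] / room_coord_max[1]
    current_points[:, 8] = selected_points[:, 2] / room_coord_max[2]
    pts = selected_points.copy()
    # center x and y on the block; z is left as absolute height
    pts[:, 0] -= center[0]
    pts[:, 1] -= center[1]
    # scale RGB from [0, 255] to [0, 1]
    pts[:, 3:6] /= 255.0
    current_points[:, 0:6] = pts
    return current_points
```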
I have the same question. Do you know the answer yet? If so, please share. Many thanks!
In line 180 of train_semseg.py, points are rotated around the z axis by provider.rotate_point_cloud_z, so this subtraction step is not needed for z.
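To see why a z-axis rotation leaves the z coordinate untouched, here is a minimal sketch of that augmentation (not the repo's exact code, just the standard rotation about z):

```python
import numpy as np

def rotate_point_cloud_z(batch_xyz, angle=None):
    """Rotate a (B, N, 3) point cloud around the z axis.

    x and y are mixed by the rotation; the z column passes through unchanged,
    so absolute height remains a meaningful feature after augmentation.
    """
    if angle is None:
        angle = np.random.uniform() * 2 * np.pi
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return batch_xyz @ rot.T
```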
Also, why is there no normalization done on the pointclouds for semantic segmentation? Normalization is done for modelnet and shapenet but not s3dis dataloader, why?
It is done; please check line 67 of Pointnet_Pointnet2_pytorch/data_utils/S3DISDataLoader.py.