RandLA-Net
Performance on ScanNet
Hello @QingyongHu,
Thank you for sharing your work! You mentioned in a previous issue that you'll release details on ScanNet and other datasets such as NPM3D. Can you please give some hints on the performance of RandLA-Net on ScanNet? What preprocessing and config details do you propose? Are they similar to S3DIS?
I did give it a try myself: I preprocessed the original mesh data with mesh rasterization to create finer clouds, followed by grid subsampling at size 0.01 and then 0.04 (as in KPConv), and adapted the S3DIS dataloader of RandLA-Net to ScanNet. I reach a mean IoU of 61 on the default validation set. Is that comparable to what you get?
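In case it helps others reproduce this, here is a rough sketch of what I mean, using Open3D as a stand-in for KPConv's rasterize_mesh / grid_subsampling utilities. The file names, the sampling density, and the omission of label transfer are my simplifying assumptions, not the exact pipeline.

```python
# Rough preprocessing sketch: densify a ScanNet mesh by sampling points on its
# faces, then apply two-stage grid subsampling (0.01 m, then 0.04 m).
# Label interpolation (as done by KPConv's rasterize_mesh) is not shown here.
import open3d as o3d

# Load the raw ScanNet mesh (file name is illustrative).
mesh = o3d.io.read_triangle_mesh("scene0000_00_vh_clean_2.ply")

# Densify the surface: sample points on the mesh faces so covered volumes are
# populated with points rather than only the sparse vertices.
dense = mesh.sample_points_uniformly(number_of_points=2_000_000)

# First grid subsampling at 0.01 m to get a uniform fine cloud,
# then at 0.04 m to match the sub_grid_size used for S3DIS.
fine = dense.voxel_down_sample(voxel_size=0.01)
coarse = fine.voxel_down_sample(voxel_size=0.04)

o3d.io.write_point_cloud("scene0000_00_fine.ply", fine)
o3d.io.write_point_cloud("scene0000_00_sub.ply", coarse)
```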
Thank you in advance for your reply.
@HaniItani
Hey, I am also using ScanNet for segmentation. Could you tell me the reason for creating finer point clouds with rasterize_mesh (i.e. with a much larger number of points) for ScanNet training? Why don't we just use the raw point cloud for training?
Best
Hello @ZhengdiYu,
ScanNet is distributed in mesh format. Comparing the point density of S3DIS to the density of ScanNet's mesh vertices suggests that ScanNet is relatively sparse. Rasterizing the mesh populates the volumes it covers with points and thereby mitigates the sparsity issue. You can still train on the mesh vertices, but performance will be compromised. You may have to tune the downsampling rate in the encoder if you plan on trying it.
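If you do try training on the vertices directly, the knob I mean is the per-layer random-sampling ratio in the network config. A minimal sketch, modeled loosely on the ConfigS3DIS class in RandLA-Net's helper_tool.py; all values below are illustrative assumptions, not tested ScanNet settings:

```python
# Hypothetical ScanNet config, in the style of RandLA-Net's helper_tool.py.
# Every value here is an illustrative assumption.
class ConfigScanNet:
    k_n = 16                          # number of KNN neighbours
    num_layers = 5                    # number of encoder layers
    num_points = 40960                # points fed to the network per sample
    num_classes = 20                  # ScanNet benchmark classes
    sub_grid_size = 0.04              # grid size used in preprocessing (m)

    batch_size = 6                    # training batch size
    val_batch_size = 20               # validation batch size
    train_steps = 500                 # steps per epoch
    val_steps = 100                   # validation steps per epoch

    # Random-sampling ratio per encoder layer; this is the setting to tune if
    # training on the sparser mesh vertices instead of a rasterized cloud.
    sub_sampling_ratio = [4, 4, 4, 4, 2]
    d_out = [16, 64, 128, 256, 512]   # output feature dimension per layer

    max_epoch = 100                   # maximum training epochs
    learning_rate = 1e-2              # initial learning rate
```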
Best regards, Hani
Hey @HaniItani
Thank you so much for your quick reply! It makes sense. Have you ever tried training on the raw point cloud? By the way, have you tried the original KPConv on ScanNet? How do you think its performance compares to RandLA-Net?
Best, Zhengdi
@HaniItani Hi, I recently trained RandLA-Net on ScanNet using the raw point cloud, and I only got 46 mIoU on the default validation set, which is a noticeable gap from the benchmark results. I am not sure whether my setup is correct. Did you try training RandLA-Net with the raw point cloud instead of the rasterized mesh? By the way, could you share more details about your results? Thank you very much.