Pointnet_Pointnet2_pytorch
It takes a long time to train semantic segmentation models
When I train on my own dataset (point clouds of bridges and roads), a single epoch takes 15 hours.
How can I speed up training? Which parameters should I change?
I have already tried changing the stride and block_size parameters, but that didn't help on my dataset.
Can anyone help me out? Thanks a lot!
Are you training on a GPU? You can also increase batch_size.
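Not from the original thread, but as a rough illustration of that advice: a minimal sketch of moving training onto the GPU when one is available and tuning the data pipeline. The dataset here is a synthetic stand-in (random tensors shaped like point-cloud batches), and the specific numbers (batch size, feature count) are assumptions, not values from this repo.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-in for a point-cloud dataset:
# 64 clouds of 1024 points with 9 features each, 13 semantic classes.
points = torch.randn(64, 1024, 9)
labels = torch.randint(0, 13, (64, 1024))
dataset = TensorDataset(points, labels)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

loader = DataLoader(
    dataset,
    batch_size=16,        # raise this until GPU memory is nearly full
    shuffle=True,
    num_workers=0,        # raise (e.g. to 4) so loading runs in parallel with the GPU
    pin_memory=(device.type == "cuda"),  # faster host-to-GPU copies
)

# One batch, just to show the device transfer; a real loop would train here.
for batch_points, batch_labels in loader:
    batch_points = batch_points.to(device, non_blocking=True)
    batch_labels = batch_labels.to(device, non_blocking=True)
    break
```

If the GPU sits idle between batches, data loading is often the real bottleneck, and raising `num_workers` can matter more than the batch size itself.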
I've adjusted the batch size, but saw no great improvement. I thought maybe the GPU wasn't being used, but after checking the code and confirming that GPU memory was in use, I'm not sure where else things could be improved.
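For anyone else debugging this: memory allocation alone doesn't prove the model runs on the GPU. A minimal check (using a tiny `nn.Linear` as a stand-in for the real PointNet model, which is an assumption for illustration only) is to ask where the parameters actually live and how much CUDA memory is allocated:

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Tiny stand-in model; the same check works for any nn.Module.
model = nn.Linear(9, 13).to(device)

# The decisive check: on which device do the parameters actually live?
param_device = next(model.parameters()).device
print("parameters on:", param_device)

if torch.cuda.is_available():
    # Host RAM usage says nothing about the GPU; check CUDA memory instead.
    print("CUDA memory allocated:",
          torch.cuda.memory_allocated() / 1024**2, "MiB")
```

Watching `nvidia-smi` during training gives the same answer from outside the process: low GPU utilization with high memory use usually points to a data-loading bottleneck rather than the model.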
@moniec1717 have you ever solved this issue?
No, I haven't identified the issue with these models yet. Since the main method in my paper is based on KPConv, I decided not to continue training my datasets with PointNet.
The code for PointNet and PointNet++ in this repo is indeed quite outdated. Consider trying newer models such as X-3D or DeLA, which may offer better performance and efficiency.