Pointnet_Pointnet2_pytorch

It takes a long time to train semantic segmentation models

Open moniec1717 opened this issue 1 year ago • 4 comments

[Screenshot attached] When I train on my own dataset (point clouds of bridges and roads), a single epoch takes 15 hours. How can I speed up training? Which parameters should I change? I have already tried adjusting stride and block_size, but that didn't help on my dataset. Can anyone help me out? Thanks a lot!
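For context on why stride and block_size matter so much here, a rough sketch of the block-sampling arithmetic (the exact loader semantics in this repo may differ; `blocks_per_scene` is a hypothetical helper, not repo code). In S3DIS-style loaders, each scene is cut into blocks of `block_size` metres stepped by `stride`, so large outdoor scenes like bridges and roads generate a huge number of blocks per epoch:

```python
import math

def blocks_per_scene(extent_m, block_size=1.0, stride=0.5):
    """Approximate number of sampling blocks along one axis of a scene
    of length extent_m metres (hypothetical model of the loader)."""
    return max(1, math.ceil((extent_m - block_size) / stride) + 1)

# A 100 m scene swept in 2-D (x and y): block count scales quadratically,
# and larger strides shrink the epoch dramatically.
for stride in (0.5, 1.0, 2.0):
    n = blocks_per_scene(100, block_size=1.0, stride=stride) ** 2
    print(f"stride={stride}: ~{n} blocks per scene")
```

Under this model, doubling the stride roughly quarters the number of blocks per epoch, at the cost of less overlap between samples.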

moniec1717 avatar Oct 14 '24 14:10 moniec1717

Are you training on a GPU? You can increase batch_size.
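Besides batch size, the DataLoader settings often dominate epoch time when the CPU-side sampling is the bottleneck. A minimal sketch with a dummy stand-in dataset (the shapes and flag values are assumptions, not this repo's actual configuration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy stand-in for a point-cloud dataset:
# 64 clouds of 4096 points with 9 features each.
points = torch.randn(64, 4096, 9)
labels = torch.zeros(64, 4096, dtype=torch.long)
dataset = TensorDataset(points, labels)

loader = DataLoader(
    dataset,
    batch_size=16,                            # larger batches keep the GPU busier
    shuffle=True,
    num_workers=0,                            # raise to e.g. 4 on a real machine;
                                              # CPU-side loading is often the bottleneck
    pin_memory=torch.cuda.is_available(),     # faster host-to-device copies
    drop_last=True,
)

pts, lbl = next(iter(loader))
print(pts.shape)  # torch.Size([16, 4096, 9])
```

If GPU utilization stays low while the CPU is saturated, raising `num_workers` usually helps more than raising `batch_size`.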

kvta1 avatar Oct 18 '24 07:10 kvta1

> Are you training on a GPU? You can increase batch_size.

I've adjusted the batch size, but saw no great improvement. I thought maybe the GPU wasn't being used, but after checking the code and confirming that GPU memory was being consumed, I'm not sure where else to look for improvements.
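Memory being allocated on the GPU doesn't guarantee the compute runs there. A quick sanity check is to confirm the model's parameters actually live on the CUDA device (the `Linear` layer below is just a stand-in for the segmentation model):

```python
import torch

# Pick the device the way most training scripts do.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(9, 13)  # stand-in for the real segmentation model
model = model.to(device)

# Every parameter should be on the chosen device.
on_device = all(p.device.type == device.type for p in model.parameters())
print(f"CUDA available: {torch.cuda.is_available()}")
print(f"Model on {device.type}: {on_device}")
```

During training, also watch `nvidia-smi`: high memory use but low GPU utilization usually means the DataLoader, not the model, is the bottleneck.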

moniec1717 avatar Oct 21 '24 08:10 moniec1717

@moniec1717 have you ever solved this issue?

jianuo1128 avatar Apr 25 '25 00:04 jianuo1128

> @moniec1717 have you ever solved this issue?

No, I never identified the issue with these models. Since the main method in my paper is based on KPConv, I decided not to continue training my datasets with PointNet.

moniec1717 avatar Apr 25 '25 08:04 moniec1717

The PointNet and PointNet++ code here is quite dated. Consider trying newer models such as X-3D or DeLA, which may offer advantages in both performance and efficiency.

lenovo-tyy avatar Nov 12 '25 02:11 lenovo-tyy