Question about the epoch setting

Open ZhengDonglei opened this issue 1 year ago • 3 comments

Dear authors,

Thank you for sharing this great work. I am running the code on the DARPA SubT Challenge Urban Circuit experiments from the paper Self-supervised Learning of LiDAR Odometry for Robotic Applications.

I have now started retraining the model with your code. I noticed that the number of epochs is set to 10000, and one epoch takes about 1.5 hours on my RTX 3090 server, so following the 10000-epoch setting would take a very long time. My question is: should training be stopped once the loss no longer decreases much?
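
(For reference, a minimal sketch of stopping on a loss plateau, assuming a generic PyTorch-style training loop. The model, data, patience, and min_delta below are illustrative placeholders, not taken from the DeLORA code or configuration.)

```python
import torch
import torch.nn as nn

# Dummy stand-ins so the sketch runs on its own; in practice these would be
# the DeLORA network, dataloader, and geometric loss from the repository.
model = nn.Linear(10, 3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
data = [(torch.randn(8, 10), torch.randn(8, 3)) for _ in range(5)]
loss_fn = nn.MSELoss()

max_epochs = 10000
patience = 20      # stop after this many epochs without improvement (illustrative)
min_delta = 1e-3   # minimum decrease that counts as an improvement (illustrative)

best_loss = float("inf")
epochs_without_improvement = 0

for epoch in range(max_epochs):
    # One training epoch over all batches.
    epoch_loss = 0.0
    for x, y in data:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()
    epoch_loss /= len(data)

    # Track the best loss seen so far and keep a checkpoint of it.
    if epoch_loss < best_loss - min_delta:
        best_loss = epoch_loss
        epochs_without_improvement = 0
        torch.save(model.state_dict(), f"checkpoint_epoch_{epoch}.pth")
    else:
        epochs_without_improvement += 1

    if epochs_without_improvement >= patience:
        print(f"No improvement for {patience} epochs, stopping at epoch {epoch}.")
        break
```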

Wish you the best

ZhengDonglei avatar Oct 17 '22 02:10 ZhengDonglei

Hello, Donglei,

I tried training for 80 epochs and for 100 epochs, and the trained network did not achieve the expected results. I do not know whether this is due to too few training epochs or whether something else went wrong.

How many epochs did you train for? What does the trajectory obtained from the odometry look like?
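
(As a side note, a minimal sketch for visualizing such an estimated trajectory, assuming the poses are exported in KITTI-style format with one 3x4 row-major transform per line; the file name is a placeholder.)

```python
import numpy as np
import matplotlib.pyplot as plt

# Load a KITTI-style pose file: one 3x4 row-major transform (12 values) per line.
poses = np.loadtxt("predicted_poses.txt").reshape(-1, 3, 4)
xyz = poses[:, :, 3]  # translation part of each pose

# Top-down view: x-z spans the ground plane in the KITTI camera convention;
# use x-y instead for a typical LiDAR/body frame.
plt.plot(xyz[:, 0], xyz[:, 2], label="estimated odometry")
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.axis("equal")
plt.legend()
plt.show()
```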

Thank you for your reply.

GavyndotLee avatar Oct 23 '22 03:10 GavyndotLee

Hi guys, thanks for reporting! I am at IROS this week, so I won't have time to look into it in much detail. It has already been 2 years since I worked on this, but I can try to look into the training process after the conference.

Look here for a similar question: https://github.com/leggedrobotics/delora/issues/17. For sure I never trained the model for more than 200 epochs. Also, the training is not super stable: after training has converged, the loss remains about the same from epoch to epoch, but the performance can still differ. Also, training only on the DARPA SubT dataset might be a bit limiting. Is this what you are doing?
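
(One practical consequence of this: since the training loss plateaus while performance still fluctuates, checkpoints are better compared by validation odometry error than by training loss. A minimal sketch, assuming checkpoints saved as checkpoint_epoch_*.pth and user-supplied helpers; none of these names come from the DeLORA code.)

```python
import glob
import torch

def pick_best_checkpoint(build_model, evaluate_trajectory_error,
                         pattern="checkpoint_epoch_*.pth"):
    """Return (path, error) of the checkpoint with the lowest validation odometry error.

    build_model: callable returning a fresh (untrained) network.
    evaluate_trajectory_error: callable(model) -> float, e.g. relative pose
        error on a held-out sequence such as one of the SubT runs.
    """
    best_path, best_error = None, float("inf")
    for path in sorted(glob.glob(pattern)):
        model = build_model()
        model.load_state_dict(torch.load(path, map_location="cpu"))
        model.eval()
        error = evaluate_trajectory_error(model)
        if error < best_error:
            best_path, best_error = path, error
    return best_path, best_error
```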

I will try to come back to you after the conference.

Best, Julian

nubertj avatar Oct 23 '22 07:10 nubertj

Thank you for the timely reply; looking forward to hearing about further progress.

GavyndotLee avatar Oct 26 '22 02:10 GavyndotLee