
reproduce the results of DARPA

Open DarrenWong opened this issue 2 years ago • 5 comments

Dear officer,

Thanks for sharing this great work. I am running the code on the DARPA SubT Challenge Urban Circuit experiments from the paper "Self-supervised Learning of LiDAR Odometry for Robotic Applications".

First, I trained a network on the Alpha course for around 80 epochs, reaching a loss of 0.077586 (screenshot attached).

Then I used the Beta course for inference (images: darpa_beta_map_darpa_00_2d, darpa_beta_map_darpa_00_3d).

The results do not seem very close to those in the paper.


I also tried a LOAM-like method, which produces a reasonable map, so I believe the data I am using is correct (image: result_legoloam).

My questions are:

  1. Is my trained model not good enough? Could you share your model trained on DARPA for testing?
  2. Do I need to add the LOAM mapping module before evaluating the results on DARPA?

DarrenWong avatar Apr 24 '22 03:04 DarrenWong

Hi @DarrenWong, sorry for the late reply! I am super busy these days with multiple deliverables and ICRA 2022 coming up.

  1. I will look into it after ICRA, promise! I will retrain a model for you and send you the instructions on how to reproduce the results from the paper.
  2. Yes, the results in the paper were achieved with the LOAM mapping module. Only the scan2scan has been replaced.
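For context, the integration described above amounts to swapping LOAM's geometric scan-to-scan registration for the network's predicted relative poses, which the scan2map module then refines. A minimal sketch of how such per-scan relative transforms chain into an (unrefined, drift-prone) trajectory; all names here are illustrative, not from the DeLORA codebase:

```python
import numpy as np

def compose_trajectory(relative_poses):
    """Chain per-scan relative SE(3) transforms (4x4 matrices),
    e.g. a network's scan2scan predictions, into absolute poses."""
    pose = np.eye(4)
    trajectory = [pose.copy()]
    for T_rel in relative_poses:
        pose = pose @ T_rel  # accumulate odometry (drift grows over time)
        trajectory.append(pose.copy())
    return trajectory

# Example: two identical forward steps of 1 m along x.
step = np.eye(4)
step[0, 3] = 1.0
traj = compose_trajectory([step, step])
print(traj[-1][0, 3])  # → 2.0
```

A scan2map back-end would then re-register each scan against the accumulated map to correct the drift that pure composition leaves in.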

Best, Julian

nubertj avatar May 19 '22 19:05 nubertj

@nubertj Thanks for your kind response. Looking forward to your updates and to your latest work at ICRA 2022. You can send them to me by email: [email protected]

DarrenWong avatar May 20 '22 02:05 DarrenWong

Hi @nubertj. I encountered a similar problem. Whether or not I modify the parameters in the config, the trained network always makes mistakes when predicting rotation. I want to locate the problem; could you provide the model parameters for the DARPA SubT Challenge Urban Circuit experiments? (image: fig)

Thanks for your help!
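One way to check whether rotation really is the failing component (an illustrative diagnostic, not part of the repo) is to compare each predicted relative rotation against ground truth using the geodesic angle between rotation matrices:

```python
import numpy as np

def rotation_error_deg(R_pred, R_gt):
    """Geodesic distance in degrees between two 3x3 rotation matrices."""
    R_err = R_pred.T @ R_gt
    # trace(R) = 1 + 2*cos(theta); clip guards against numerical noise
    cos_theta = np.clip((np.trace(R_err) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

def rot_z(deg):
    """Rotation about the z-axis by the given angle in degrees."""
    c, s = np.cos(np.radians(deg)), np.sin(np.radians(deg))
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# A 5-degree prediction vs. an 8-degree ground truth yields a 3-degree error.
print(rotation_error_deg(rot_z(5.0), rot_z(8.0)))  # → 3.0
```

Plotting this per-frame error over a sequence makes it easy to see whether rotation errors are uniform (suggesting a convention or sign issue) or concentrated at sharp turns (suggesting a data or capacity issue).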

GavyndotLee avatar Nov 14 '22 08:11 GavyndotLee

Hi @GavyndotLee, how did you generate the map in the screenshot?

  1. If it is the validation set, I assume you have already integrated with the scan2map module?
  2. If not, I assume it is the training set?

Independent Note:

I can share the model, but I made some pretty big adjustments to the code before open-sourcing it, so I would need to retrain the model. As I am currently doing an internship in the USA, it is not so easy to find the time for that, but I can try.

But in general, to make sure: the generalization capability of the network clearly depends on the amount of training data you use (as always). Also, the training procedure is not the most stable one, so the performance (in particular on KITTI) can vary quite a bit from epoch to epoch. If I trained only on a single DARPA SubT run, e.g. the Alpha course, the performance on the Alpha course was pretty good, but the performance on the Beta course was not great when only looking at the scan2scan output. For this reason, and also because scan2scan alone barely works in practice (the LOAM scan2scan likewise produces a terrible overall trajectory), we integrated it with the scan2map.

If good generalization is needed, I fear that simply more training data is required, as was used for the KITTI results.

Best, Julian

nubertj avatar Nov 16 '22 01:11 nubertj

Hi @nubertj, I'm sorry to bother you, but I do need your help. I have integrated scan2scan into the scan2map module of LOAM based on the ROS frame convention (X-forward, Y-left, Z-up). The map shown was generated from the validation set.
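A frame-convention mismatch between the sensor and the ROS body frame (X-forward, Y-left, Z-up) is a common cause of systematic rotation errors in this kind of integration. As an illustrative check (names and the example basis change are assumptions, not from the codebase), a relative transform can be re-expressed under a fixed change of basis:

```python
import numpy as np

def reexpress_transform(T, C):
    """Re-express a 4x4 relative transform T under a fixed 4x4
    change-of-basis C between the sensor axes and the target
    (e.g. ROS X-forward/Y-left/Z-up) axes: C @ T @ C^-1."""
    return C @ T @ np.linalg.inv(C)

# Hypothetical example: sensor axes rotated 180 deg about z relative
# to the body frame (x-back, y-right -> x-forward, y-left).
C = np.diag([-1.0, -1.0, 1.0, 1.0])
T = np.eye(4)
T[0, 3] = 1.0  # 1 m of motion along the sensor's own x-axis
print(reexpress_transform(T, C)[0, 3])  # → -1.0
```

If applying the correct basis change to the network's output fixes the rotation behavior, the problem is the convention rather than the model.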

In addition, even when I use the training set, the prediction is not satisfactory, as shown in the attached figure (Fig1).

To rule out a problem in scan2map itself, I also attach the results of running the LOAM method on the training set and the validation set (Fig2, Fig3).

GavyndotLee avatar Nov 16 '22 07:11 GavyndotLee