delora
reproduce the results of DARPA
Dear officer,
Thanks for sharing this great work. I am running the code with the DARPA SubT Challenge Urban Circuit experiments from the paper *Self-supervised Learning of LiDAR Odometry for Robotic Applications*.
First, I trained a network based on the Alpha course for around 80 epochs, reaching a loss of 0.077586.
Then I used the Beta course for inference.
The results do not seem very close to those in the paper.
I also tried a LOAM-like method; it can output a map, so I think the data I am using is correct.
My questions are:
- Is the trained model not good enough? Can you share your model trained on the DARPA data for testing?
- Do I need to add the LOAM mapping module before evaluating the results on the DARPA data?
Hi @DarrenWong, sorry for the late reply! I am super busy these days with multiple deliverables and ICRA 2022 coming up.
- I will look into it after ICRA, promise! I will retrain a model for you and send you the instructions on how to reproduce the results from the paper.
- Yes, the results in the paper were achieved with the LOAM mapping module. Only the scan2scan part has been replaced.
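To make the scan2scan/scan2map split concrete: a learned scan2scan module outputs one relative transform per scan pair, and chaining those transforms open-loop gives the (drift-prone) odometry that a scan2map stage then refines against the accumulated map. The sketch below is illustrative only, with numpy homogeneous transforms; `pose_matrix` and `compose_trajectory` are hypothetical helpers, not functions from the DELORA or LOAM codebases.

```python
import numpy as np

def pose_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def compose_trajectory(relative_poses):
    """Chain per-scan relative transforms (scan2scan output) into absolute poses.

    The absolute pose of scan k is the product of all relative transforms up
    to k; a scan2map stage would refine each absolute pose against the map
    instead of trusting this open-loop chain, which accumulates drift.
    """
    trajectory = [np.eye(4)]
    for T_rel in relative_poses:
        trajectory.append(trajectory[-1] @ T_rel)
    return trajectory

# Example: three identical 1 m forward steps along x.
step = pose_matrix(np.eye(3), np.array([1.0, 0.0, 0.0]))
traj = compose_trajectory([step, step, step])
print(traj[-1][:3, 3])  # final position: [3. 0. 0.]
```

Since every scan2scan error is baked into all later absolute poses, even small per-pair rotation errors grow into large trajectory errors, which is why the paper's numbers rely on the scan2map refinement.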
Best, Julian
@nubertj Thanks for your kind response. I look forward to your updates and your latest work at ICRA 2022. You can send them to me by email: [email protected]
Hi @nubertj. I encountered a similar problem. Whether or not I modify the parameters in the config, the trained network always makes mistakes when predicting rotation. I want to locate the problem; could you provide the model parameters from the DARPA SubT Challenge Urban Circuit experiments?
Thanks for your help!
Hi @GavyndotLee, how did you generate the map in the screenshot?
- If it is the validation set, I assume you have already integrated into the scan2map module?
- If not, I assume it is the training set?
Independent Note:
I can share the model, but I made some pretty big adjustments to the code before open-sourcing it, so I would need to retrain the model. As I am currently doing an internship in the USA, it is not so easy to find the time for that, but I can try.
But in general, to be clear: the generalization capability of the network depends on the amount of training data you use (as always). Also, the training procedure is not the most stable, so the performance (in particular on KITTI) can vary quite a bit from epoch to epoch. If I trained only on a single DARPA SubT run, e.g. the Alpha course, the performance on the Alpha course was pretty good, but the performance on the Beta course was not great when looking only at the scan2scan output. For this reason, and also because scan2scan alone barely works in practice (the LOAM scan2scan likewise produces a terrible overall trajectory), we integrated it with the scan2map.
If good generalization is needed, I fear that simply more training data has to be used than what was needed for the KITTI results.
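One way to monitor the epoch-to-epoch variation mentioned above, independently of accumulated drift, is to compare consecutive relative motions rather than absolute poses. This is a minimal sketch of a per-pair relative translation error; `relative_translation_errors` and `step` are hypothetical names for illustration, not part of the DELORA evaluation code.

```python
import numpy as np

def relative_translation_errors(gt_poses, est_poses):
    """Translation error between consecutive relative motions.

    gt_poses / est_poses: lists of 4x4 absolute pose matrices.
    Comparing per-pair relative transforms isolates scan2scan quality
    from the drift that accumulates along the whole trajectory.
    """
    errors = []
    for k in range(1, len(gt_poses)):
        # Relative motion from scan k-1 to scan k for both trajectories.
        rel_gt = np.linalg.inv(gt_poses[k - 1]) @ gt_poses[k]
        rel_est = np.linalg.inv(est_poses[k - 1]) @ est_poses[k]
        # Translation part of the residual transform.
        residual = np.linalg.inv(rel_gt) @ rel_est
        errors.append(np.linalg.norm(residual[:3, 3]))
    return errors

def step(x):
    """Pure x-translation pose, for the toy check below."""
    T = np.eye(4)
    T[0, 3] = x
    return T

# Toy check: an estimate that lags ground truth by 0.1 m per step.
gt = [step(0.0), step(1.0), step(2.0)]
est = [step(0.0), step(0.9), step(1.8)]
print(relative_translation_errors(gt, est))  # ~[0.1, 0.1]
```

Tracking a metric like this per epoch (and the analogous rotation residual) would make the instability visible without the confounding effect of open-loop drift.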
Best, Julian
Hi @nubertj, I'm sorry to bother you, but I do need your help. I have integrated scan2scan into the scan2map module of LOAM based on the ROS frame convention (X-forward, Y-left, and Z-up). The map shown was generated from the validation set.
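A frequent source of rotation errors in this kind of integration is an axis-convention mismatch, e.g. feeding camera-convention poses (x-right, y-down, z-forward, as in the KITTI camera frame) into a pipeline that expects the ROS REP-103 body frame (x-forward, y-left, z-up). A minimal sketch of the fixed re-expression, assuming exactly that camera-to-ROS case; `R_CAM_TO_ROS` and `cam_to_ros` are illustrative names, not part of the repo:

```python
import numpy as np

# Fixed rotation mapping a camera-style frame (x-right, y-down, z-forward)
# to the ROS REP-103 body frame (x-forward, y-left, z-up):
# x_ros = z_cam, y_ros = -x_cam, z_ros = -y_cam.
R_CAM_TO_ROS = np.array([
    [0.0, 0.0, 1.0],
    [-1.0, 0.0, 0.0],
    [0.0, -1.0, 0.0],
])

def cam_to_ros(points: np.ndarray) -> np.ndarray:
    """Re-express an (N, 3) point cloud in the ROS convention."""
    return points @ R_CAM_TO_ROS.T

# A point 2 m ahead of the camera (z-forward) becomes 2 m along ROS x.
p = cam_to_ros(np.array([[0.0, 0.0, 2.0]]))
print(p)  # [[2. 0. 0.]]
```

The same rotation must be applied consistently to point clouds and to the predicted relative poses (conjugating the transform, not just rotating translations), otherwise scan2map receives inconsistent geometry and the rotation estimates look systematically wrong.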
In addition, even when I use the training set, the prediction is not satisfactory, as shown in the following figure.
To rule out a problem in scan2map itself, I also include the results of running the LOAM method on the training set and the validation set.