
EuRoC's ATE is worse than the paper's

cheaster opened this issue 8 years ago • 8 comments

Has anyone tested on EuRoC? The ATE I get is above 6 cm, which is worse than the paper's 3 cm.

cheaster avatar Mar 04 '17 05:03 cheaster

Hi, I have the same experience: the EuRoC errors are larger than in the paper, for all sequences. I don't know why at the moment. I did notice that the author used a modified Python script to calculate ATE, available here: https://github.com/raulmur/evaluate_ate_scale. However, even with this modified script, I cannot get errors as low as in the paper. Any suggestions are highly appreciated.
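For reference, the difference in the modified script is that it aligns the estimated trajectory to the ground truth with a similarity transform (rotation, translation, and a scale factor) before computing the RMSE of the residuals. A minimal sketch of that idea (not the actual evaluate_ate_scale code; `gt` and `est` are assumed to be 3xN numpy arrays of already-associated positions):

```python
# Minimal sketch of "ATE with scale": Umeyama-style alignment of est -> gt
# with rotation R, translation t, and scale s, then RMSE of the residuals.
import numpy as np

def ate_with_scale(gt, est):
    gt_c = gt - gt.mean(axis=1, keepdims=True)    # center both point sets
    est_c = est - est.mean(axis=1, keepdims=True)
    W = est_c @ gt_c.T                            # 3x3 correlation matrix
    U, d, Vt = np.linalg.svd(W)
    # reflection guard so R is a proper rotation
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U) * np.linalg.det(Vt))])
    R = Vt.T @ S @ U.T                            # optimal rotation est -> gt
    s = (d * np.diag(S)).sum() / (est_c ** 2).sum()   # optimal scale
    t = gt.mean(axis=1, keepdims=True) - s * (R @ est.mean(axis=1, keepdims=True))
    residuals = gt - (s * (R @ est) + t)
    rmse = np.sqrt((residuals ** 2).sum(axis=0).mean())
    return rmse, s
```

For a well-calibrated stereo system the recovered scale should come out very close to 1, which is why evaluate_ate.py and evaluate_ate_scale.py give similar numbers on EuRoC stereo runs.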

IQ17 avatar May 19 '17 15:05 IQ17

Hi @cheaster @IQ17, how do I use the EuRoC ground truth together with CameraTrajectory.txt to get the ATE error? The TUM benchmark scripts can't be used directly. I'm new to SLAM and have nobody to ask. Thanks!

nskyzone avatar Jun 21 '17 11:06 nskyzone

Hi @pigbreeder

Actually, the author provides code to output the trajectory in the TUM benchmark format. See the call SLAM.SaveTrajectoryTUM("CameraTrajectory.txt"); in the example code.

The TUM benchmark provides Python 2 evaluation scripts (evaluate_ate.py and associate.py), or you can use the modified version linked above.

Then all you need to do is run the Python script :)
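One caveat when feeding the EuRoC ground truth to those scripts: data.csv uses comma separators, nanosecond timestamps, and a w-first quaternion, while the TUM format expects space-separated lines with second timestamps and the quaternion as qx qy qz qw. A minimal conversion sketch (my own code, assuming the default EuRoC file layout; adjust paths as needed):

```python
# Convert EuRoC mav0/state_groundtruth_estimate0/data.csv to TUM format:
# "timestamp tx ty tz qx qy qz qw", timestamps in seconds.
import csv

with open("data.csv") as f_in, open("groundtruth_tum.txt", "w") as f_out:
    for row in csv.reader(f_in):
        if row[0].startswith("#"):          # skip the header line
            continue
        t = float(row[0]) * 1e-9            # ns -> s, matching CameraTrajectory.txt
        px, py, pz = row[1], row[2], row[3]
        qw, qx, qy, qz = row[4], row[5], row[6], row[7]   # EuRoC stores w first
        f_out.write(f"{t:.6f} {px} {py} {pz} {qx} {qy} {qz} {qw}\n")
```

After this, something like ./evaluate_ate.py groundtruth_tum.txt CameraTrajectory.txt should work directly, since both files then use second-based timestamps.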

IQ17 avatar Jun 22 '17 12:06 IQ17

Hi everyone. @pigbreeder see #364. @IQ17 @cheaster how did you compute the ATE for EuRoC? I used ./evaluate_ate_scale.py MH01/mav0/state_groundtruth_estimate0/data.csv KeyFrameTrajectory.txt (the first file is the ground truth, the second the estimated trajectory). Thanks

Toumi0812 avatar Jun 29 '17 15:06 Toumi0812

I've been testing the EuRoC V1_02_medium dataset and am also seeing a worse translation RMSE: I get an error of 0.065 m compared with the ORB_SLAM2 paper's 0.020 m.

I'm using the current ORB_SLAM2 code, unmodified except to turn the GUI off. I get similar results using evaluate_ate_scale.py or evaluate_ate.py (as I'd expect for a stereo system!). For me the ORB_SLAM2 output file is CameraTrajectory.txt (not the KeyFrameTrajectory.txt mentioned by @Toumi0812). I modified associate.py slightly to support scaling timestamps, and scale the data.csv timestamps by 1e-9 to match the CameraTrajectory.txt timestamps.
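Roughly, the change amounts to multiplying each file's timestamps by a factor when reading it, so both trajectories are in seconds before matching. A standalone sketch of that association step (my own simplification, not a patch to the actual associate.py; you would call read_stamped_lines("data.csv", scale=1e-9) for the EuRoC ground truth):

```python
def read_stamped_lines(path, scale=1.0):
    """Return {timestamp_in_seconds: remaining fields} from a TUM- or CSV-style file."""
    stamps = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            fields = line.replace(",", " ").split()   # handle CSV or TUM separators
            stamps[float(fields[0]) * scale] = fields[1:]
    return stamps

def associate(first, second, max_difference=0.02):
    """Match each timestamp in `first` to its nearest neighbour in `second`."""
    second_keys = sorted(second)
    matches = []
    for a in sorted(first):
        b = min(second_keys, key=lambda t: abs(t - a))
        if abs(b - a) < max_difference:
            matches.append((a, b))
    return matches
```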

Results:

evaluate_ate.py --verbose ~/Downloads/mav0/state_groundtruth_estimate0/data.csv CameraTrajectory.txt
compared_pose_pairs 1593 pairs
absolute_translational_error.rmse 0.064770 m
absolute_translational_error.mean 0.062399 m
absolute_translational_error.median 0.062048 m
absolute_translational_error.std 0.017366 m
absolute_translational_error.min 0.014989 m
absolute_translational_error.max 0.112062 m

(with evaluate_ate_scale.py I get a scale of 1.011233 and an absolute_translational_error.rmse of 0.061614 m)

Any suggestions to obtain the paper's results would be welcome - I would like to make sure I'm not missing something!

mattmyne avatar Jul 13 '17 17:07 mattmyne

@mattmyne, how did you get CameraTrajectory.txt? I only get KeyFrameTrajectory.txt as output (just 100-200 keyframes). How many frame poses are in CameraTrajectory.txt?

Thanks

Toumi0812 avatar Jul 13 '17 17:07 Toumi0812

@Toumi0812 stereo_euroc.cc in the Examples/Stereo directory calls SLAM.SaveTrajectoryTUM("CameraTrajectory.txt") as its last call before returning; SLAM is a System object. For the V1_02_medium dataset 1612 poses are exported. How are you generating KeyFrameTrajectory.txt, and are the RMSE values closer to the paper's for those?
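If in doubt about which file you are evaluating, a quick count (my own snippet) makes the difference obvious: CameraTrajectory.txt should contain one pose per processed frame (~1612 for V1_02_medium), while KeyFrameTrajectory.txt holds only the keyframes.

```python
# Count the pose lines in each trajectory file, skipping blanks and comments.
for name in ("CameraTrajectory.txt", "KeyFrameTrajectory.txt"):
    with open(name) as f:
        n = sum(1 for line in f if line.strip() and not line.startswith("#"))
    print(f"{name}: {n} poses")
```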

mattmyne avatar Jul 13 '17 21:07 mattmyne

I have the same problem with the V1_02_medium dataset. Has this been solved? If so, please let me know the solution.

Thanks

ghost avatar Apr 28 '22 16:04 ghost