ATOM alternative methods comparisons

Open miguelriemoliveira opened this issue 11 months ago • 17 comments

The idea is to develop several alternative comparisons using state-of-the-art implementations "converted" to ATOM, i.e., made to generate an ATOM dataset that can be evaluated the same way our calibrations are.

Some alternatives:

  • [x] https://github.com/hku-mars/livox_camera_calib (targetless -> will not use)
  • [ ] opencv hand-eye camera calibration (#912)
  • [ ] opencv stereo camera calibration (Implemented, to be used as canonical example for the others) (#938)
  • [ ] RWHE-Calib (Ali et al., MATLAB code) (#939)
  • [ ] kalibr (already implemented) (#938)
  • [ ] https://github.com/mikeferguson/robot_calibration
  • [ ] https://github.com/PJLab-ADG/SensorsCalibration (lidar2camera #913)
  • [ ] https://github.com/PJLab-ADG/SensorsCalibration (lidar2lidar #914)
  • [ ] https://github.com/IFL-CAMP/easy_handeye.git

Please add any others you can think of.
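
For reference, a minimal sketch of what such a "conversion" could look like, i.e., writing the transform estimated by an external method back into an ATOM-style dataset JSON so our evaluation scripts can be reused. The key names used below ('collections', 'transforms', 'trans', 'quat') are assumptions about the dataset schema and may need adapting:

```python
# Hypothetical sketch: overwrite one transform in every collection of an
# ATOM-style dataset with the result of an external calibration method, so the
# resulting JSON can be evaluated like our own calibrations.
import json

def export_to_atom_dataset(dataset_path, output_path, transform_key, trans, quat):
    """transform_key -- e.g. 'base_link-rgb_world_link' (parent-child convention assumed)
    trans -- [x, y, z] translation in meters
    quat  -- [qx, qy, qz, qw] rotation quaternion
    """
    with open(dataset_path, 'r') as f:
        dataset = json.load(f)

    for collection in dataset['collections'].values():
        collection['transforms'][transform_key]['trans'] = list(trans)
        collection['transforms'][transform_key]['quat'] = list(quat)

    with open(output_path, 'w') as f:
        json.dump(dataset, f, indent=2)
```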

miguelriemoliveira avatar Mar 21 '24 10:03 miguelriemoliveira

Hi @miguelriemoliveira and @manuelgitgomes! Quick question about the first of these alternatives to ATOM: Livox is a targetless calibration method. As such, can we really compare it to ATOM and, perhaps more importantly, does it even make sense to compare these results? I assume targetless calibration is used for different situations than what we usually have with ATOM (at least most targetless calibration papers I've read dealt with situations where no patterns were available, like roadside LiDARs).

Kazadhum avatar Mar 21 '24 14:03 Kazadhum

As a note, I think we might encounter a problem later down the line with the OpenCV calibration. The OpenCV method does not "accept" partial detections, and we only have 5 collections in our real riwmpbot dataset with non-partial detections of the hand_pattern.

I don't think we should worry about this for now and instead focus on implementing these calibration alternatives, but it might be good to keep in mind to be able to plan ahead.
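
As a side note, something like the hypothetical helper below could be used to check which collections have complete (non-partial) detections before handing a dataset to an OpenCV-style calibration; the label layout it assumes (labels[pattern][sensor] with detected/idxs) may not match the actual dataset schema exactly:

```python
# Hypothetical sketch: list the collections in which a given sensor fully
# detected a given pattern, i.e., the detections usable by methods that do not
# accept partial detections.
import json

def complete_collections(dataset_path, sensor, pattern, num_corners):
    with open(dataset_path, 'r') as f:
        dataset = json.load(f)

    keys = []
    for key, collection in dataset['collections'].items():
        label = collection['labels'][pattern][sensor]
        if label['detected'] and len(label['idxs']) == num_corners:
            keys.append(key)
    return keys

# Example (hypothetical names / values):
# keys = complete_collections('dataset.json', 'rgb_hand', 'hand_pattern', 9 * 6)
# print(f'{len(keys)} collections with complete detections: {keys}')
```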

Tagging @miguelriemoliveira and @manuelgitgomes for visibility.

Kazadhum avatar Mar 21 '24 18:03 Kazadhum

Hi @Kazadhum,

Livox is a targetless calibration method. As such, can we really compare it to ATOM and, perhaps more importantly, does it even make sense to compare these results? I assume targetless calibration is used for different situations than what we usually have with ATOM (at least most targetless calibration papers I've read dealt with situations where no patterns were available, like roadside LiDARs).

You are right. It does not make sense to compare against targetless approaches, or at least we can confidently say that, at this stage, we should first spend some time searching for target-based methods.

miguelriemoliveira avatar Mar 22 '24 15:03 miguelriemoliveira

As a note, I think we might encounter a problem later down the line with the OpenCV calibration. The OpenCV method does not "accept" partial detections, and we only have 5 collections in our real riwmpbot dataset with non-partial detections of the hand_pattern.

I don't think we should worry about this for now and instead focus on implementing these calibration alternatives, but it might be good to keep in mind to be able to plan ahead.

Right. We had this limitation with other approaches already, for example when we used the OpenCV stereo camera calibration. Most of these methods use chessboards as patterns, and chessboards do not support partial detections.
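
To illustrate the limitation: OpenCV's chessboard detector only reports success when the full inner-corner grid is visible, so a partially occluded board yields no usable detection at all. The file name and board size below are placeholders:

```python
# Minimal illustration: cv2.findChessboardCorners only returns found == True
# when *all* inner corners of the board are visible in the image.
import cv2

image = cv2.imread('collection_000_rgb.png', cv2.IMREAD_GRAYSCALE)  # hypothetical file
pattern_size = (9, 6)  # inner corners of the chessboard; adjust to the real board

found, corners = cv2.findChessboardCorners(image, pattern_size)
if found:
    # Refine corner locations to sub-pixel accuracy.
    corners = cv2.cornerSubPix(
        image, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
else:
    print('Partial or missing board: this collection cannot be used.')
```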

miguelriemoliveira avatar Mar 22 '24 15:03 miguelriemoliveira

Right. We had this limitation with other approaches already, for example when we used the OpenCV stereo camera calibration. Most of these methods use chessboards as patterns, and chessboards do not support partial detections.

I see. I think in order to test these alternatives during their implementation we can use datasets from simulated systems. Then, when they are working correctly, we can get new real datasets with more non-partial detections. What do you think?

Kazadhum avatar Mar 22 '24 15:03 Kazadhum

I see. I think in order to test these alternatives during their implementation we can use datasets from simulated systems. Then, when they are working correctly, we can get new real datasets with more non-partial detections. What do you think?

That's it. For datasets (real or sim) that are meant to be used by other approaches for comparison, we need to make sure we have "enough" non-partial detections.

miguelriemoliveira avatar Mar 22 '24 18:03 miguelriemoliveira

Hi @Kazadhum and @manuelgitgomes,

please create an issue for each method you start working on, and add the issue number to the checklist above.

miguelriemoliveira avatar Mar 27 '24 19:03 miguelriemoliveira

About lidar to camera calibration, this is useful (#915).

miguelriemoliveira avatar Apr 01 '24 08:04 miguelriemoliveira

Hi @miguelriemoliveira! Since I haven't worked on this for the past few weeks, I'm now checking that these methods work properly before running batch calibrations, starting with the OpenCV method for eye-to-hand calibration.

Running: rosrun atom_evaluation cv_eye_to_hand.py -json $ATOM_DATASETS/riwmpbot_real/merged/dataset.json -c rgb_world -p hand_pattern -hl flange -bl base_link -ctgt -uic

we get:

Deleted collections: ['001', '036', '037', '038']: at least one detection by a camera should be present.
After filtering, will use 59 collections: ['000', '002', '003', '004', '005', '006', '007', '008', '009', '010', '011', '012', '013', '014', '015', '016', '017', '018', '019', '020', '021', '022', '023', '024', '025', '026', '027', '028', '029', '030', '031', '032', '033', '034', '035', '039', '040', '041', '042', '043', '044', '045', '046', '047', '048', '049', '050', '051', '052', '053', '054', '055', '056', '057', '058', '059', '060', '061', '062']
Selected collection key is 000
Ground Truth b_T_c=
[[ 0.     0.259 -0.966  0.95 ]
 [ 1.     0.     0.     0.35 ]
 [ 0.    -0.966 -0.259  0.8  ]
 [ 0.     0.     0.     1.   ]]
estimated b_T_c=
[[ 0.114  0.186 -0.976  0.912]
 [ 0.993 -0.057  0.105  0.328]
 [-0.036 -0.981 -0.191  0.79 ]
 [ 0.     0.     0.     1.   ]]
Etrans = 5.079 (mm)
Erot = 4.393 (deg)
+----------------------+-------------+---------+----------+-------------+------------+
|      Transform       | Description | Et0 [m] |  Et [m]  | Erot0 [rad] | Erot [rad] |
+----------------------+-------------+---------+----------+-------------+------------+
| world-rgb_world_link |  rgb_world  |   0.0   | 0.019314 |     0.0     |  0.077225  |
+----------------------+-------------+---------+----------+-------------+------------+
Saved json output file to /home/diogo/atom_datasets/riwmpbot_real/merged/hand_eye_tsai_rgb_world.json.

I wanted your opinion on whether these values are plausible or indicative of something not working correctly. They seem a bit high to me, but then again it is expected that ATOM yields better results...

For the record, this method yields good results in the simulated cases, so I'm inclined to believe these values are valid.
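
For context, the Etrans/Erot values above can be computed from the two 4x4 matrices roughly as in the sketch below; the exact metric used by the script may differ (e.g., in how the rotation error is reduced to a single angle):

```python
# Sketch of translation / rotation errors between a ground-truth and an
# estimated homogeneous transform.
import numpy as np

def transform_errors(T_gt, T_est):
    # Translation error: Euclidean distance between the two origins (meters).
    etrans = np.linalg.norm(T_gt[0:3, 3] - T_est[0:3, 3])

    # Rotation error: angle of the relative rotation R_gt^T * R_est (radians).
    R_delta = T_gt[0:3, 0:3].T @ T_est[0:3, 0:3]
    erot = np.arccos(np.clip((np.trace(R_delta) - 1.0) / 2.0, -1.0, 1.0))
    return etrans, erot

# etrans, erot = transform_errors(b_T_c_gt, b_T_c_est)
# print(f'Etrans = {etrans * 1000:.3f} (mm)  Erot = {np.degrees(erot):.3f} (deg)')
```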

Kazadhum avatar Jun 12 '24 10:06 Kazadhum

Hi @Kazadhum,

My validation test for algorithms is using simulated data. If the algorithm works well with simulated data, then it should be OK. If the results on real data are not as good as expected, I would say that is due to the usual problems with real data.

So, bottom line, the results seem fine to me.

miguelriemoliveira avatar Jun 12 '24 18:06 miguelriemoliveira

Hi @miguelriemoliveira! That makes sense, thanks!

In that case, all the OpenCV methods work properly and return OK results in the calibration of the real riwmpbot. I'm now debugging the Li and Shah methods...

Kazadhum avatar Jun 13 '24 08:06 Kazadhum

Hello @miguelriemoliveira and @manuelgitgomes!

I was running batch executions of the real riwmpbot calibrations using the 5 OpenCV methods and I noticed something about the process_results script.

This script assumes a certain structure for the CSV results files, which works great with the ATOM calibration but not so much with other calibrations. Namely, it doesn't work for CSV files that don't have a Collection # column or an Averages row.

So I can do two things. The first, and perhaps the most expeditious, would be to change the OpenCV calibration script to output a CSV results file which conforms to the ATOM structure, maybe with a single "Collection" and an "Averages" row. Personally, I don't think it makes sense to do this, but it is a quick fix to this specific problem.

What I feel is the preferable solution here is to rework the process_results script so it becomes agnostic to the specific structure of the CSV files.

What do you think, @miguelriemoliveira and @manuelgitgomes?

Kazadhum avatar Jun 20 '24 13:06 Kazadhum

A small correction to my previous comment: the CSV does not, in fact, need to have a row named "Averages", since we can specify the name of the needed row with a flag.

Kazadhum avatar Jun 20 '24 14:06 Kazadhum

I got it to work by adding the flag:

https://github.com/lardemua/atom/blob/f25c302efde9598f5e753f9c49d3e287129197bb/atom_batch_execution/scripts/process_results#L58-L59

I then replaced the instances of 'Collection #' in the code with this argument. This effectively works, but maybe some variable/argument renaming is warranted. I'll describe the batch execution runs and their results in a separate issue.
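
For reference, such a flag could look something like the sketch below; the actual flag name and default used in process_results may differ:

```python
# Hypothetical sketch of a flag that makes the collection column configurable,
# so results from non-ATOM calibrations can also be processed.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-ccn', '--collection_column_name', type=str, default='Collection #',
                    help='Name of the CSV column that identifies each collection/run.')
args = vars(parser.parse_args())

# ... and later, instead of hard-coding 'Collection #':
# df = df.set_index(args['collection_column_name'])
```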

Kazadhum avatar Jun 20 '24 14:06 Kazadhum

Great. Congrats!

miguelriemoliveira avatar Jun 20 '24 14:06 miguelriemoliveira

Thank you @miguelriemoliveira! In the meantime, I realized that these results aren't representative of the actual errors, since the reprojection error for the real case of the riwmpbot_real system isn't implemented in the OpenCV calibration script. What I can do is run these experiments for the simulated system, whose results we will also need. I'll work on implementing these comparisons for the real systems in the meantime.
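
When that gets implemented, the per-collection reprojection error could be computed roughly as sketched below; the variable names and the source of the estimated pose are assumptions:

```python
# Hypothetical sketch: project the known pattern corners through the estimated
# camera pose and compare with the detected corners to get an RMS reprojection
# error in pixels for one collection.
import cv2
import numpy as np

def reprojection_error(object_points, detected_px, rvec, tvec, K, D):
    """object_points -- (N, 3) pattern corner coordinates in the pattern frame
    detected_px   -- (N, 2) detected corner coordinates in the image
    rvec, tvec    -- estimated pattern-to-camera pose (Rodrigues vector + translation)
    K, D          -- camera intrinsic matrix and distortion coefficients
    """
    projected, _ = cv2.projectPoints(object_points, rvec, tvec, K, D)
    residuals = projected.reshape(-1, 2) - detected_px
    return float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))
```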

Kazadhum avatar Jun 20 '24 14:06 Kazadhum

OK, sounds good.

miguelriemoliveira avatar Jun 21 '24 08:06 miguelriemoliveira