Re-running the zau_bot e5 calibration evaluations after evaluation fix
Brief Explanation
Hello, I'm re-running the calibration evaluation process done by @JorgeFernandes-Git for the zau_bot system, since an error was discovered in the evaluation scripts: they did not take into account that the base of the robot might not be static. I'll post the results here and edit this entry as I go.
Tagging @miguelriemoliveira and @manuelgitgomes for visibility.
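For context, the gist of the fix is that the evaluation scripts can no longer read the robot base transform once and reuse it for every collection, since the base may move; the transform has to be looked up per collection. A minimal sketch of that idea (not the actual evaluation script; the transform key and field names follow the usual ATOM dataset layout, but the exact names here are my assumption):

```python
import json
import numpy as np

def base_transform_per_collection(dataset_file, tf_key="world-base_link_mb"):
    """Read the robot base transform from every collection instead of assuming
    a single static one (the bug that motivated this re-run)."""
    with open(dataset_file, "r") as f:
        dataset = json.load(f)

    per_collection = {}
    for key, collection in dataset["collections"].items():
        tf = collection["transforms"][tf_key]  # assumed key; one entry per collection
        per_collection[key] = (np.array(tf["trans"]), np.array(tf["quat"]))
    return per_collection
```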
Evaluation 1: Eye-on-hand camera to AGV camera evaluation using different datasets
Original Evaluation Results: https://github.com/JorgeFernandes-Git/zau_bot/blob/main/e5_DualRGB_arm2agv/results.md#eye-on-hand-camera-to-agv-camera-evaluation-using-different-datasets
Updated Results:
Collection # | RMS (pix) | X err (pix) | Y err (pix) | Trans (mm) | Rot (deg) |
---|---|---|---|---|---|
000 | 4.2970 | 0.3405 | 4.2550 | 8.3095 | 0.1325 |
001 | 5.1290 | 0.3997 | 5.0952 | 8.0873 | 0.0090 |
002 | 5.0359 | 0.5097 | 5.0079 | 7.9189 | 0.1865 |
003 | 4.8790 | 0.6295 | 4.8352 | 7.7104 | 0.2015 |
004 | 4.8594 | 0.6808 | 4.8072 | 7.6876 | 0.1431 |
005 | 3.7942 | 0.6030 | 3.7425 | 7.5620 | 0.1439 |
006 | 3.8092 | 0.6736 | 3.7455 | 7.6478 | 0.1906 |
007 | 3.8431 | 0.7069 | 3.7744 | 7.6142 | 0.1142 |
008 | 3.9226 | 0.5874 | 3.8658 | 7.9028 | 0.1309 |
009 | 4.1469 | 0.6288 | 4.0953 | 7.7731 | 0.0792 |
010 | 4.4025 | 0.7136 | 4.3272 | 7.7614 | 0.1315 |
011 | 4.4028 | 0.6313 | 4.3522 | 7.6897 | 0.1168 |
012 | 4.3553 | 0.7011 | 4.2928 | 7.5749 | 0.1411 |
013 | 4.3922 | 0.6872 | 4.3331 | 7.5342 | 0.1548 |
014 | 4.3820 | 0.7116 | 4.3179 | 7.6359 | 0.0983 |
015 | 4.5838 | 0.7206 | 4.5072 | 8.1908 | 0.0745 |
016 | 5.1825 | 0.7394 | 5.1223 | 8.0084 | 0.1719 |
017 | 5.3620 | 0.8752 | 5.2795 | 7.8164 | 0.1094 |
018 | 5.3566 | 0.9441 | 5.2579 | 7.7650 | 0.0446 |
019 | 5.2735 | 0.9461 | 5.1812 | 7.6135 | 0.1319 |
020 | 5.2612 | 0.9740 | 5.1635 | 7.6253 | 0.0843 |
021 | 5.2316 | 0.9561 | 5.1380 | 7.5696 | 0.0646 |
022 | 3.9125 | 0.8064 | 3.8219 | 7.7949 | 0.0670 |
023 | 3.9693 | 0.8330 | 3.8744 | 7.7122 | 0.1407 |
024 | 4.1464 | 0.3138 | 4.1233 | 7.7261 | 0.0990 |
025 | 4.0509 | 0.4438 | 4.0164 | 7.4180 | 0.1431 |
026 | 4.0850 | 0.3196 | 4.0425 | 7.4047 | 0.1513 |
027 | 3.4979 | 0.3026 | 3.4749 | 7.6225 | 0.1124 |
028 | 3.7293 | 0.4474 | 3.6680 | 8.3728 | 0.1837 |
029 | 3.5429 | 0.4787 | 3.5035 | 8.2356 | 0.1947 |
030 | 3.5938 | 0.5283 | 3.5458 | 7.7715 | 0.1805 |
031 | 3.2003 | 0.3337 | 3.1761 | 7.6778 | 0.2073 |
032 | 3.1217 | 0.4612 | 3.0770 | 7.6848 | 0.2344 |
033 | 3.1344 | 0.4805 | 3.0863 | 7.5938 | 0.2313 |
034 | 3.0524 | 0.4825 | 3.0058 | 7.3678 | 0.1688 |
035 | 3.1910 | 0.4860 | 3.1471 | 7.8962 | 0.1781 |
036 | 3.8165 | 0.6249 | 3.7613 | 7.9513 | 0.0863 |
037 | 3.7765 | 0.6817 | 3.7102 | 7.9007 | 0.0572 |
038 | 3.7423 | 0.6187 | 3.6888 | 7.7396 | 0.0700 |
039 | 3.7321 | 0.6349 | 3.6749 | 7.7103 | 0.0330 |
040 | 3.6841 | 0.5634 | 3.6383 | 7.5395 | 0.0398 |
041 | 3.7022 | 0.6644 | 3.6377 | 7.5554 | 0.0570 |
042 | 4.9808 | 1.1351 | 4.8472 | 7.8246 | 0.0343 |
043 | 4.8175 | 0.7967 | 4.7309 | 7.8087 | 0.0723 |
044 | 4.6825 | 0.7267 | 4.6222 | 7.4204 | 0.1447 |
045 | 4.7827 | 0.6340 | 4.7372 | 7.7184 | 0.0803 |
046 | 4.7652 | 0.6465 | 4.7171 | 7.7904 | 0.1585 |
047 | 4.7616 | 0.6516 | 4.7129 | 7.6877 | 0.1204 |
048 | 4.9020 | 0.5831 | 4.8634 | 8.0089 | 0.0298 |
049 | 4.8504 | 3.0159 | 3.7903 | 9.1616 | 0.2908 |
050 | 4.1035 | 0.4857 | 4.0668 | 7.8522 | 0.0727 |
051 | 4.6504 | 0.6057 | 4.6052 | 8.0160 | 0.1071 |
052 | 4.5710 | 0.4923 | 4.5366 | 8.0846 | 0.0355 |
053 | 3.5598 | 0.4147 | 3.5104 | 7.7362 | 0.0229 |
054 | 3.4683 | 0.4085 | 3.4057 | 7.5841 | 0.0786 |
055 | 4.2450 | 0.5846 | 4.1697 | 7.6405 | 0.0566 |
056 | 4.1449 | 0.4869 | 4.0876 | 7.4285 | 0.0926 |
057 | 4.1083 | 0.4781 | 4.0554 | 7.5900 | 0.0519 |
058 | 4.1687 | 0.5405 | 4.1078 | 7.5275 | 0.0646 |
059 | 4.2983 | 0.4256 | 4.2373 | 7.7021 | 0.0363 |
060 | 3.2163 | 0.2646 | 3.1942 | 7.8368 | 0.1051 |
061 | 3.2712 | 0.2891 | 3.2536 | 7.7957 | 0.0749 |
062 | 3.4669 | 0.2651 | 3.4252 | 7.9931 | 0.1429 |
063 | 3.3196 | 0.3011 | 3.3004 | 8.1507 | 0.0270 |
064 | 3.2788 | 0.3176 | 3.2439 | 8.0924 | 0.0270 |
065 | 3.1112 | 0.3533 | 3.0750 | 7.5378 | 0.1142 |
066 | 3.0896 | 0.3849 | 3.0506 | 7.5781 | 0.0420 |
067 | 3.0952 | 0.3970 | 3.0647 | 7.5289 | 0.1160 |
068 | 3.1961 | 0.2887 | 3.1781 | 7.7529 | 0.0912 |
Averages | 4.1085 | 0.6064 | 4.0400 | 7.7757 | 0.1099 |
Evaluation 2: Ground truth frame evaluations using different datasets
Original Evaluation Results: https://github.com/JorgeFernandes-Git/zau_bot/blob/main/e5_DualRGB_arm2agv/results.md#ground-truth-frame-evaluations-using-different-datasets
Updated Results:
Transformation # | X (mm) | Y (mm) | Z (mm) | Roll (deg) | Pitch (deg) | Yaw (deg) | Trans (mm) | Rot (deg) |
---|---|---|---|---|---|---|---|---|
base_link_mb-base_link | 0.2551 | 1.0701 | 7.5430 | 0.0151 | 0.0013 | 0.0028 | 7.6228 | 0.0154 |
camera_link-camera_rgb_frame | 1.4913 | 24.2156 | 6.4357 | 0.0168 | 0.0303 | 0.0481 | 25.1408 | 0.0594 |
camera_mb_link-camera_mb_rgb_frame | 0.4289 | 24.2033 | 7.9438 | 0.0048 | 0.0500 | 0.0564 | 25.4772 | 0.0755 |
Hello @Kazadhum!
Thank you for your work! What did you change from Jorge's results? Did you only re-run them with the new version of ATOM? Can you show the residuals? What about the ground truth evaluations?
I'm saying this because these results seem very bad when compared with softbot2, so I believe something might not be fully correct.
Hi @Kazadhum,
I have a feeling @manuelgitgomes is correct. This is simulated, right? We should be getting better results...
You can meet to discuss the results, and if you want I can join.
Hello @manuelgitgomes!
> Thank you for your work! What did you change from Jorge's results? Did you only re-run them with the new version of ATOM? Can you show the residuals? What about the ground truth evaluations?
Yes, I re-ran them with the new version of ATOM, which I believe is what I was supposed to do, right?
I've also re-run the ground truth evaluation and included it now, but I don't think there were any improvements in that regard.
But I think @miguelriemoliveira is right and we should probably have a quick meeting to discuss this. I'll send an e-mail to schedule it.
Hi @Kazadhum,
from my side, tomorrow after 16h is ok.
Hello @miguelriemoliveira and @manuelgitgomes! After running into 30 different problems with the zau2 system and my poorly configured calibration, I think we have a working system!
I just recorded a small bagfile, in which all of the following are moved:
- the UR arm;
- the moving base;
- the calibration pattern.
This bag file resulted in a dataset containing 14 collections. The first 5 of these were causing some issues (probably leading to convergence to a local minimum?), but running the calibration on collections 5-13 results in the following (a short note on the -csf filter follows the table):
rosrun atom_calibration calibrate -json dataset1/dataset.json -v -csf 'lambda x: int(x)>=5'
+------------+----------------+------------------+
| Collection | camera_mb [px] | hand_camera [px] |
+------------+----------------+------------------+
| 005 | 0.2195 | 0.1766 |
| 006 | 0.2296 | 0.1959 |
| 007 | 0.2319 | 0.1816 |
| 008 | 0.3233 | 0.1642 |
| 009 | 0.3278 | 0.1642 |
| 010 | 0.3034 | 0.1987 |
| 011 | 0.2286 | 0.1652 |
| 012 | 0.2308 | 0.1817 |
| 013 | 0.2624 | 0.1979 |
| Averages | 0.2619 | 0.1807 |
+------------+----------------+------------------+
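As a side note, the -csf flag takes a Python lambda (passed as a string) that receives each collection key and decides whether that collection is used. A rough sketch of the filtering idea (my own illustration, not ATOM's code; the key format is assumed to be zero-padded strings as in the tables above):

```python
# Turn the command-line string into a callable and filter the collection keys.
csf = eval("lambda x: int(x) >= 5")

collections = {f"{i:03d}": {} for i in range(14)}   # stand-in for dataset['collections']
selected = {k: v for k, v in collections.items() if csf(k)}
print(sorted(selected))   # ['005', '006', ..., '013']
```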
I'll try running the evaluation procedure now.
Running a sanity check (the train and test datasets are the same)...
rosrun atom_evaluation rgb_to_rgb_evaluation -train_json dataset1/atom_calibration.json -test_json dataset1/atom_calibration.json -ss camera_mb -st hand_camera -sfr
+--------------+-----------+-------------+-------------+------------+-----------+
| Collection # | RMS (pix) | X err (pix) | Y err (pix) | Trans (mm) | Rot (deg) |
+--------------+-----------+-------------+-------------+------------+-----------+
| 005 | 0.2677 | 0.1770 | 0.0990 | 3.6837 | 0.3008 |
| 006 | 0.3198 | 0.2204 | 0.1232 | 1.8836 | 0.2434 |
| 007 | 0.2969 | 0.1728 | 0.1703 | 0.5894 | 0.2325 |
| 008 | 0.2958 | 0.1084 | 0.2110 | 0.1949 | 0.1449 |
| 009 | 0.3154 | 0.1197 | 0.2178 | 0.4674 | 0.1778 |
| 010 | 0.2678 | 0.1310 | 0.1622 | 0.2917 | 0.1826 |
| 011 | 0.2739 | 0.1613 | 0.1503 | 0.9112 | 0.0658 |
| 012 | 0.2514 | 0.1514 | 0.1187 | 0.6424 | 0.0612 |
| 013 | 0.3163 | 0.1953 | 0.1859 | 0.6376 | 0.2350 |
| Averages | 0.2894 | 0.1597 | 0.1598 | 1.0335 | 0.1827 |
+--------------+-----------+-------------+-------------+------------+-----------+
I'm going to record a second dataset from a different bag now, to use as a test dataset.
I just ran the RGB to RGB intra-collection evaluation after recording a test dataset. Here are the results (a short note on what this evaluation measures follows the table):
rosrun atom_evaluation rgb_to_rgb_evaluation -train_json dataset1/atom_calibration.json -test_json dataset2/dataset.json -ss camera_mb -st hand_camera
@manuelgitgomes or @miguelriemoliveira, please confirm that the test dataset is supposed to be uncalibrated, to make sure I did it correctly :sweat_smile:
Errors per collection
+--------------+-----------+-------------+-------------+------------+-----------+
| Collection # | RMS (pix) | X err (pix) | Y err (pix) | Trans (mm) | Rot (deg) |
+--------------+-----------+-------------+-------------+------------+-----------+
| 000 | 0.7953 | 0.6398 | 0.3346 | 4.9284 | 0.3625 |
| 001 | 0.6268 | 0.4558 | 0.3170 | 1.7529 | 0.2633 |
| 002 | 0.7121 | 0.5008 | 0.3679 | 3.3556 | 0.5101 |
| 003 | 0.6701 | 0.4558 | 0.3312 | 5.2827 | 0.4285 |
| 004 | 0.7280 | 0.6166 | 0.2394 | 3.4674 | 0.3186 |
| 005 | 0.8179 | 0.6832 | 0.2664 | 2.7477 | 0.1613 |
| 006 | 0.8672 | 0.7471 | 0.2560 | 2.6111 | 0.3299 |
| Averages | 0.7453 | 0.5856 | 0.3018 | 3.4494 | 0.3392 |
+--------------+-----------+-------------+-------------+------------+-----------+
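For anyone reading along, my understanding of what this evaluation measures (a rough sketch, not the actual atom_evaluation implementation): the pattern corners seen by the source camera are placed in 3D, moved into the target camera frame through the calibrated transform chain, projected with the target camera's intrinsics, and compared in pixels against the corners detected by the target camera. Something like:

```python
import numpy as np

def reprojection_rms(points_3d_src, T_target_src, K_target, detected_px):
    """Project 3D pattern corners from the source camera frame into the target
    camera and compute the RMS pixel error against its detected corners."""
    pts_h = np.hstack([points_3d_src, np.ones((points_3d_src.shape[0], 1))])  # Nx4
    pts_tgt = (T_target_src @ pts_h.T)[:3, :]                                 # 3xN in target frame
    proj = K_target @ pts_tgt
    px = (proj[:2, :] / proj[2, :]).T                                         # Nx2 pixel coords
    err = np.linalg.norm(px - detected_px, axis=1)                            # per-corner error
    return float(np.sqrt((err ** 2).mean()))
```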
Running the ground truth evaluation:
rosrun atom_evaluation ground_truth_frame_evaluation -train_json dataset1/atom_calibration.json -test_json dataset2/dataset.json
+----------------------------------------+-----------+-----------+--------+--------+--------+------------+-------------+-----------+
| Transformation # | Trans (m) | Rot (rad) | X (m) | Y (m) | Z (m) | Roll (rad) | Pitch (rad) | Yaw (rad) |
+----------------------------------------+-----------+-----------+--------+--------+--------+------------+-------------+-----------+
| base_link_mb-base_link | 0.0112 | 0.0208 | 0.0037 | 0.0048 | 0.0094 | 0.0192 | 0.0068 | 0.0042 |
| camera_mb_link-camera_mb_rgb_frame | 0.0247 | 0.0019 | 0.0006 | 0.0242 | 0.0046 | 0.0001 | 0.0014 | 0.0013 |
| hand_camera_link-hand_camera_rgb_frame | 0.0228 | 0.0011 | 0.0010 | 0.0226 | 0.0026 | 0.0006 | 0.0009 | 0.0001 |
+----------------------------------------+-----------+-----------+--------+--------+--------+------------+-------------+-----------+
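To make the table above easier to read: each row compares a calibrated transform against its simulated ground truth. A minimal sketch of how I understand such per-transform errors are computed (my own illustration, not necessarily ATOM's exact formulation):

```python
import numpy as np

def transform_errors(T_est, T_gt):
    """Translation error as the norm of the position difference, rotation error
    as the angle of the relative rotation, for two 4x4 homogeneous transforms."""
    trans_err = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])                       # metres
    R_rel = T_est[:3, :3].T @ T_gt[:3, :3]
    rot_err = np.arccos(np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0))       # radians
    return trans_err, rot_err
```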
I think this looks really good! What do you think @miguelriemoliveira @manuelgitgomes?
Hi @Kazadhum ,
congrats on the work. As we discussed by phone just now, there is a problem here.
You mention that you changed the pattern's pose across the bagfile / dataset, but you used `fixed: True` in the config.yml calibration configuration.
My suggestion is that the pattern remains fixed. That's how @manuelgitgomes and @JorgeFernandes-Git carried out their calibrations.
The thing that really puzzles me is how you are getting such great results with this incorrect calibration configuration... @manuelgitgomes, do you have any explanation for this?
Hello! I agree with @miguelriemoliveira that the results should not be this good.
Maybe it is because you did not induce any noise, so the optimization started really close to the absolute minimum and simply stayed there once it converged.
Try to calibrate with induced noise and see if anything changes.
Thank you for the insight, @manuelgitgomes!
I think that makes a lot of sense! I couldn't do it today, but I'll try to re-run the calibration tomorrow evening without moving the pattern, since that's not supposed to happen. I'll run the calibration without noise first, and afterwards I'll run it with -nig 0.1 0.1.
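For context, my understanding of -nig (noisy initial guess) is that it perturbs the initial estimate of the transforms being calibrated by up to the two given amounts (translation and rotation), so the optimizer has to recover the correct values instead of starting on top of them. A rough sketch of that idea (my own illustration, not ATOM's actual code):

```python
import numpy as np

def noisy_initial_guess(trans, rpy, nig_trans=0.1, nig_rot=0.1):
    """Perturb a transform's initial guess with uniform noise bounded by the
    two -nig arguments (translation and rotation magnitudes)."""
    rng = np.random.default_rng()
    return (np.asarray(trans) + rng.uniform(-nig_trans, nig_trans, 3),
            np.asarray(rpy) + rng.uniform(-nig_rot, nig_rot, 3))
```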
@Kazadhum if you can, try to run the calibration with this dataset (the one with the pattern moving), and add noise. Then please report the results!
@manuelgitgomes, yesterday I was recording some bagfiles and overwrote the datasets I used before, so I don't have that dataset anymore :sweat_smile: I'll record a bagfile to do it properly first (without moving the pattern) and then one where the pattern moves, just to see what happens with noise.
Hi @miguelriemoliveira, @manuelgitgomes. I just recorded a bag file where the pattern doesn't move and ran the calibration. I'll post the results of the calibration here and edit this post as I go along.
Zau2 Calibration Results (pattern doesn't move)
Without noise:
Calibration:
rosrun atom_calibration calibrate -json $ATOM_DATASETS/zau2/dataset1/dataset.json -v
Errors per collection (anchored sensor, max error per sensor, not detected as "---")
+------------+----------------+------------------+
| Collection | camera_mb [px] | hand_camera [px] |
+------------+----------------+------------------+
| 001 | 0.3883 | 0.2472 |
| 002 | 0.3772 | 0.2220 |
| 003 | 0.4690 | 0.2536 |
| 004 | 0.3116 | 0.2208 |
| 005 | 0.3550 | 0.2270 |
| 006 | 0.3916 | 0.2727 |
| Averages | 0.3821 | 0.2406 |
+------------+----------------+------------------+
RGB to RGB evaluation (intra-collection):
rosrun atom_evaluation rgb_to_rgb_evaluation -train_json $ATOM_DATASETS/zau2/dataset1/atom_calibration.json -test_json $ATOM_DATASETS/zau2/dataset2/dataset.json -ss camera_mb -st hand_camera
+--------------+-----------+-------------+-------------+------------+-----------+
| Collection # | RMS (pix) | X err (pix) | Y err (pix) | Trans (mm) | Rot (deg) |
+--------------+-----------+-------------+-------------+------------+-----------+
| 000 | 0.4979 | 0.2781 | 0.2968 | 0.7561 | 0.3294 |
| 001 | 0.4691 | 0.2719 | 0.2620 | 2.6751 | 0.4242 |
| 002 | 0.5510 | 0.3214 | 0.3413 | 1.9606 | 0.2110 |
| 003 | 0.5185 | 0.3001 | 0.2840 | 3.3848 | 0.2320 |
| 004 | 0.5382 | 0.2896 | 0.3142 | 1.0882 | 0.1117 |
| Averages | 0.5149 | 0.2922 | 0.2997 | 1.9730 | 0.2617 |
+--------------+-----------+-------------+-------------+------------+-----------+
Ground Truth Evaluation:
rosrun atom_evaluation ground_truth_frame_evaluation -train_json $ATOM_DATASETS/zau2/dataset1/atom_calibration.json -test_json $ATOM_DATASETS/zau2/dataset2/dataset.json
+----------------------------------------+-----------+-----------+--------+--------+--------+------------+-------------+-----------+
| Transformation # | Trans (m) | Rot (rad) | X (m) | Y (m) | Z (m) | Roll (rad) | Pitch (rad) | Yaw (rad) |
+----------------------------------------+-----------+-----------+--------+--------+--------+------------+-------------+-----------+
| base_link_mb-base_link | 0.0133 | 0.0049 | 0.0003 | 0.0112 | 0.0070 | 0.0011 | 0.0030 | 0.0037 |
| camera_mb_link-camera_mb_rgb_frame | 0.0213 | 0.0031 | 0.0001 | 0.0210 | 0.0035 | 0.0003 | 0.0018 | 0.0025 |
| hand_camera_link-hand_camera_rgb_frame | 0.0210 | 0.0029 | 0.0019 | 0.0208 | 0.0017 | 0.0013 | 0.0005 | 0.0025 |
+----------------------------------------+-----------+-----------+--------+--------+--------+------------+-------------+-----------+
With -nig 0.1 0.1:
Calibration:
rosrun atom_calibration calibrate -json $ATOM_DATASETS/zau2/dataset1/dataset.json -v -nig 0.1 0.1
Errors per collection (anchored sensor, max error per sensor, not detected as "---")
+------------+----------------+------------------+
| Collection | camera_mb [px] | hand_camera [px] |
+------------+----------------+------------------+
| 001 | 0.3914 | 0.2559 |
| 002 | 0.3775 | 0.2246 |
| 003 | 0.4805 | 0.2731 |
| 004 | 0.3181 | 0.2410 |
| 005 | 0.3610 | 0.2162 |
| 006 | 0.3987 | 0.3614 |
| Averages | 0.3879 | 0.2620 |
+------------+----------------+------------------+
RGB to RGB Evaluation (intra-collection):
rosrun atom_evaluation rgb_to_rgb_evaluation -train_json $ATOM_DATASETS/zau2/dataset1/atom_calibration.json -test_json $ATOM_DATASETS/zau2/dataset2/dataset.json -ss camera_mb -st hand_camera
+--------------+-----------+-------------+-------------+------------+-----------+
| Collection # | RMS (pix) | X err (pix) | Y err (pix) | Trans (mm) | Rot (deg) |
+--------------+-----------+-------------+-------------+------------+-----------+
| 000 | 0.5387 | 0.2740 | 0.3431 | 0.8896 | 0.3074 |
| 001 | 0.5154 | 0.2663 | 0.3360 | 2.2507 | 0.4068 |
| 002 | 0.6029 | 0.2996 | 0.4365 | 1.5705 | 0.1984 |
| 003 | 0.5712 | 0.2822 | 0.3725 | 3.0650 | 0.2223 |
| 004 | 0.6016 | 0.2926 | 0.4169 | 1.1278 | 0.1212 |
| Averages | 0.5660 | 0.2829 | 0.3810 | 1.7807 | 0.2512 |
+--------------+-----------+-------------+-------------+------------+-----------+
Ground Truth Evaluation:
rosrun atom_evaluation ground_truth_frame_evaluation -train_json $ATOM_DATASETS/zau2/dataset1/atom_calibration.json -test_json $ATOM_DATASETS/zau2/dataset2/dataset.json
+----------------------------------------+-----------+-----------+--------+--------+--------+------------+-------------+-----------+
| Transformation # | Trans (m) | Rot (rad) | X (m) | Y (m) | Z (m) | Roll (rad) | Pitch (rad) | Yaw (rad) |
+----------------------------------------+-----------+-----------+--------+--------+--------+------------+-------------+-----------+
| base_link_mb-base_link | 0.0758 | 0.0271 | 0.0596 | 0.0345 | 0.0317 | 0.0258 | 0.0034 | 0.0078 |
| camera_mb_link-camera_mb_rgb_frame | 0.0494 | 0.0038 | 0.0001 | 0.0213 | 0.0445 | 0.0014 | 0.0012 | 0.0033 |
| hand_camera_link-hand_camera_rgb_frame | 0.0447 | 0.0045 | 0.0151 | 0.0187 | 0.0377 | 0.0029 | 0.0023 | 0.0026 |
+----------------------------------------+-----------+-----------+--------+--------+--------+------------+-------------+-----------+
What do you think of these results?
Hello @Kazadhum, these results seem optimal!
Hi @manuelgitgomes! Great! Should I record a bigger bagfile and collect a larger dataset? If so, roughly how many collections do you think would be enough?
Sorry @Kazadhum, this notification passed me by.
Usually I do it until I get tired :-) But honestly, around 30 should be enough. If you can endure it, go up to 40 or 50.
Hi @Kazadhum ,
https://lardemua.github.io/atom_documentation/procedures/#collect-data
Thank you, @miguelriemoliveira and @manuelgitgomes! I just recorded a larger bagfile and produced a dataset from it!!
I'll post the calibration results here.
Calibration
rosrun atom_calibration calibrate -json $ATOM_DATASETS/zau2/large_dataset1/dataset.json -v -nig 0.1 0.1
Errors per collection (anchored sensor, max error per sensor, not detected as "---")
+------------+----------------+------------------+
| Collection | camera_mb [px] | hand_camera [px] |
+------------+----------------+------------------+
| 000 | 0.4806 | 0.3392 |
| 001 | 0.4790 | 0.3292 |
| 002 | 0.4643 | 0.4047 |
| 003 | 0.5694 | 0.2985 |
| 004 | 0.5739 | 0.3579 |
| 005 | 0.4592 | 0.3985 |
| 006 | 0.4488 | 0.3578 |
| 008 | 0.4360 | 0.4766 |
| 009 | 0.3751 | 0.4314 |
| 010 | 0.3770 | 0.4143 |
| 011 | 0.3641 | 0.4801 |
| 012 | 0.3767 | 0.4311 |
| 014 | 0.3969 | 0.5643 |
| 017 | 0.5302 | 0.5065 |
| 018 | 0.5066 | 0.4042 |
| 019 | 0.5195 | 0.3859 |
| 020 | 0.5811 | 0.4097 |
| 022 | 0.5254 | 0.5252 |
| 023 | 0.5119 | 0.5958 |
| 025 | 0.5512 | 0.4463 |
| 026 | 0.3851 | 0.3237 |
| 028 | 0.6094 | 0.4233 |
| 029 | 0.5984 | 0.4631 |
| 030 | 0.5521 | 0.4684 |
| 031 | 0.5459 | 0.5047 |
| 032 | 0.5651 | 0.4947 |
| 033 | 0.5497 | 0.4017 |
| 034 | 0.5284 | 0.5005 |
| 035 | 0.4557 | 0.4318 |
| 036 | 0.4802 | 0.4409 |
| Averages | 0.4932 | 0.4337 |
+------------+----------------+------------------+
RGB to RGB Evaluation
rosrun atom_evaluation rgb_to_rgb_evaluation -train_json $ATOM_DATASETS/zau2/large_dataset1/atom_calibration.json -test_json $ATOM_DATASETS/zau2/test_dataset/dataset.json -ss camera_mb -st hand_camera -csf 'lambda x: int(x) not in [20,21]'
Note: I ignored collections 20 and 21 in the test dataset because I suspect they were poorly collected.
Errors per collection
+--------------+-----------+-------------+-------------+------------+-----------+
| Collection # | RMS (pix) | X err (pix) | Y err (pix) | Trans (mm) | Rot (deg) |
+--------------+-----------+-------------+-------------+------------+-----------+
| 000 | 0.8636 | 0.3304 | 0.6735 | 2.9573 | 0.0600 |
| 001 | 0.9784 | 0.2442 | 0.8372 | 3.5933 | 0.4557 |
| 002 | 0.7753 | 0.2313 | 0.6060 | 2.7694 | 0.0608 |
| 003 | 0.7767 | 0.2490 | 0.6058 | 2.4910 | 0.2158 |
| 004 | 0.8719 | 0.2943 | 0.6728 | 3.4241 | 0.1741 |
| 005 | 0.8049 | 0.2714 | 0.6738 | 3.9115 | 0.2305 |
| 006 | 0.7781 | 0.2151 | 0.6708 | 2.1987 | 0.3803 |
| 007 | 0.7415 | 0.1861 | 0.5837 | 2.0838 | 0.2834 |
| 008 | 0.7112 | 0.2028 | 0.5842 | 1.9266 | 0.5536 |
| 009 | 0.8035 | 0.2320 | 0.6432 | 2.1992 | 0.2466 |
| 010 | 0.7160 | 0.1637 | 0.5710 | 1.9474 | 0.3472 |
| 011 | 0.8222 | 0.2929 | 0.6582 | 4.0445 | 0.1465 |
| 012 | 0.7025 | 0.2270 | 0.5435 | 3.5765 | 0.5054 |
| 013 | 0.7128 | 0.2158 | 0.5694 | 2.5521 | 0.2836 |
| 014 | 0.8603 | 0.2260 | 0.7626 | 2.1438 | 0.2016 |
| 015 | 0.8177 | 0.2174 | 0.6555 | 4.1586 | 0.2802 |
| 016 | 0.8919 | 0.3236 | 0.6960 | 3.1240 | 0.1031 |
| 017 | 0.8132 | 0.2918 | 0.6770 | 2.7516 | 0.1819 |
| 018 | 0.9647 | 0.2735 | 0.8258 | 3.7152 | 0.3759 |
| 019 | 0.8474 | 0.2675 | 0.7018 | 2.0731 | 0.0851 |
| Averages | 0.8127 | 0.2478 | 0.6606 | 2.8821 | 0.2586 |
+--------------+-----------+-------------+-------------+------------+-----------+
Ground Truth Evaluation
rosrun atom_evaluation ground_truth_frame_evaluation -train_json $ATOM_DATASETS/zau2/dataset1/atom_calibration.json -test_json $ATOM_DATASETS/zau2/test_dataset/dataset.json
Errors per frame
+----------------------------------------+-----------+-----------+--------+--------+--------+------------+-------------+-----------+
| Transformation # | Trans (m) | Rot (rad) | X (m) | Y (m) | Z (m) | Roll (rad) | Pitch (rad) | Yaw (rad) |
+----------------------------------------+-----------+-----------+--------+--------+--------+------------+-------------+-----------+
| base_link_mb-base_link | 0.0758 | 0.0271 | 0.0596 | 0.0345 | 0.0317 | 0.0258 | 0.0034 | 0.0078 |
| camera_mb_link-camera_mb_rgb_frame | 0.0494 | 0.0038 | 0.0001 | 0.0213 | 0.0445 | 0.0014 | 0.0012 | 0.0033 |
| hand_camera_link-hand_camera_rgb_frame | 0.0420 | 0.0064 | 0.0172 | 0.0181 | 0.0334 | 0.0035 | 0.0043 | 0.0026 |
+----------------------------------------+-----------+-----------+--------+--------+--------+------------+-------------+-----------+
Hi @Kazadhum ,
the numbers look great! Congrats!
Hi @manuelgitgomes and @miguelriemoliveira! I just updated this comment with the evaluation results. If I'm not mistaken, this was all @manuelgitgomes needed, but correct me if I'm wrong. I'll e-mail you the .csv files with the evaluation results in the meantime.
Hello @Kazadhum! Thank you very much.
How is the situation with the real bagfile?
Hi @manuelgitgomes!
Sorry, I had forgotten about that. I'll try to run calibrations on it today.