teach-repeat
Inquiry About Specific Implementation Details in Your GitHub Project
Dear Dominic Dall'Osto,
I hope this message finds you well. I recently had the opportunity to successfully simulate your project using the Husky simulator adapted for Ubuntu 20.04, and I am truly impressed with the outcomes. However, I have encountered a few issues that I hope you could help clarify.
Issue with data_matching_jackal.launch: I noticed that line 59 in the data_matching_jackal.launch file generates an error during execution. To overcome this, I commented out the line, which allowed the simulation to run repeatedly without issues. Could you please explain the purpose of this particular line of code?
Analysis of Results: After completing the full teach and repeat process, I observed that each phase (teach and repeat) generates a folder containing images and pose information. I am interested in analyzing the performance of my simulation results against the actual outcomes. Could you guide me on how to proceed with this analysis?
Units in the correction Folder: What are the units of angle and path deviation stored in the correction folder?
Details and Units in the offset Folder: In the offset folder, could you clarify whether the terms 'position' and 'orientation' represent the location and facing direction of the autonomous vehicle's center for each frame? If so, what are their respective units?
Thank you for your time and assistance. I look forward to your guidance and continuing to learn from your excellent work.
Best regards,
Jeff
Hi Jeff,
Thanks for getting in touch! I'll try and answer all your questions:
Issue with data_matching_jackal.launch: I noticed that line 59 in the data_matching_jackal.launch file generates an error during execution. To overcome this, I commented out the line, which allowed the simulation to run repeatedly without issues. Could you please explain the purpose of this particular line of code?
This is just used if the robot's controller requires PoseStamped messages instead of the custom Goal message that is used here. If you're able to implement the controller to accept the Goal message type, you don't need this.
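If it helps, one way to bridge the gap when a controller only accepts PoseStamped is a small relay node. The sketch below is illustrative only: the Goal field layout (a pose field holding a geometry_msgs/Pose), the topic names, and the frame id are assumptions, so check them against the actual message definition and launch file before relying on it.

```python
#!/usr/bin/env python
# Minimal sketch of a Goal -> PoseStamped relay node (rospy / ROS 1).
# Assumptions: the custom Goal message exposes a geometry_msgs/Pose in a
# field called `pose`, and the topic names and frame id below are
# placeholders -- adjust them to match your setup.
import rospy
from geometry_msgs.msg import PoseStamped
from teach_repeat.msg import Goal  # adjust if the package/message name differs


def goal_callback(goal_msg):
    pose_msg = PoseStamped()
    pose_msg.header.stamp = rospy.Time.now()
    pose_msg.header.frame_id = 'odom'   # assumed frame; match your robot
    pose_msg.pose = goal_msg.pose       # assumed field name on Goal
    pose_pub.publish(pose_msg)


if __name__ == '__main__':
    rospy.init_node('goal_to_pose_stamped')
    pose_pub = rospy.Publisher('goal_pose_stamped', PoseStamped, queue_size=1)
    rospy.Subscriber('goal', Goal, goal_callback)
    rospy.spin()
```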
Analysis of Results: After completing the full teach and repeat process, I observed that each phase (teach and repeat) generates a folder containing images and pose information. I am interested in analyzing the performance of my simulation results against the actual outcomes. Could you guide me on how to proceed with this analysis?
Because you're running this in simulation you have access to the ground truth position of the robot, which lets you assess how closely the robot's repeat path matched the teach path. Make sure save_gt_data is set to true in the launch file, check that your simulation is properly publishing tf information for your robot, and the following code should run:
https://github.com/QVPR/teach-repeat/blob/a0d54ea58f6410d66d49d76db9d7ae08dc06d8d7/nodes/localiser.py#L316C1-L324C9
You can then compare these ground truth positions during the teach and repeat runs.
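For example, once you have the teach and repeat ground-truth (x, y) positions loaded into arrays (how you parse the saved pose files depends on their format, so that part is left out), a simple nearest-point deviation metric might look like this sketch:

```python
# Rough repeat-accuracy metric: for every ground-truth position in the
# repeat run, the distance to the nearest ground-truth position in the
# teach run. Only the comparison step is shown; loading the pose files
# into the two arrays is up to you.
import numpy as np


def path_deviation(teach_xy, repeat_xy):
    """Distance (metres) from each repeat position to the nearest teach position."""
    # pairwise differences: shape (n_repeat, n_teach, 2)
    diffs = repeat_xy[:, None, :] - teach_xy[None, :, :]
    dists = np.linalg.norm(diffs, axis=2)   # (n_repeat, n_teach)
    return dists.min(axis=1)                # nearest teach point per repeat point


# example usage with placeholder values
teach_xy = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
repeat_xy = np.array([[0.1, 0.05], [1.1, -0.02], [2.0, 0.08]])
errors = path_deviation(teach_xy, repeat_xy)
print('mean error: %.3f m, max error: %.3f m' % (errors.mean(), errors.max()))
```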
Another approach is to see whether (and how reliably) the robot can successfully repeat the taught path. You can see further performance analysis approaches in our paper.
Units in the correction Folder: What are the units of angle and path deviation stored in the correction folder?
The theta offset is in radians, i.e. the total rotation correction the last goal waypoint experienced.
The path offset is the factor the distance between the previous two goals was multiplied by while performing corrections, i.e. 1.0 means no correction, <1 means the distance was reduced by the correction, and >1 means the distance was increased.
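As a small worked example with made-up numbers, here is how those two values could be interpreted:

```python
import math

# Hypothetical values read from the correction folder (not real data).
theta_offset = 0.05   # radians: total rotation correction applied to the last goal
path_offset = 1.10    # multiplicative factor on the distance between the last two goals

goal_spacing = 0.5    # metres between the previous two goals (example value)

print('rotation correction: %.1f degrees' % math.degrees(theta_offset))
print('corrected goal spacing: %.2f m' % (goal_spacing * path_offset))
# path_offset > 1 -> spacing increased; < 1 -> spacing reduced; 1.0 -> unchanged
```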
Details and Units in the offset Folder: In the offset folder, could you clarify whether the terms 'position' and 'orientation' represent the location and facing direction of the autonomous vehicle's center for each frame? If so, what are their respective units?
The offset folder stores the difference in position and orientation between the goal waypoint's pose during the teach run, and during the repeat run after all the corrections were applied. Basically, this is a measure of how unreliable the robot's odometry information is during the repeat run.
If you want the pose of the robot, look in the pose folder. Note this is the robot's estimated own pose based on its odometry and vision corrections, so might contain some errors.
The ground truth pose of the robot is stored in the _map_to_base_link.txt text files if you have ground truth information available.
Position is stored in metres. I think orientation is stored as a quaternion, but if that doesn't make sense send me a sample file and I can check.
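If the orientations do turn out to be quaternions and you want a planar heading for your analysis, the standard quaternion-to-yaw conversion is the usual route; the snippet below is just that conversion, nothing project-specific:

```python
import math


def quaternion_to_yaw(qx, qy, qz, qw):
    """Yaw (rotation about z, in radians) extracted from a quaternion."""
    return math.atan2(2.0 * (qw * qz + qx * qy),
                      1.0 - 2.0 * (qy * qy + qz * qz))


# example: identity quaternion -> zero yaw
print(quaternion_to_yaw(0.0, 0.0, 0.0, 1.0))
```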
Dear Dominic Dall'Osto,
I am writing to express my sincere gratitude for the valuable advice you provided on GitHub. Your guidance has been incredibly helpful, and I was able to successfully resolve the issue I was facing, thanks to your detailed explanations and expertise.
I truly appreciate the time you took to assist me. Your support and willingness to share your knowledge have made a significant difference in the progress of my project.
Thank you once again for your help. I look forward to the possibility of collaborating or discussing related topics in the future.
Best regards,
Jeff