
Hand-Eye Calibration for LeRobot

Open Akumar201 opened this issue 10 months ago • 7 comments

Hello, I am starting a project where I plan to use LeRobot for pick-and-place tasks using classical robotics and vision techniques. I am wondering whether anyone has experience performing hand-eye calibration for this robot. My main concern is that the high-mounted camera is usually parallel to the arm, which may make it difficult for the camera to see the ArUco marker. Does anyone have suggestions or insights on how to approach this? Thank you!

Akumar201 avatar Feb 12 '25 05:02 Akumar201

The LeRobot package implements deep-learning techniques for manipulation tasks. The typical structure is that it takes in RGB images and outputs robot actions (joint states) to perform manipulations. This removes the need for the hand-eye calibration that is typically used in classical robotics for motion planning.

One idea is to use a wide-angle lens camera to increase the FOV, though I am not sure how useful that would be in practice.

Can you share more details about your project objective?

vaishanth-rmrj avatar Feb 20 '25 23:02 vaishanth-rmrj

I am planning to use this robotic arm for different manipulation tasks, such as picking up objects, but for that I need to do hand-eye calibration. My robot will look something like this in configuration.

Image

Akumar201 avatar Feb 21 '25 00:02 Akumar201

If you are using a purely DL approach you don't need hand-eye calibration. The policy learns to output robot actions based on the camera position. As far as I've experimented, for optimal policy performance the camera pose must remain the same throughout dataset recording and evaluation; a significant change in camera pose degrades the model's performance.

You could experiment with collecting datasets under different camera poses, and the resulting policy might become more robust.

I hope researchers come up with a model invariant to camera pose changes!

vaishanth-rmrj avatar Feb 21 '25 00:02 vaishanth-rmrj

@vaishanth-rmrj Thank you for your response. My approach is not completely DL-based at the moment; I will switch to a fully DL-based approach later. For now I am planning to use the arm as a normal manipulator for pick and place. Thank you for your insights. As for a model invariant to camera pose changes, that could be done by integrating a VLM and other techniques.

Akumar201 avatar Feb 21 '25 00:02 Akumar201

I don't see why this would be an issue: "My major concern is that the high-mounted camera is usually parallel to the arm, which may make it difficult for the camera to see the Aruco marker." Even if you can only get 3 or 4 images of the ArUco marker in the robot's hand, that should be sufficient for hand-eye calibration using OpenCV. I recently did this for a different robot.

mohitydv09 avatar Mar 09 '25 04:03 mohitydv09

@mohitydv09 Thank you for your suggestion. I would like to know which hand-eye API you used. I am considering easy_handeye and MoveIt.

Akumar201 avatar Mar 09 '25 05:03 Akumar201

I used OpenCV for the calibration; this is the function documentation: https://docs.opencv.org/4.5.4/d9/d0c/group__calib3d.html#gaebfc1c9f7434196a374c382abf43439b

I didn't use ROS for my work, since I operate UR5e arms with Universal Robots' native RTDE package. That said, the OpenCV function only requires the robot poses and the camera images corresponding to those poses to do the calibration.

mohitydv09 avatar Mar 09 '25 05:03 mohitydv09

Hi, I would like to ask whether the robotic arm occluding the target object in the top camera view could affect grasping performance. My robot is currently unable to grasp the object, and I am troubleshooting the issue (I am using the ACT policy).

Avory1258 avatar Jun 23 '25 02:06 Avory1258