
Inverse Kinematics solver for iCub visuomanip

Open fedeceola opened this issue 3 years ago • 15 comments

Hi, I am trying to configure the IK solver for controlling the iCub right hand using this urdf model and the python bindings of iDynTree installed via pip (version 5.0.0). I would like to ask whether anyone has already configured it, since I need some suggestions on how to set some parameters. Specifically:

  • I tried setting the resolution mode both as InverseKinematicsTreatTargetAsConstraintFull and as InverseKinematicsTreatTargetAsConstraintNone; which one do you suggest using for iCub visuomanip?
  • Which joint qposes do you suggest setting as the initial condition for the problem? The current joint qposes, or, for instance, some other joint configuration given by a different IK solver that may violate the joint ranges?
  • Similarly, do you suggest setting the current qposes as the desired joint configuration, or some other joint configuration?

Thanks in advance!

fedeceola avatar Jul 08 '22 13:07 fedeceola

Hi @fedeceola, I do not have a lot of recent experience with full-body IK, but the answers to many of those questions depend a lot on which kind of IK problem you are trying to solve, i.e. what your targets and constraints are, which frame you want to solve for w.r.t. which other frame, etc. For example, are you using iCubGazeboV2_5_visuomanip because you want to do the IK on some finger frame?

traversaro avatar Jul 08 '22 13:07 traversaro

and the python bindings of iDynTree installed via pip (version 5.0.0).

Small curiosity: why are you using 5.0.0 instead of the latest release, 5.2.1?

traversaro avatar Jul 08 '22 13:07 traversaro

Hi @traversaro, thanks for your response! The problem that I am trying to solve is to find a suitable joint configuration to reach a target pose for the r_hand_dh_frame link with respect to the root_link, which is kept fixed. The joints that I am controlling are the seven joints of the iCub right arm and the torso yaw, without any other constraints. I am using the iCubGazeboV2_5_visuomanip model because I also need to control the iCub fingers, but this is not related to the IK problem.

I am using version 5.0.0 instead of 5.2.1 because I installed it a while ago; I can update it!

fedeceola avatar Jul 08 '22 14:07 fedeceola

Hi @fedeceola

I don't know if you have specific needs about which tool to use and in which programming language, but this problem is pretty much solved in https://github.com/robotology/icub-gazebo-grasping-sandbox.

pattacini avatar Jul 08 '22 14:07 pattacini

Hi @traversaro, thanks for your response! The problem that I am trying to solve is to find a suitable joint configuration to reach a target pose for the r_hand_dh_frame link with respect to the root_link, which is kept fixed. The joints that I am controlling are the seven joints of the iCub right arm and the torso yaw, without any other constraints. I am using the iCubGazeboV2_5_visuomanip model because I also need to control the iCub fingers, but this is not related to the IK problem.

I am using version 5.0.0 instead of 5.2.1 because I installed it a while ago; I can update it!

Ack, thanks for the clarification. I guess you have constraints that require everything to be in Python. If not, the C++ library that @pattacini used in https://github.com/robotology/icub-gazebo-grasping-sandbox may be a good solution. As for your problem, here are a few suggestions.

Probably you can start by looking at https://github.com/robotology/idyntree/blob/master/examples/cxx/InverseKinematics/iDynTreeExampleInverseKinematics.cpp . It is a problem similar to yours (i.e. fixed-base IK of a chain, as opposed to the more complex problems supported by iDynTree::InverseKinematics, in which you solve the IK of a floating-base tree). The example is in C++, but it should be relatively easy to "convert" it to Python, since the C++ and Python classes have the same names and methods. In particular, it is important to extract a 7-dof model that is relevant for your problem from the ~52-dof visuomanip model, to avoid useless computation being done during the IK. If instead you have doubts on how to use the IK from Python, you can check examples of IK use from Python in other repos (a minimal Python sketch along the same lines follows the links below), such as:

  • https://github.com/robotology/gym-ignition/blob/2fdb0e7e457bc1fc0c1db332c09fd88f702a18af/python/gym_ignition/rbd/idyntree/inverse_kinematics_nlp.py#L120
  • https://github.com/robotology/blender-robotics-utils/blob/27e40eb47a9dd85de8e7c0e53e2a4484b1313130/script/blenderRCBPanel/common_functions.py#L134
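For reference, a minimal Python sketch of the reduced-model, fixed-base setup described above. It is only a sketch: the joint names, the urdf path, and the target pose are placeholders to adapt to your setup, while the iDynTree calls mirror the C++ example and the Python examples linked above.

    import idyntree.bindings as iDynTree

    # Assumed joint names for torso yaw + right arm; adapt them to the
    # iCubGazeboV2_5_visuomanip urdf you are actually loading.
    considered_joints = [
        "torso_yaw",
        "r_shoulder_pitch", "r_shoulder_roll", "r_shoulder_yaw",
        "r_elbow", "r_wrist_prosup", "r_wrist_pitch", "r_wrist_yaw",
    ]

    # Extract a reduced model containing only the joints relevant to the IK problem.
    mdl_loader = iDynTree.ModelLoader()
    ok = mdl_loader.loadReducedModelFromFile("/path/to/iCubGazeboV2_5_visuomanip.urdf",
                                             considered_joints)
    assert ok, "Failed to load the reduced model"

    ik = iDynTree.InverseKinematics()
    ik.setModel(mdl_loader.model())
    ik.setCostTolerance(1e-8)

    # Fixed-base problem: constrain root_link to the identity transform.
    ik.setFloatingBaseOnFrameNamed("root_link")
    ik.addFrameConstraint("root_link", iDynTree.Transform.Identity())

    # Single Cartesian task on the hand frame (example target values).
    ik.setDefaultTargetResolutionMode(iDynTree.InverseKinematicsTreatTargetAsConstraintNone)
    target = iDynTree.Transform(iDynTree.Rotation.RPY(0.0, 0.0, 0.0),
                                iDynTree.Position(-0.3, 0.2, 0.1))
    ik.addTarget("r_hand_dh_frame", target)

    if ik.solve():
        base_solution = iDynTree.Transform()
        joint_solution = iDynTree.VectorDynSize(len(considered_joints))
        ik.getFullJointsSolution(base_solution, joint_solution)
        print("IK solution:", joint_solution.toString())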

I tried setting the resolution mode both as InverseKinematicsTreatTargetAsConstraintFull and as InverseKinematicsTreatTargetAsConstraintNone; which one do you suggest using for iCub visuomanip?

If you have a single task, I would go for InverseKinematicsTreatTargetAsConstraintNone. InverseKinematicsTreatTargetAsConstraintFull is typically used when you have many tasks and you want some of them to have really high priority, without resorting to choosing appropriate weights in the cost function.
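As a small, hedged sketch of the two options in Python (the frame name is just an example; the per-target call is only needed if you later want to promote a specific target to a hard constraint):

    # Single task: keep the target as a term in the cost function.
    ik.setDefaultTargetResolutionMode(iDynTree.InverseKinematicsTreatTargetAsConstraintNone)

    # With several prioritized tasks, a specific target could instead be treated
    # as a hard constraint:
    # ik.setTargetResolutionMode("r_hand_dh_frame",
    #                            iDynTree.InverseKinematicsTreatTargetAsConstraintFull)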

Which joint qposes do you suggest setting as the initial condition for the problem? The current joint qposes, or, for instance, some other joint configuration given by a different IK solver that may violate the joint ranges?

This really depends on what you have available. If you are solving the IK online and you already solved the IK at the previous time instant, then the current position is probably a good initial condition. I do not know how the optimizer behaves if you initialize it at a position that violates the constraints; you may try this and see how it behaves.
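A minimal sketch of warm-starting the solver from the current configuration, assuming q_current is an iterable with one value per considered joint:

    # Warm-start the optimizer from the current joint configuration.
    q0 = iDynTree.VectorDynSize(len(q_current))
    for i, q in enumerate(q_current):
        q0.setVal(i, q)
    ik.setFullJointsInitialCondition(iDynTree.Transform.Identity(), q0)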

Similarly, do you suggest setting the current qposes as the desired joint configuration, or some other joint configuration?

See the previous question. However, in this specific case you have 7 dofs and a 6-dof task, so most of the problem would already be solved by finding the appropriate solution that satisfies your 6-dof task, so probably any non-crazy configuration is ok.
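Similarly, a sketch of setting a low-weight desired (regularization) configuration; the zero configuration and the weight here are just placeholders:

    # Regularize the solution around a desired joint configuration with a small weight.
    q_des = iDynTree.VectorDynSize(len(considered_joints))
    q_des.zero()
    ik.setDesiredFullJointsConfiguration(q_des, 1e-2)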

traversaro avatar Jul 08 '22 15:07 traversaro

Hi @pattacini, thanks for your response! I see a couple of problems in using the grasping sandbox: I am using Python, and currently I am not using YARP. Would it be possible to somehow decouple the code in the grasping sandbox from YARP?

fedeceola avatar Jul 08 '22 15:07 fedeceola

@traversaro thanks so much for your suggestions! The examples that you suggested are the ones that I was using as a guide. In the meantime, I upgraded to the latest release and this seems to improve the performance, at least in the number of times in which it finds a solution. I am trying to set different weights/tolerances to get more precise results, and evaluating how the initial position affects the final result if some joint limits are violated.

fedeceola avatar Jul 08 '22 16:07 fedeceola

The sandbox is just a composition of several tools and was designed as a walkthrough for users to get accustomed to how one can tackle a complex task by aggregating existing modules.

As @traversaro pointed out, the Cartesian Interface is a YARP interface that allows you to control the robot in the operational space, very much like a simpler motor interface, and it's based on a C++ library called iKin (belonging to the icub-main bundle).

The approach very much depends on your final goal:

  • If you need the robot (physical or simulated) in the loop, then you ought to bring in YARP anyway. Thus, you could reuse these tools and develop your high-level code independently, leaving the IK to the off-the-shelf API, so to speak.
  • If you just need to do IK in a "robot-less" manner, maybe for training a NN, relying on iDynTree is definitely easier. However, bear in mind that:
    • To work with the "real" iCub kinematics, you have to account for several limitations enforced by the shoulder's cable length and the self-collisions due to the forearm's covers, which hinder the complete usage of the joint span. These things are already considered in iKin and the Cartesian I/F; I don't know, though, whether the iDynTree objects above account for them or whether they need to be properly configured/extended. Better off asking @traversaro.
    • You may expect different solutions from the two methods, as they may rely on different optimizers.

pattacini avatar Jul 08 '22 16:07 pattacini

@pattacini thanks for the clarification on the grasping-sandbox.

I need the robot, but I am not using Gazebo as the simulator, so in my simulated environment I don't have YARP in the loop. When I work with the real iCub, I will use YARP and I plan to use the Cartesian Interface (unless the differing solutions between the two IK solvers turn out to be a problem).

fedeceola avatar Jul 08 '22 17:07 fedeceola

Hi @traversaro, I ran some tests with the IK solver following your suggestions, but I'm still facing some problems. Specifically, the solutions that I get from the solver are not very precise: the error w.r.t. the desired Cartesian position is on the order of a few centimeters (in some cases 5-10 cm). I tried editing some parameters (e.g. the position and orientation weights when updating the target pose, or the weight of the desired joint configuration), but the results don't seem to improve. Do you think this error can be reduced?

I am uploading here a code snippet to reproduce the results that I get. It just requires the path of the iCubGazeboV2_5_visuomanip urdf model to be passed through the --urdf_model argument. idyntree_ik_snippet_reduced_model.zip

fedeceola avatar Jul 25 '22 13:07 fedeceola

Can you try to decrease the cost tolerance in:

        self.ik.setCostTolerance(1e-4)

?

traversaro avatar Jul 25 '22 13:07 traversaro

Yes, I tried reducing the value down to 1e-10, with similar results.

fedeceola avatar Jul 25 '22 13:07 fedeceola

Hi @traversaro, I ran some more tests. Here are some considerations:

  • Increasing the rotation weight improves the precision of the position in the solution proposed by the IK, while increasing the position weight barely affects the solution. As discussed with @xEnVrE and @S-Dafarra, this seems like strange behavior. However, I built iDynTree from source, modifying the code in src/inverse-kinematics/src/InverseKinematics.cpp, and the position and rotation weights are received correctly.
  • While in general the solutions are better than the ones I mentioned in my previous comments, I still have problems with some Cartesian poses, and I would like to manage these cases depending on the specific pose (e.g. requiring a more precise position or orientation).
  • I modified my script to set the position and rotation targets separately (using the updatePositionTarget and updateRotationTarget functions instead of updateTarget), with the same position and rotation weights as in the other script. However, the results that I get are different in the two cases (see the short sketch after this list).
  • I am uploading here the code snippets to reproduce the results. idyntree_ik_snippet_reduced_model.zip
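In short, the two target-update paths I am comparing look roughly like this (the weights and the frame name are example values, and target is an iDynTree.Transform):

    w_pos, w_rot = 1.0, 10.0

    # Single call: position and rotation targets updated together.
    ik.updateTarget("r_hand_dh_frame", target, w_pos, w_rot)

    # Split calls: position and rotation targets updated separately.
    ik.updatePositionTarget("r_hand_dh_frame", target.getPosition(), w_pos)
    ik.updateRotationTarget("r_hand_dh_frame", target.getRotation(), w_rot)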

fedeceola avatar Jul 28 '22 09:07 fedeceola

After some debugging with @traversaro, we noticed that the position and rotation weights are not correctly set in the cost function computation. I then found out that the problem is here: https://github.com/robotology/idyntree/blob/0dd5d7b931de86d6491bb4add009e66a4101a692/src/inverse-kinematics/src/TransformConstraint.cpp#L90 where m_posWeight should be replaced by m_rotWeight.

fedeceola avatar Aug 04 '22 07:08 fedeceola

Great catch! This is quite an important bug, let's give it the proper publicity:

The iDynTree::InverseKinematics class in iDynTree before release 6.0.1 was affected by a bug in the TransformConstraint::setRotationWeight method, in which the rotation weight was not set; instead, the position weight was assigned the value intended for the rotation weight.

fyi @robotology/iit-artificial-mechanical-intelligence

traversaro avatar Aug 04 '22 07:08 traversaro