openpi
Pi0 Inference on the Agilex Cobot Magic (Mobile Aloha)
Thank you so much for sharing the excellent codebase!
I’m currently working on a new robot, the Agilex Cobot Magic, which is another version of the Mobile Aloha. I’ve written inference code based on the aloha_real code, but I’ve noticed a significantly lower success rate compared to the ACT model (the base model for Mobile Aloha) when performing simple pick tasks.
I’ve plotted the action rollout for both ACT and Pi0:
It seems that Pi0's rollouts are more rushed than ACT's, resulting in jerkier motions and a reduced success rate. Do you have any ideas on what might be causing this issue?
Hi @kcyoung98, based on the description and plots you provided, it is not immediately evident to me what the issue is. The fact that the action chunks look sped up does suggest it might be an issue with the main observation/action loop frequency or with how you are executing the action chunk. A couple of questions I have:
- Are you using this to control the base as well? I only see 14 dimensions and not the full 16 for the platform.
- Which norm stats asset are you using?
- Did you make any changes to code outside of the environment loop and device code?
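For anyone debugging the same symptom, here is a minimal sketch of a rate-limited chunk-execution loop, assuming a 50 Hz control rate (matching the typical Aloha training data); `env` and `action_chunk` are placeholders. If each action in the chunk is sent without sleeping off the remainder of the control period, the rollout plays back faster than the policy expects and looks jerky.

```python
import time

CONTROL_HZ = 50  # assumption: must match the frequency the policy was trained at

def execute_chunk(env, action_chunk):
    """Execute one action chunk at a fixed wall-clock rate."""
    period = 1.0 / CONTROL_HZ
    for action in action_chunk:
        start = time.time()
        env.step(action)  # send one action to the robot
        # Sleep off whatever is left of the period so each action occupies
        # exactly 1/CONTROL_HZ seconds of wall-clock time.
        time.sleep(max(0.0, period - (time.time() - start)))
```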
Can I get your email? I'm running into the same issue.
Hi, I am also working on the Agilex Cobot Magic. Could I get your version of the inference code?
+1
@Michael-Equi Thank you for replying.

- Are you using this to control the base as well? No, the base is fixed; I don't control it, which is why only 14 dimensions appear.
- Which norm stats asset are you using? Originally I used the pi0 base model's stats, but after realizing the Agilex Mobile Aloha differs from the Trossen Aloha, I computed norm stats from my own Agilex Mobile Aloha data.
- Did you make any changes to code outside of the environment loop and device code? In aloha_policy I disabled the joint flip (commented out `state = _joint_flip_mask() * state`) and changed `_gripper_to_angular` so the gripper is simply normalized over 0 to 0.07, the range of the Agilex Piper arm's gripper (sketched below).

My remaining problem is that the joint positions seem wrong: Pi0 overshoots compared to the ACT model.
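For readers following along, here is a minimal sketch of the two changes described above. It is an illustration rather than the exact code: `PIPER_GRIPPER_MAX` and the function name are assumptions, with the 0.07 m travel taken from the post above.

```python
import numpy as np

# Assumption: full travel of the Agilex Piper gripper, per the post above.
PIPER_GRIPPER_MAX = 0.07  # metres

def piper_gripper_to_normalized(value: np.ndarray) -> np.ndarray:
    # Replaces the Trossen-specific _gripper_to_angular linkage conversion:
    # the Piper gripper opening is scaled linearly into [0, 1].
    return np.clip(value / PIPER_GRIPPER_MAX, 0.0, 1.0)

# The joint flip in aloha_policy is disabled by commenting out:
#   state = _joint_flip_mask() * state
# (the post above reports doing this for the Piper arms).
```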
I’ve also developed an inference pipeline for the Agilex Cobot Magic robot that supports running policies remotely from a server, and I’m more than happy to share it—feel free to reach out to me.
However, during data collection, we noticed that the older version of the Agilex Cobot Magic uses an Orbbec camera, which seems unable to meet the required frame rate of 50 FPS. It appears the maximum achievable frame rate is only around 30 FPS.
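As an aside, it is worth measuring the frame rate a camera actually delivers rather than trusting the spec sheet. A quick check with OpenCV (the device index is an assumption; adjust for your setup):

```python
import time
import cv2

cap = cv2.VideoCapture(0)  # assumption: device index depends on your setup
frames, t0 = 0, time.time()
while time.time() - t0 < 5.0:  # sample for five seconds
    ok, _ = cap.read()
    frames += int(ok)
cap.release()
print(f"measured FPS: {frames / (time.time() - t0):.1f}")
```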
Hi, have you changed aloha_policy.py to adapt it to the Agilex Cobot Magic? I've noticed some Trossen-specific parameters, e.g. arm_length and horn_radius, the gripper normalization range, and _joint_flip_mask.
This seems to require querying the URDF model. Due to hardware constraints, we haven't proceeded with fine-tuning the model yet.
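For context, `arm_length` and `horn_radius` describe the geometry of the Trossen gripper's servo linkage, and the conversion in aloha_policy.py inverts that linkage to recover a servo angle from the measured linear opening. A rough sketch of the idea (an illustration, not the exact upstream code):

```python
import numpy as np

def linear_to_radian(linear_position, arm_length, horn_radius):
    # Triangle formed by the servo horn and the linkage rod: recover the
    # horn angle from the measured linear opening of the gripper.
    ratio = (horn_radius**2 + linear_position**2 - arm_length**2) / (
        2.0 * horn_radius * linear_position
    )
    return np.arcsin(np.clip(ratio, -1.0, 1.0))
```

If the Piper gripper reports its opening directly, a plain linear normalization (as described earlier in the thread) should be enough, with no URDF query needed.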
Here is the modified version of the code adapted for the Agilex Cobot Magic. I hope it’s helpful for your work: openpi_aloha_AgileX.zip
Thanks for your reply.
Hi! Have you deployed inference on the AgileX Cobot Magic? I modified aloha_policy.py with some Piper constants, but the grippers stay closed during inference.
You only need to modify the code in the aloha_real subfolder. Follow the instructions in the README.md to collect data and fine-tune the model — that should be enough to make it work.
https://github.com/user-attachments/assets/6b39aeec-254a-4f89-a1d9-71262fa7a9e1
I'm happy to share my results.
Thank you for your reply!!! The problem was caused by the gripper normalization. Thank you for sharing your code; it works now!
https://github.com/user-attachments/assets/9b265b8a-455d-4929-aaa0-dad0915a5b89
Hi @HITSZ-Robotics, Thanks for sharing your videos. I'm trying to assess how difficult it is to set up pi0 on this robot compared to the Trossen Mobile Aloha. What were your major difficulties in data collection and pi0 fine-tuning?
Your tips would be greatly appreciated! Thanks again.
@derektan95
I believe that compared to the Trossen mobile version of Aloha, the control interface of the AgileX Cobot Magic's arms is simpler, or rather more low-level, as it communicates only via ROS topics. Most of my code modifications were made in robot_utils.py. On the Trossen platform, the enable/disable functionality is already well encapsulated and can be invoked through dedicated function interfaces, and as you know, the official repository only provides full deployment code for that specific robot model. So during data collection and Pi0 fine-tuning, replacing the Trossen-specific control interfaces with this robot's can be a bit of a hassle. Since fine-tuning is data-driven and the data format is the same for both robots, the fine-tuning process itself is no different.
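To make the "topics only" point concrete, a command publisher on the Cobot Magic might look like the sketch below. The topic name and message layout are assumptions; check `rostopic list` on the robot for the actual interface.

```python
import rospy
from sensor_msgs.msg import JointState

rospy.init_node("pi0_action_bridge")
# Assumption: the actual topic name depends on the robot's launch files.
left_arm_pub = rospy.Publisher("/master/joint_left", JointState, queue_size=1)

def send_left_arm(positions):
    """Publish one target for the left Piper arm (6 joints + 1 gripper)."""
    msg = JointState()
    msg.header.stamp = rospy.Time.now()
    msg.position = list(positions)
    left_arm_pub.publish(msg)
```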
Hope the above is helpful for your research!
Hi @EmbodiedMind,
Thanks for the tips; it sounds like you are very experienced with both robots. May I ask whether you have personally tried the Trossen Mobile Aloha as well?
In your experience, how many hours of demonstration data were required to successfully fine-tune pi0 on the robot (e.g. the cloth-folding task you showed in the video above)? Is it normal for the arm to oscillate so much when folding the cloth?
I’ve only used the Cobot Magic robot in our lab. As for the Trossen Mobile Aloha, my understanding of its operation details comes from analyzing the code rather than hands-on experience.
For the pi0 fine-tuning task, I used 63 demonstrations, each with 500 frames (captured at 19 FPS due to hardware constraints). The oscillation during the cloth-folding task was mainly caused by the robot's mechanical structure not being firmly fixed and the joint command data not being very smooth.
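For scale, that is 63 × 500 ≈ 31,500 frames in total; at 19 FPS each demonstration lasts about 26 seconds, so the whole dataset comes to roughly 28 minutes of demonstration data.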
Thanks for the tips! I will be sure to give this a shot and update you.
Hi~ Could you please share the Agilex Cobot Magic USD file with me? Thank you very much!
Hi, did you use the start_ms_piper.launch file to start the robot arms instead of ros_nodes.launch?
Hi there, we encountered the same issue as well. Could you please point out specifically how you resolved it?
Sorry for replying so late, I just saw the email. I commented out the normalization of the grippers' states and actions, and then it worked:

1. `qpos[6] = constants.PUPPET_GRIPPER_POSITION_NORMALIZE_FN(qpos[6])`
   `qpos[13] = constants.PUPPET_GRIPPER_POSITION_NORMALIZE_FN(qpos[13])`
   `qvel[6] = constants.PUPPET_GRIPPER_POSITION_NORMALIZE_FN(qvel[6])`
   `qvel[13] = constants.PUPPET_GRIPPER_POSITION_NORMALIZE_FN(qvel[13])`
2. `left_arm_target[-1] = constants.PUPPET_GRIPPER_JOINT_UNNORMALIZE_FN(left_arm_target[-1])`
   `right_arm_target[-1] = constants.PUPPET_GRIPPER_JOINT_UNNORMALIZE_FN(right_arm_target[-1])`
3. `action[6] = constants.MASTER_GRIPPER_JOINT_NORMALIZE_FN(master_bot_left.dxl.joint_states.position[6])`
   `action[7 + 6] = constants.MASTER_GRIPPER_JOINT_NORMALIZE_FN(master_bot_right.dxl.joint_states.position[6])`

Comment out the lines above, or maybe just some of them; sorry, it's been so long that I can't quite recall, and I'm not beside the robot right now. Try commenting them out; if that doesn't work, I'd be happy to look up the code on our robot.
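In other words, the fix lets the raw Piper gripper values pass through untouched. A sketch of what the edited observation code might look like, with helper names following the Aloha-style snippets above (they may differ in your copy):

```python
def get_qpos(robot_observation):
    # Raw joint positions for both arms; indices 6 and 13 are the grippers.
    qpos = robot_observation.qpos.copy()
    # Trossen-style gripper normalization commented out so the Piper gripper
    # values flow through unchanged:
    # qpos[6] = constants.PUPPET_GRIPPER_POSITION_NORMALIZE_FN(qpos[6])
    # qpos[13] = constants.PUPPET_GRIPPER_POSITION_NORMALIZE_FN(qpos[13])
    return qpos
```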
Hello, thank you very much for the files you provided. I have some questions I'd like to ask you. Could I have your email address or other contact information?
My email: [email protected]