motoman
point-streaming: further development based on PR #88
I tried to address some of the comments made by @shaun-edwards in PR #88 (created by @jim-rees):

- Comment 1: Made `JointTrajectoryInterface::jointCommandCB()` pure virtual so that the `JointTrajectoryInterface::jointCommandCB()` implementation can be deleted.
- Comment 2: Message was changed to `ROS_DEBUG`.
- Comment 3: Not sure what the point of that sleep call is here, so not sure what the constant would be called. No change made.
- Comment 4: It is the same reconnect logic as in `case TransferStates::STREAMING`. I am assuming that it was added to be consistent with `case TransferStates::STREAMING`. Therefore, no change made.
- Comment 5: see Comment 4.
As to why the state `TransferStates::POINT_STREAMING` comes up multiple times, I have no idea. I was wondering the same thing. Obviously, there must be a good reason for having a class `MotomanJointTrajectoryStreamer` that extends `JointTrajectoryStreamer`, but I don't really understand it at this point.
I have tested the code on an sia5 manipulator and found that commanded constant-velocity motions are carried out very smoothly. I also teleoperated the manipulator using a 6 DOF haptic interface with this point-streaming implementation and also experienced very smooth end-effector motion.
Thanks @tdl-tbslab for getting this up-to-date again. I'll see if I can test this out sometime soon.
@ted-miller: are there any things in here that catch your eye?
@alemme, @andreaskoepf any comments? This sounds similar to what you described on ROS Discourse, sans the trapezoidal velocity generator (and possibly some other things you didn't mention).
@gavanderhoorn The approach of this PR and other point-streaming implementations seems to be dynamic online trajectory generation. The main difference to our TVP controller, which runs inside the real-time OS of the controller, is that we operate setpoint-based and the setpoint can be changed in every cycle; i.e., it does not continue the move to a previous target but immediately reacts to the next target, which especially allows us to minimize latency. See my joystick video.
This has been suggested by multiple people. I'm inclined to accept it as is. Any thoughts @gavanderhoorn, @ted-miller, @alemme, @andreaskoepf?
I'll wait a couple of days, otherwise, I'll merge.
I've added a few more comments (apologies, I should've done a review instead).
Overall question: does this support multi-group systems?
Thanks for all the revisions @gavanderhoorn. I will revise the code accordingly and add a commit.
As for multi-group support: I don't know. I am not too familiar with how the driver handles multiple manipulators. But @jim-rees comments the following in PR #88: "I've had a report that this does not work with dual arm robots, because they require motoman_msgs/DynamicJointTrajectory messages which this doesn't implement."
Thanks for iterating on this @tdl-tbslab. Much appreciated.
re: multi-group: yes, the DynamicJointPoint would be needed. It should not be too hard to add, but we might not want to complicate things in this PR.
I am assuming that multi-group refers to multi-arm setups such as the SDA5. Unfortunately, we do not have such a dual arm setup in our lab, so I would probably not be able to implement that feature.
> I am assuming that multi-group refers to multi-arm setups such as the SDA5.

Not necessarily. Work cells with a single arm on a linear track, or an arm with a 2-axis positioner, can also be set up as multi-group systems.
The SDA is just a clever way to use this to create a 'humanoid-like' torso + arms setup.
But as I wrote above: let's not complicate this PR.
@gavanderhoorn, is this good to go?
ping @gavanderhoorn
All, I am willing to update this PR with all the latest kinetic-devel commits and test on our GP7. I did have a few additions that I think we should add (see comments in PR commits).
Thoughts on trying to get this tested, updated with the latest `kinetic-devel`, and the least confusing way to update this PR?
Hi @dambrosio, thanks for the reviews. I'll have a look at them later tonight and might have time to take care of this on Wednesday, including the `kinetic-devel` update and incorporating the suggested modifications. I'll then add a commit. I think that should be the least messy way of adding your modifications to this PR.
Sounds good. I will be testing this on a GP7 this afternoon and will let you know how things go.
Quick update: I tried these changes yesterday and ran into issues with MotoROS reply messages in response to sending a single trajectory point. Here is the ROS console output:

```
[ WARN] [1549936100.819839659]: Motoman robot is now enabled and will accept motion commands.
[ INFO] [1549936105.338233468]: First joint point received. Starting on-the-fly streaming.
[ WARN] [1549936105.410549892]: Reply String: Invalid message, Sub string: Trajectory start position doesn't match current robot position
[ERROR] [1549936105.410620147]: Aborting point stream operation. Failed to send point (#0): Invalid message (3) : Trajectory start position doesn't match current robot position (3011)
```

The `trajectory_msgs/JointTrajectory` message contains a single `JointTrajectoryPoint` that has different joint positions from those being reported on the `joint_states` topic.
I was thinking of updating the POINT_STREAMING mode to process two trajectory points in `jointCommandCB()`, where the first contains the current robot joint positions and the second the desired joint positions. However, I realized that these suggested updates would almost replicate the functionality already provided by the STREAMING mode.
Update: Seems like the original PR was tested using this node, I'll try to run with it and see if I get similar errors to what I described above.
Update 2/14/19: Yesterday I managed to get point streaming to work on our GP7 manipulator. The jogger node used two very important (yet masked) steps in getting the streamed points successfully propagated through MotoROS:

- The first trajectory point (let's call this the start point) must contain the current joint positions with all-zero velocity values.
- Each following point must contain non-zero velocity values for each position, and its `time_from_start` value must be greater than that of the previous point.

Item 2 is extremely important. I believe the start point should have a `time_from_start` equal to zero, and each point should increase from there. For example: start pt - 0.0 s, pt 1 - 0.02 s, pt 2 - 0.04 s, etc. The times in the example are just for reference, not actual values.
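The two requirements above can be sketched in plain Python. This is a hedged illustration only: `JointPoint` and `build_stream` are made-up stand-ins for `trajectory_msgs/JointTrajectoryPoint` and the publishing node's logic, not part of the driver.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class JointPoint:
    # Stand-in for trajectory_msgs/JointTrajectoryPoint (illustrative only)
    positions: List[float]
    velocities: List[float]
    time_from_start: float  # seconds

def build_stream(current: List[float],
                 targets: List[List[float]],
                 dt: float) -> List[JointPoint]:
    """Build a point sequence satisfying the two streaming rules above:
    a start point at the current positions with all-zero velocities and
    time_from_start == 0, followed by points with non-zero velocities and
    strictly increasing time_from_start."""
    pts = [JointPoint(list(current), [0.0] * len(current), 0.0)]
    prev = current
    for i, tgt in enumerate(targets, start=1):
        # Finite-difference velocity estimate over one streaming period
        vel = [(b - a) / dt for a, b in zip(prev, tgt)]
        pts.append(JointPoint(list(tgt), vel, i * dt))
        prev = tgt
    return pts
```

A publisher node would then convert each `JointPoint` into a real `JointTrajectoryPoint` and publish at the intended period.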
After minimal testing, it was found that point streaming can successfully operate at around 50 Hz max. If the user is not careful and sends `joint_command` messages faster than this, the internal point-streaming queue can grow without bound. It is probably a good idea to check for a maximum queue size and either drop messages that would cause the queue to grow past this maximum, or error out and transition back to IDLE.
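One way to enforce such a bound is sketched below. This is not the driver's code: the class name, the drop-newest policy, and the size of 20 are all assumptions made for illustration.

```python
from collections import deque

MAX_QUEUE_SIZE = 20  # hypothetical bound; would need tuning for the controller

class BoundedPointQueue:
    """Drop-newest policy: refuse points that would grow the queue past the
    bound, so the streaming thread never falls unboundedly behind the
    publisher. An alternative policy would be to error and go back to IDLE."""

    def __init__(self, max_size: int = MAX_QUEUE_SIZE):
        self._q = deque()
        self._max = max_size
        self.dropped = 0  # count of refused points, useful for diagnostics

    def push(self, point) -> bool:
        if len(self._q) >= self._max:
            self.dropped += 1
            return False  # caller can warn the user or abort streaming
        self._q.append(point)
        return True

    def pop(self):
        return self._q.popleft() if self._q else None
```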
@tdl-ua I have added checks to help the user follow these requirements when using point streaming mode. I will push to my forked branch and give you a link. If you would like to use any changes I have made, please feel free.
@dambrosio I've added you as a collaborator to the tbs-ualberta fork now so that you can push commits to the PR directly, as I currently don't have time to work on this. Feel free to merge point-streaming with `kinetic-devel` and add your changes. Once I have the chance, I'll also test the updated PR on our SIA5.
I'll be able to take a look at all of this next week. I will merge and add any of my changes for review. If I have time I will try and provide a mechanism you can use to do tool tip velocity control using the new POINT_STREAM state.
I've developed something in the past that does essentially velocity control in Cartesian space (basically end-effector teleoperation): https://github.com/tbs-ualberta/manipulator_pose_following, where I think I've addressed some of the issues you mention above.
The package is a work in progress, as obtaining the IK still needs to be improved and the code needs to be made more manipulator-agnostic, but it is working.
@tdl-ua quick FYI, this branch should be good for testing.
@dambrosio I'll see if I can test it tonight.
@dambrosio I've finally had a chance to test your commits on our SIA5. I tried to jog the EEF in Cartesian space and did not run into any issues.
I saw that you added checks on streaming queue bounds and `time_from_start` increase. I agree that those checks are important to help the user follow the point-streaming requirements. We could also calculate the frequency of incoming points from the difference between the `time_from_start` values of two subsequent points and stop streaming with an error message if the frequency exceeds 50 Hz. The feedback to the user would then be much more informative.
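That frequency check could look roughly like the sketch below. The function name is made up, and the 50 Hz limit is the empirical value reported earlier in this thread, not a documented MotoROS constant.

```python
MAX_STREAM_RATE_HZ = 50.0  # empirical limit from the GP7 testing above

def check_point_rate(prev_tfs: float, curr_tfs: float) -> float:
    """Return the streaming rate implied by two consecutive time_from_start
    values, raising a descriptive error if time_from_start does not strictly
    increase or the implied rate exceeds the limit."""
    dt = curr_tfs - prev_tfs
    if dt <= 0.0:
        raise ValueError("time_from_start must strictly increase between points")
    rate = 1.0 / dt
    if rate > MAX_STREAM_RATE_HZ:
        raise ValueError(
            f"point rate {rate:.1f} Hz exceeds {MAX_STREAM_RATE_HZ:.0f} Hz limit")
    return rate
```

Raising with the computed rate in the message is what makes the feedback informative: the user sees exactly how fast they were publishing.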
Please see my comments about latency and jitter in #252.
Hi, following #555: I want to use this PR, as @gavanderhoorn suggested it might work.
I have an HC10DTP + YRC1000micro controller. We're currently on ROS Noetic (Ubuntu 20.04).
Will this work? What adjustments do I need to make to enable point streaming with my controller?