
some questions about the experiment in real world

Open EcustBoy opened this issue 5 years ago • 3 comments

Hi~ I've read your paper in detail, and I would like to ask how you used the onboard sensors to perceive the position and velocity of the pedestrians (i.e., obstacles) around the Segway robot in the real-world experiment. I plan to do similar experiments, but I'm not sure how to design a perception module to get pedestrians' states in the real world.

EcustBoy avatar Oct 22 '20 14:10 EcustBoy

Hi there, in the paper we assumed crowd navigation takes place in an open space, i.e., there are no obstacles around the pedestrians. This setup makes tracking humans much easier. We projected the depth image onto the ground and grouped the projected points into clusters. The center of each cluster is treated as a human, and each human's velocity is calculated by subtracting their previous position from their current position.
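The pipeline described above can be sketched roughly like this. Note that the clustering radius, the greedy clustering scheme, and the function names are my own illustrative choices for this sketch, not taken from the CrowdNav code:

```python
import numpy as np

def cluster_ground_points(points, radius=0.5):
    """Greedily cluster 2-D ground-plane points (e.g. a depth image
    projected onto the floor): a point joins the first cluster whose
    centroid is within `radius` meters, otherwise starts a new one.
    Returns one centroid per cluster, i.e. one estimated human position."""
    clusters = []
    for p in points:
        for c in clusters:
            if np.linalg.norm(np.mean(c, axis=0) - p) < radius:
                c.append(p)
                break
        else:
            clusters.append([p])
    return [np.mean(c, axis=0) for c in clusters]

def estimate_velocities(prev_centers, curr_centers, dt):
    """Match each current centroid to the nearest previous centroid and
    finite-difference the positions, as in the answer above."""
    return [(c - min(prev_centers, key=lambda q: np.linalg.norm(q - c))) / dt
            for c in curr_centers]
```

A real system would need a proper data-association step (e.g. Hungarian matching) and some smoothing, but this captures the "cluster centroids + position differencing" idea.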

This is a very simplified way of calculating human position and speed. If you have a specialized sensor like a lidar, or use a more advanced detection technique, the results should be more accurate.

ChanganVR avatar Oct 24 '20 03:10 ChanganVR

@ChanganVR Hi, Changan~ Thanks for your patient answer! Actually, I still have a few doubts:

(1) The Segway robot is also moving; will this ego-motion distort the state estimation of the pedestrians? P.S. According to your answer and the explanation in your paper, I guess you estimate the position and velocity of the pedestrians relative to the robot's local coordinate frame at each moment, while the robot simultaneously localizes itself in the global coordinate frame to get its distance to the goal. Is this guess correct? :-)

(2) How do you localize the Segway robot in the global coordinate frame? Is it through a SLAM module? And did you need to manually specify the goal coordinates for the robot in your real-world experiment?

Actually, I plan to run my TurtleBot 2 in complex pedestrian environments, such as a library corridor, i.e., with both static obstacles and various dynamic pedestrians. I think I can try to get their position and velocity by semantic segmentation of the lidar points (which helps distinguish static obstacles from human bodies); maybe the results will be more accurate and robust. In short, your group's research gives me interesting inspiration about modeling human-robot interaction. Thanks for your paper and open-source code~~ :-)

EcustBoy avatar Oct 24 '20 06:10 EcustBoy

Hi @EcustBoy,

I'm really sorry that I missed your reply earlier.

This project was done quite a while ago and some of the details have become blurry to me. I just checked the code: the inputs to the robot are in the global coordinate frame, and the policy transforms the inputs to be relative to its current pose. So yes, the human positions are calculated in a global coordinate frame.
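The global-to-local transform mentioned here is the standard planar rigid-body transform. A minimal sketch (the function name and pose convention `(x, y, heading)` are my own assumptions, not the CrowdNav API):

```python
import numpy as np

def to_robot_frame(p_global, robot_pose):
    """Express a global (x, y) point in the robot's local frame,
    where robot_pose = (x, y, heading) and local +x points along
    the robot's heading."""
    x, y, theta = robot_pose
    dx, dy = p_global[0] - x, p_global[1] - y
    c, s = np.cos(theta), np.sin(theta)
    # Rotate the offset by -theta so that the heading axis becomes local x.
    return np.array([c * dx + s * dy, -s * dx + c * dy])
```

With this convention, a human standing directly along the robot's heading always comes out on the positive local x-axis, regardless of where the robot is in the global map.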

I think we didn't need to handle localization ourselves, since a module on the Segway robot already implemented that function and we basically just needed to call its API to get the current pose. And yes, the goal in my real-world experiment was always 10 meters in front of the robot w.r.t. its starting pose.
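Placing the goal "10 meters in front of the starting pose" in the global frame is then a one-liner; this helper (name and pose convention assumed, not from the CrowdNav code) shows the computation:

```python
import numpy as np

def goal_from_start(start_pose, distance=10.0):
    """Compute the global goal position `distance` meters ahead of the
    robot's starting pose (x, y, heading)."""
    x, y, theta = start_pose
    return np.array([x + distance * np.cos(theta),
                     y + distance * np.sin(theta)])
```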

I haven't dealt with scenarios that have both static obstacles and dynamic pedestrians. I wish you good luck with your project!

ChanganVR avatar Nov 04 '20 18:11 ChanganVR