second.pytorch

How to detect all points?

Open Oofs opened this issue 6 years ago • 15 comments

Hi, I run detection on VLP-16 data; the result, shown in RViz, is pretty good. But I have noticed that only points with x > 0 are detected. Do you know how to make the net detect all points?

car_dect

Oofs avatar Oct 17 '18 08:10 Oofs

An approach that works with VoxelNet and should work with SECOND:

As far as I understand, the model is trained on the 80 x 70 x 4 area in front of the car, and inference produces labels only for objects visible in image space.

To get inference results for all points, you could use your 360° data four times: rotate the data by 0°/90°/180°/270°, then use the pre-processing step that reduces these point clouds to the part visible in the image (or just cut everything that is not within -45° to 45° of the front-view center). Run inference on all four reduced point clouds and rotate the predictions back accordingly. Together these predictions should cover all objects in the 360° point cloud.

Maybe two parts (original and 180° rotated) are enough.

johleh avatar Oct 17 '18 15:10 johleh
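The rotate-crop-detect-unrotate loop described above could be sketched like this (a minimal NumPy sketch; `run_inference` is a hypothetical stand-in for the SECOND inference call, assumed to return (M, 7) boxes [x, y, z, w, l, h, yaw] in lidar coordinates; yaw normalization to [-pi, pi] is left out):

```python
import numpy as np

def rotate_z(arr, angle):
    """Rotate the x/y columns of an (N, >=2) array about the z axis."""
    c, s = np.cos(angle), np.sin(angle)
    out = arr.copy()
    out[:, 0] = c * arr[:, 0] - s * arr[:, 1]
    out[:, 1] = s * arr[:, 0] + c * arr[:, 1]
    return out

def front_wedge(points, half_angle=np.pi / 4):
    """Keep only points within +/-45 deg of the +x axis (the trained FOV)."""
    angles = np.arctan2(points[:, 1], points[:, 0])
    return points[np.abs(angles) <= half_angle]

def detect_360(points, run_inference):
    """Run the front-view detector four times to cover 360 degrees."""
    all_boxes = []
    for k in range(4):
        angle = k * np.pi / 2
        rotated = rotate_z(points, -angle)      # bring sector k to the front
        boxes = run_inference(front_wedge(rotated))
        if len(boxes):
            boxes = rotate_z(boxes, angle)      # rotate box centers back
            boxes[:, 6] += angle                # and the headings
            all_boxes.append(boxes)
    return np.concatenate(all_boxes) if all_boxes else np.zeros((0, 7))
```

Overlapping detections near the ±45° sector boundaries would still need NMS across the merged result.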

@johleh thanks for your advice. That seems a reasonable approach and I will try it. I also found that simply changing the following ranges in the .config file to the desired values makes the net detect a bigger area: point_cloud_range, anchor_ranges, post_center_limit_range. But the result is not quite stable.

car_detect

Oofs avatar Oct 18 '18 01:10 Oofs
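For reference, the three range fields mentioned above look roughly like this in car.config (the symmetric values here are an illustrative variant covering x < 0, not the shipped defaults, and the exact nesting in the file may differ by version):

```
# default front-only range in car.config:
#   point_cloud_range: [0, -40, -3, 70.4, 40, 1]
# illustrative symmetric variant covering x < 0 as well:
point_cloud_range: [-70.4, -40, -3, 70.4, 40, 1]
anchor_ranges: [-70.4, -40, -1.78, 70.4, 40, -1.78]
post_center_limit_range: [-70.4, -40, -5, 70.4, 40, 5]
```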

@Oofs you are right. Note that if you use anchor_generator_stride (not recommended), you need to change the offset when you change the detection range. In addition, the pretrained model encodes the absolute location of voxels, so if you want to detect objects outside the camera range in KITTI, you may need to train a new model.

traveller59 avatar Oct 18 '18 05:10 traveller59

@traveller59 May I ask why the range is [-3, 1] on the z axis ([0, -40, -3, 70.4, 40, 1] in car.config)? According to the official KITTI documentation, the z axis of the lidar coordinate frame points up toward the sky. Then why is -3 needed, which seems to include points below the ground?

bigsheep2012 avatar Oct 19 '18 07:10 bigsheep2012

@bigsheep2012 The distribution of car bottom-center locations lies in [-3, 1]; you can plot the distribution yourself.

traveller59 avatar Oct 23 '18 13:10 traveller59
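One way to check this is to histogram the bottom-center z of the ground-truth boxes (a minimal sketch; the `gt_boxes` array here is made-up illustrative data, assumed already converted to lidar coordinates as [x, y, z, w, l, h, yaw] with z the box bottom-center height):

```python
import numpy as np

# Made-up (N, 7) ground-truth boxes [x, y, z, w, l, h, yaw] in lidar
# coordinates; column 2 is the bottom-center height of each box.
gt_boxes = np.array([
    [20.0,  5.0, -1.6, 1.6, 3.9, 1.56, 0.0],
    [35.0, -8.0, -1.8, 1.6, 3.9, 1.56, 0.1],
    [50.0,  2.0, -1.4, 1.6, 3.9, 1.56, 0.0],
])

# Histogram the bottom-center heights over the [-3, 1] z range.
counts, edges = np.histogram(gt_boxes[:, 2], bins=8, range=(-3.0, 1.0))
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"[{lo:+.2f}, {hi:+.2f}): {'#' * n}")
```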

The Velodyne sensor sits 1.73 m above the ground (on top of the car), which is why box bottom centers cluster around z = -1.73. For detecting points behind the car, maybe you can try multiplying the x coordinates by -1, run the detection, then transform the boxes back.

kwea123 avatar Nov 07 '18 14:11 kwea123

Hi @Oofs, it is great to see that you have also tried running detection in ROS.

I ran SECOND as a ROS node on one of the KITTI sequences; the results are at youtube link, and the code is at repository link. I feel the performance is not as good as expected and I might have done something wrong. Could you please check whether you have any suggestions for improvement? Thank you very much.

Yuesong

cedricxie avatar Nov 15 '18 18:11 cedricxie

@Oofs Hi, how does the network perform on 16-beam data? Thank you.

kwea123 avatar Nov 16 '18 09:11 kwea123

I have changed point_cloud_range to [-80, -69.12, -3, 80, 69.12, 1], but I am still not able to detect objects with x < 0. Kindly suggest a solution if anybody has found one...

chowkamlee81 avatar Mar 05 '19 12:03 chowkamlee81

@kwea123 @traveller59 @cedricxie kindly help us with this issue.

chowkamlee81 avatar Mar 06 '19 11:03 chowkamlee81

@Oofs, how did you get detection results for x < 0? Kindly suggest which parameters I need to change.

So far I have used point_cloud_range: [-80, -69.12, -3, 80, 69.12, 1] and post_center_limit_range: [-80, -69.12, -5, 80, 69.12, 5].

Kindly help.

chowkamlee81 avatar Mar 06 '19 11:03 chowkamlee81

Don't modify the config; leave the range at [0, -40, -3, 70.4, 40, 1] (or whatever it is). Basically just do two detections, one for x > 0. To do detection on x < 0, "flip" those points to the front by multiplying x by -1 (or do x *= -1 and y *= -1 to rotate them to the front), run detection on these points, and finally rotate the resulting boxes back (you may have to write this code yourself).

kwea123 avatar Mar 06 '19 12:03 kwea123
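The two-pass suggestion above could be sketched like this (NumPy; `run_inference` is a hypothetical stand-in for the detector, assumed to return (M, 7) boxes [x, y, z, w, l, h, yaw]; this uses the x *= -1, y *= -1 variant, i.e. a 180° rotation rather than a mirror):

```python
import numpy as np

def detect_rear(points, run_inference):
    """Detect objects behind the car with an unmodified front-view model:
    rotate the cloud 180 degrees, detect, rotate the boxes back."""
    rear = points.copy()
    rear[:, 0] *= -1                 # x *= -1
    rear[:, 1] *= -1                 # y *= -1 -> together, a 180 deg rotation
    boxes = run_inference(rear).copy()
    boxes[:, 0] *= -1                # rotate box centers back
    boxes[:, 1] *= -1
    boxes[:, 6] += np.pi             # headings turn 180 deg as well
    return boxes
```

A pure mirror (x *= -1 only) would also work for the points, but the predicted yaw then maps to pi - yaw instead of yaw + pi, which makes the 180° rotation the less error-prone choice.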


Ok thanks let me try with the suggested step..

chowkamlee81 avatar Mar 06 '19 14:03 chowkamlee81


Hi, have you managed to correctly detect the objects behind the camera? Could you provide some suggestions please?

dingfuzhou avatar May 07 '19 09:05 dingfuzhou