second.pytorch
How to detect all points?
Hi, I'm running detection on VLP-16 data; the result shown in RViz is pretty good. But I have noticed that only points with x > 0 are detected. Do you know how to make the net detect all points?
An approach that works with VoxelNet and should work with SECOND:
As far as I understand, the model is trained on the roughly 80 x 70 x 4 m area in front of the car, and inference produces labels only for objects visible in image space.
To get inference results for all points, you could use your 360° data four times: rotate the data by 0°/90°/180°/270°, then use the pre-processing step that reduces these point clouds to the part visible in the image (or just cut everything that is not within -45° to 45° of the front-view center). Run inference on all four reduced point clouds and rotate the predictions back accordingly. Together these predictions should cover all objects in the 360° point cloud; see the sketch below.
Maybe two parts (original and 180°-rotated) are enough.
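A rough sketch of that procedure, assuming a `run_inference` helper (a placeholder, not a function from this repo) that takes an (N, 4) point array and returns boxes as an (M, 7) array of [x, y, z, w, l, h, yaw]:

```python
import numpy as np

def rotate_z(xy, angle):
    """Rotate (N, 2) xy coordinates counter-clockwise by angle (radians)."""
    c, s = np.cos(angle), np.sin(angle)
    return xy @ np.array([[c, s], [-s, c]])

def detect_full_circle(points, run_inference):
    all_boxes = []
    for angle in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2):
        view = points.copy()
        # bring the sector centred at `angle` to the front of the car
        view[:, :2] = rotate_z(view[:, :2], -angle)
        # keep roughly the front sector the model was trained on (-45°..45°)
        mask = np.abs(np.arctan2(view[:, 1], view[:, 0])) < np.pi / 4
        boxes = run_inference(view[mask])
        # rotate the predictions back into the original frame
        boxes[:, :2] = rotate_z(boxes[:, :2], angle)
        boxes[:, 6] += angle
        all_boxes.append(boxes)
    return np.concatenate(all_boxes, axis=0)
```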
@johleh Thanks for your advice. That seems like a reasonable approach and I will try it.
I found that simply enlarging the following ranges in the .config file makes the net detect a bigger area:

- point_cloud_range
- anchor_ranges
- post_center_limit_range

But the result is not quite stable.
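For reference, these are the kinds of fields meant; the fragments below are in the style of the repo's protobuf-text configs, with the nesting abbreviated and the values just an illustrative symmetric extension of the default car range:

```
voxel_generator {
  point_cloud_range: [-70.4, -40, -3, 70.4, 40, 1]
}
anchor_ranges: [-70.4, -40.0, -1.00, 70.4, 40.0, -1.00]
post_center_limit_range: [-70.4, -40, -5, 70.4, 40, 5]
```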
@Oofs You are right. Note that if you use anchor_generator_stride (not recommended), you also need to change the offsets when you change the detection range.
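If you do use the stride-based generator, the shipped car configs appear to follow the pattern offset = range_min + stride / 2 (e.g. an x range starting at 0 with stride 0.4 gives offset 0.2), so recomputing the offsets for a new range might look like this sketch:

```python
def anchor_offsets(range_min, strides):
    """The first anchor centre sits half a stride inside the range on each axis."""
    return [rmin + s / 2.0 for rmin, s in zip(range_min, strides)]

# e.g. extending the x range to start at -70.4, keeping the default 0.4 strides:
print(anchor_offsets([-70.4, -40.0], [0.4, 0.4]))  # [-70.2, -39.8]
```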
In addition, the pretrained model encodes the absolute locations of voxels, so if you want to detect objects outside the camera range in KITTI, you may need to train a new model.
@traveller59 May I know why the range is [-3, 1] for the z axis ([0, -40, -3, 70.4, 40, 1] in car.config)? According to the official KITTI documentation, the z axis of the lidar coordinate system points up toward the sky. Then why is -3 needed? It seems to evaluate points below the ground.
@bigsheep2012 The distribution of car bottom-center locations lies in [-3, 1]; you can plot the distribution to check.
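For instance, a rough sketch of such a plot (not from the repo; it assumes the standard KITTI object layout with `label_2` and `calib` directories side by side, and that the label location is the box bottom-center in rectified camera coordinates, as in the KITTI devkit):

```python
import glob
import numpy as np
import matplotlib.pyplot as plt

def read_calib(path):
    """Read R0_rect and Tr_velo_to_cam from a KITTI object calib file."""
    mats = {}
    with open(path) as f:
        for line in f:
            if ':' not in line:
                continue
            key, vals = line.split(':', 1)
            mats[key] = np.array(vals.split(), dtype=np.float64)
    r0 = np.eye(4)
    r0[:3, :3] = mats['R0_rect'].reshape(3, 3)
    tr = np.eye(4)
    tr[:3, :4] = mats['Tr_velo_to_cam'].reshape(3, 4)
    return r0, tr

zs = []
for label_path in glob.glob('training/label_2/*.txt'):
    r0, tr = read_calib(label_path.replace('label_2', 'calib'))
    rect_to_velo = np.linalg.inv(r0 @ tr)  # rectified camera -> velodyne
    with open(label_path) as f:
        for line in f:
            fields = line.split()
            if fields[0] != 'Car':
                continue
            # fields 11..13: box bottom-center (x, y, z) in rectified camera coords
            loc = np.array([float(v) for v in fields[11:14]] + [1.0])
            zs.append((rect_to_velo @ loc)[2])

plt.hist(zs, bins=100)
plt.xlabel('car bottom-center z in velodyne frame [m]')
plt.show()  # the bulk should fall inside [-3, 1]
```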
The Velodyne sensor is mounted 1.73 m above the ground (on top of the car), so the ground plane sits near z = -1.73 in the lidar frame. For detection of the points behind the car, maybe you can try multiplying the x coordinates by -1, doing the detection, then transforming the boxes back.
Hi @Oofs, it is great to see that you have also tried running detections in ROS.
I ran SECOND as a ROS node on one of the KITTI sequences; the result is at youtube link and the code is at repository link. The performance is not as good as I expected, so I might have done something wrong. Could you please take a look and share any suggestions for improvement? Thank you very much.
Yuesong
@Oofs Hi, how does the network perform on 16-beam data? Thank you.
I have changed point_cloud_range to [-80, -69.12, -3, 80, 69.12, 1], but I am still not able to detect objects with x < 0. Kindly suggest a solution if anybody has found one.
@kwea123 @traveller59 @cedricxie Kindly help us with this issue.
@Oofs, how did you get detection results for x < 0? Kindly suggest which parameters I need to change.
So far I have used point_cloud_range: [-80, -69.12, -3, 80, 69.12, 1] and post_center_limit_range: [-80, -69.12, -5, 80, 69.12, 5].
Kindly help.
Don't modify the config; keep the range at [0, -40, -3, 70.4, 40, 1] (or whatever it is) as it is. Basically, just do two detections: one for x > 0 as usual. Then, to detect on x < 0, "flip" those points to the front by multiplying x by -1 (or do x *= -1 and y *= -1 to rotate them to the front), run detection on these points, and finally rotate the boxes back (you may have to write this code yourself).
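A minimal sketch of that two-pass idea, using the rotate-by-180° variant since it keeps box orientations simple; `run_detection` is a placeholder for however you invoke the net, and boxes are assumed to be [x, y, z, w, l, h, yaw]:

```python
import numpy as np

def detect_360(points, run_detection):
    """points: (N, 4) velodyne points; run_detection is assumed to return
    boxes as an (M, 7) array of [x, y, z, w, l, h, yaw]."""
    boxes_front = run_detection(points)

    # rotate everything 180° about z (x *= -1, y *= -1) so the rear
    # half lands in the x > 0 region the model was trained on
    flipped = points.copy()
    flipped[:, 0] *= -1
    flipped[:, 1] *= -1
    boxes_rear = run_detection(flipped)

    # rotate the rear boxes back: negate x, y and add pi to the yaw
    boxes_rear[:, 0] *= -1
    boxes_rear[:, 1] *= -1
    boxes_rear[:, 6] += np.pi

    return np.concatenate([boxes_front, boxes_rear], axis=0)
```

Since both passes see points near x = 0, you may get duplicate boxes around the seam; a final NMS over the merged set should take care of that.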
Ok, thanks, let me try the suggested steps.
Hi, did you manage to correctly detect objects behind the camera? Could you provide some suggestions, please?