PointRCNN
Training with 360 deg annotations and 360 deg inference
Since the KITTI dataset only provides 3D bbox labels for objects visible in cam_2, some parts of the code check whether a GT box is valid by testing if it lies within the image (due to GT-AUG), e.g. https://github.com/sshaoshuai/PointRCNN/blob/master/lib/datasets/kitti_rcnn_dataset.py#L267. However, after commenting out that valid-point selection, providing 360-degree labeled data, and extending the scope to negative z coordinates in the camera frame, I still can't get 360-degree inference results; I only get the objects in front. I'm currently going through the PointNet part, and I'd be glad to hear from anyone with experience making such a modification. Thanks!
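As a rough illustration of the change described above, the image-projection validity check could be swapped for a range-based one. This is a hedged sketch, not PointRCNN's actual code: the function name, the default ranges, and the tuple-based center are all assumptions for illustration.

```python
def is_gt_valid_360(center, x_range=(-40.0, 40.0), z_range=(-70.4, 70.4)):
    """Range-based GT validity check replacing the image-bounds test.

    `center` is the box center in the rectified camera frame (x right,
    y down, z forward); negative z means the object is behind the camera,
    so this check keeps rear objects that a cam_2 projection test drops.
    Names and default ranges are illustrative assumptions.
    """
    x, _, z = center
    return (x_range[0] <= x <= x_range[1]) and (z_range[0] <= z <= z_range[1])

# An object 20 m behind the camera now counts as valid GT,
# while one far outside the chosen range is still rejected.
keep = is_gt_valid_360((5.0, 1.5, -20.0))
drop = is_gt_valid_360((5.0, 1.5, -100.0))
```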
For anyone interested, the distance_based_proposal method https://github.com/sshaoshuai/PointRCNN/blob/master/lib/rpn/proposal_layer.py#L58 has to be updated: negative values should be added to nms_range_list, since it bins proposals by z in the camera frame. After that change I was able to get detections behind the sensor (negative z).
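To make the fix concrete, here is a toy sketch of the depth binning that nms_range_list drives. This is illustrative only, not PointRCNN's actual implementation (the real distance_based_proposal runs a separate NMS with its own proposal budget per bin); the function name and example boundaries are assumptions.

```python
import numpy as np

def split_by_z_range(boxes_z, nms_range_list):
    """Group proposal indices into depth bins bounded by nms_range_list.

    With forward-only boundaries such as [0, 40.0, 80.0], any proposal
    whose camera-frame z is negative falls into no [lo, hi) bin and is
    silently dropped; signed boundaries also cover objects behind the
    camera. Illustrative sketch, not the project's code.
    """
    bins = []
    for lo, hi in zip(nms_range_list[:-1], nms_range_list[1:]):
        mask = (boxes_z >= lo) & (boxes_z < hi)
        bins.append(np.where(mask)[0])
    return bins

z = np.array([-55.0, -10.0, 5.0, 35.0, 60.0])
bins = split_by_z_range(z, [-80.0, -40.0, 0.0, 40.0, 80.0])
# the proposal at z = -55.0 now lands in the [-80, -40) bin
# instead of being dropped by a forward-only range list
```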
Hi, thanks for your comment. I wish I had seen your post earlier. Great job! Could you make a PR to share your contributions? Although I think adding another mask, as the code suggests, should also work. @bmankirLinker
Update: just as @bmankirLinker said, adding negative values to nms_range_list worked, thanks!
Hey, what value are you using for nms_range_list? nms_range_list = [-40.0, 40.0, 80.0] should work, right?
It depends on the z range you want to cover. That list does work, but you won't get proposals farther than -40 on the negative side.
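A quick toy check (not project code; the helper is hypothetical) shows why -40 is the limit with that list: binning uses half-open [lo, hi) intervals between consecutive boundaries, so anything behind the smallest boundary falls into no bin.

```python
def covered(z, nms_range_list):
    """True if depth z falls into any [lo, hi) bin of the range list."""
    return any(lo <= z < hi
               for lo, hi in zip(nms_range_list[:-1], nms_range_list[1:]))

far_behind = covered(-55.0, [-40.0, 40.0, 80.0])          # dropped: behind -40
far_behind_wide = covered(-55.0, [-80.0, -40.0, 40.0, 80.0])  # kept with a wider list
```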
@bmankirLinker Hey, can I ask what 360-degree dataset you used?