mmdetection3d
Question: Inference alters range & rotation
The demo code for LiDAR-only inference (demo/pcd_demo.py, using the suggested command flags from the README demo section) yields an output that appears rotated relative to the original PCD and contains only half the points (those in front of/behind the car). Is there a way to get the bounding boxes on the original data, for the full point cloud?
Many thanks.
I think the PCD won't be rotated during inference in pcd_demo.py. If you want to keep the full point cloud, you can adjust point_cloud_range in the config.
Thanks @ZCMax. I resolved the PCD rotation issue by skipping the LiDAR --> Depth coordinate mode change in show_result_meshlab (I use Open3D). The range was successfully adjusted using point_cloud_range. After this, however, the bounding boxes appear in free space (no overlap with any points). Are there transformations applied to the bounding boxes before show_result_meshlab? I use result and data obtained directly from inference_detector. Thank you.
For example, running the demo from getting_started.md but changing point_cloud_range in configs/second/hv_second_secfpn_6x8_80e_kitti-3d-3class.py to [-20, -40, -3, 70.4, 40, 1] is sufficient to see a shift in the predicted boxes. It seems this adjustment is not reflected in the predictions. Is there a way to account for this in the config or elsewhere in the code, or does post-prediction processing need to be done? Thanks.
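A likely reason a changed point_cloud_range shifts predictions is that other config fields must stay consistent with it. The sketch below is hypothetical (the exact field layout depends on the mmdet3d version and config), but it illustrates the kind of consistency check one could run: the anchor generator ranges in the SECOND config should track the x/y extents of the cropped cloud.

```python
# Hypothetical sketch of the relevant fields in a SECOND-style
# config. The nesting and the example anchor range below are
# assumptions for illustration, not copied from the real file.
point_cloud_range = [-20, -40, -3, 70.4, 40, 1]

model = dict(
    bbox_head=dict(
        anchor_generator=dict(
            # z-centers differ per class; the x/y extents should
            # track point_cloud_range, otherwise predicted boxes
            # shift as observed in this thread.
            ranges=[
                [-20, -40, -0.6, 70.4, 40, -0.6],
            ],
        ),
    ),
)

# Sanity check: anchor x/y extents match the cropped cloud range.
for r in model["bbox_head"]["anchor_generator"]["ranges"]:
    assert r[0] == point_cloud_range[0] and r[3] == point_cloud_range[3]
    assert r[1] == point_cloud_range[1] and r[4] == point_cloud_range[4]
```

The voxel grid size would also need to divide the new range evenly; this check only covers the anchor extents.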
I encountered an issue that I think may be related to this.
I ran the demo script on the included example KITTI cloud, as described on the GETTING STARTED page.
See the reference cam image:

Now, without any changes, I get the following results:

Note how the cars are captured very well, but the orientation is consistently off.
When I swap the x-dimension and y-dimension (i.e. indices 3 and 4) of the prediction bounding boxes (pred_bboxes) before the depth-cam conversion, I get the expected results:

my execution command:
python demo/pcd_demo.py demo/data/kitti/kitti_000008.bin configs/second/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class.py checkpoints/hv_second_secfpn_fp16_6x8_80e_kitti-3d-3class_20200925_110059-05f67bdf.pth --out-dir .. --show
It does not seem to be a visualization issue, debugging shows that the bounding boxes are already misspecified before visualizing them.
Is there something that I misunderstand or misuse? Or is this a bug?
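The workaround described above (swapping indices 3 and 4 of pred_bboxes before the depth/cam conversion) can be sketched as follows. This is a minimal illustration, assuming pred_bboxes is an (N, 7) array of (x, y, z, dx, dy, dz, yaw); the helper name is mine, not part of mmdet3d.

```python
import numpy as np

def swap_box_extents(pred_bboxes: np.ndarray) -> np.ndarray:
    """Swap the x-size and y-size columns (indices 3 and 4) of
    predicted boxes, the workaround reported in this thread."""
    fixed = pred_bboxes.copy()
    fixed[:, [3, 4]] = fixed[:, [4, 3]]
    return fixed

# One box with dx=3.9, dy=1.6; after the swap, dx=1.6, dy=3.9.
boxes = np.array([[1.0, 2.0, 0.0, 3.9, 1.6, 1.5, 0.3]])
print(swap_box_extents(boxes)[0, 3:5])
```

Note that swapping the extents without also adjusting the yaw only works because a dx/dy swap is equivalent to a 90-degree yaw offset for axis-aligned visualization purposes.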
Having the same issue. Is this because we run the demo on the newest master while the model was trained on an older version?

@fabianwindbacher Where do you swap x-dimension and y-dimension?
I did it here. https://github.com/open-mmlab/mmdetection3d/blob/9c7270d00dbdd0599b6b6bf816c3ff2dd17d4878/mmdet3d/apis/inference.py#L350
Do you use the latest master or v1.0.0rc0? We have not finished the model update after refactoring the coordinate systems, and you can train models by yourself to test it again. We are preparing all the updated models and would update them ASAP. Sorry for the inconvenience caused.
We use latest master, thanks for updating
Hey, have you solved the issue? I'm facing the same issue as well.
Some pretrained models have been updated. Please check them in #1369 and try to reproduce the demo with the updated models. Looking forward to your feedback.
Have you solved the issue?
I found there may be a bug in the conversion of pred_bbox from the LiDAR coordinate system to the Depth coordinate system.
Just modify this line
yaw = yaw + np.pi / 2
to
yaw = -yaw + np.pi / 2
It will fix this bug.
This also appears to affect the groupfree3d model.
@Zhangyongtao123 Hi, I ran into the same situation. Would you mind explaining the reason for changing the yaw? The predictions are supposed to share the same coordinates as the input points, i.e. LiDAR mode. So why do we have to perform the extra mode transformation?
@Tai-Wang Hi, are we planning to address this yaw bug?
As of Sep 17, I am using https://download.openmmlab.com/mmdetection3d/v0.1.0_models/second/hv_second_secfpn_6x8_80e_kitti-3d-car/hv_second_secfpn_6x8_80e_kitti-3d-car_20200620_230238-393f000c.pth, and for me yaw = -yaw + np.pi / 2 does not work

I need to update this line to yaw = -yaw
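Two different fixes are reported in this thread, which suggests the correct conversion depends on which checkpoint and codebase version are paired. A hedged side-by-side sketch (function names are mine, and the version attributions are only what commenters reported):

```python
import numpy as np

def yaw_fix_refactored(yaw):
    """Fix reported for post-refactor checkpoints (v1.0.0rc-era)."""
    return -yaw + np.pi / 2

def yaw_fix_v010(yaw):
    """Fix reported for the v0.1.0 SECOND checkpoint above."""
    return -yaw

# The two fixes differ by a constant pi/2 offset, consistent with
# the coordinate-system refactor having rotated the yaw reference.
print(yaw_fix_refactored(0.3) - yaw_fix_v010(0.3))
```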

Sorry for the late reply. @ZCMax, please have a check and fix the bug if necessary.
Bug is still here.
Bumping this as I'm seeing the same behavior