Huang Junjie
Result from the checkpoint in the paper: NDS: 0.3923, mAP: 0.3121, mATE: 0.6940, mASE: 0.2716, mAOE: 0.5226, mAVE: 0.9024, mAAE: 0.2465
@cherry-yh Yes, you can train BEVDet without the LiDAR data. However, the images are contained in the samples; the images in the sweeps are used in BEVDet4D. The difference...
@CiaranYoung Please git clone https://github.com/HuangJunJie2017/BEVDet.git instead of using another code base if you are not familiar with the code.
> @HuangJunJie2017 Thank you so much for your great work. But I met some problems when reproducing your work. Could you please give me some advice? I use 4...
Reproduced after supporting BEVPool (https://github.com/HuangJunJie2017/BEVDet/commit/2e559ffdaa2cb3cbb5afe4b90f6dbd91f9954973), e18: mAP: 0.3046, mATE: 0.6813, mASE: 0.2778, mAOE: 0.5358, mAVE: 0.8527, mAAE: 0.2302, NDS: 0.3945. Eval time: 148.0s. Per-class results: Object Class AP ATE ASE...
@MandyMo Thanks for the info! The lower result comes from the parameter trans_only being set to True in the eval pipeline. I will fix this soon. You just need to...
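A minimal sketch of the fix being described, assuming `trans_only` is a boolean key in the evaluation config; the surrounding dict structure and key name `eval_cfg` are placeholders, not the actual BEVDet config layout:

```python
# Hypothetical eval-config fragment: the comment above says the lower score
# came from trans_only=True, so the fix would be to set it to False.
eval_cfg = dict(
    metric='bbox',
    trans_only=False,  # True caused the lower reported NDS/mAP
)
```

Check the repository's actual config files for where `trans_only` really lives before applying this.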
@huangzhengxiang Have you reproduced the inference result with the provided checkpoint?
@huangzhengxiang It seems OK. We haven't tested/trained on nuScenes-mini so far. Please use the full nuScenes instead.
@lianqing11 Check whether the loss is close to that in the provided log.
@thom966 You have to update two functions to support KITTI with BEVDet: 1. Update get_data_info in kitti_dataset.py like this:

```python
if self.info_style == 'BEVDet':
    camera2lidar = np.linalg.inv(rect @ Trv2c)
    input_dict['img_info']['CAM_FRONT'] = dict(data_path=img_filename, ...
```
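The inverse-transform step above can be sketched in isolation; `rect` (KITTI rectification) and `Trv2c` (velodyne-to-camera) are 4x4 homogeneous matrices in the KITTI infos, and the values below are dummies for illustration only:

```python
import numpy as np

# Placeholder calibration matrices; real values come from the KITTI info files.
rect = np.eye(4)
Trv2c = np.eye(4)
Trv2c[:3, 3] = [0.27, 0.0, -0.08]  # dummy velodyne-to-camera translation

# lidar -> camera is (rect @ Trv2c), so camera -> lidar is its inverse.
camera2lidar = np.linalg.inv(rect @ Trv2c)

# Sanity check: composing the two transforms recovers the identity.
assert np.allclose(camera2lidar @ (rect @ Trv2c), np.eye(4))
```

This is only the geometry of the one line quoted in the comment; the rest of the `get_data_info` change (the `input_dict` fields) follows the truncated snippet above.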