Yan bo
> I find the `3d_box_dim` is 9. I know the meaning of the first 7 values. I wonder what the last 2 values represent. Hello, have you trained this model?...
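As a hedged sketch of what a 9-dim box usually encodes: in the nuScenes convention the first 7 values are the box center, size, and yaw, and the last 2 are the ground-plane velocity (vx, vy). The exact ordering of the first seven fields is framework-dependent; the field names below are assumptions, not taken from the MSMDFusion code.

```python
import numpy as np

def split_box(box9):
    """Split a 9-dim box vector, assuming nuScenes-style layout:
    (center x, y, z) + (box dimensions) + yaw + (vx, vy)."""
    box9 = np.asarray(box9, dtype=float)
    center = box9[:3]      # x, y, z
    size = box9[3:6]       # box dimensions (order is framework-dependent)
    yaw = box9[6]          # heading angle in radians
    velocity = box9[7:9]   # the last 2 values: velocity in the ground plane
    return center, size, yaw, velocity
```

Usage: `split_box([1, 2, 3, 4, 5, 6, 0.5, 7, 8])` returns the center `[1, 2, 3]`, size `[4, 5, 6]`, yaw `0.5`, and velocity `[7, 8]`.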
> In README.md, virtual point samples are mentioned, and the virtual point samples are saved in Baidu cloud storage.
>
> ```
> **Step 2**: Download preprocessed [virtual points samples](https://pan.baidu.com/s/1IxqcGxNCFnmSZw7Dlu3Xig?pwd=9xcb)(extraction...
> ```
> Hi @SxJyJay, great work! I'm wondering whether you have had time to clean the related code, and whether you plan to release it. If you don't have time, could...
> > For `nms_type`, we follow the configuration of TransFusion. It seems that you do not employ an NMS operation due to the object query design. > > YES. I figure...
> Hi! > > Have you implemented the visualization part? How can I implement this part? > > Looking forward to your response! Hello, have you trained this model? I'm using...
> > I haven't tried that. But theoretically, virtual points can be generated with paired LiDAR and camera images. > > Thanks a lot! Hello, have you trained this model?...
> Hi, when I use the following command for evaluation:
>
> ```
> # Evaluation
> sh ./tools/dist_test.sh ./configs/MSMDFusion_nusc_voxel_LC.py ckpt_path 2 --eval...
> ```
> > 1. The normalizer is used to control the range of point cloud coordinate values.
> > 2. You can refer to MVP for the specific meaning of these...
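The first point can be sketched minimally: a "normalizer" here is a scalar that rescales point-cloud coordinates into a bounded range. The function and parameter names below are hypothetical illustrations, not MSMDFusion's or MVP's actual code.

```python
import numpy as np

def normalize_points(points: np.ndarray, normalizer: float) -> np.ndarray:
    """Divide the xyz columns by `normalizer` so coordinates fall in a
    controlled range, leaving any extra feature columns untouched."""
    out = points.astype(float).copy()
    out[:, :3] = out[:, :3] / normalizer
    return out

# One point at (51.2, -25.6, 3.2) with an intensity feature 0.5:
pts = np.array([[51.2, -25.6, 3.2, 0.5]])
print(normalize_points(pts, 51.2))  # xyz scaled into roughly [-1, 1]
```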
> Hello! I want to ask about the differences between TTA and the base version, and why the TTA version improves by such a large margin. Hello, have you trained...
> ```python
> fg_info = img_metas[sample_idx]['foreground2D_info']
> fg_pxl = fg_info['fg_pixels'][view_idx]
> fg_depth = torch.from_numpy(fg_pxl[:,2]).to(device)
> ```
>
> in line 207 in MSMDFusionDetector, and
>
> ```python
> fg_real_pixels = img_metas[i]['foreground2D_info']['fg_real_pixels']
> depth = fg_real_pxl[:,2]
> ```
>
> code above shows that you have...
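Note that the second quoted snippet assigns `fg_real_pixels` but then reads `fg_real_pxl`. A consistent, self-contained version of that lookup might read as below; the `img_metas` layout is taken from the quote, and the assumption that the third column holds per-pixel depth is inferred from `fg_depth` in the first snippet, not verified against the repo.

```python
import numpy as np

def foreground_depths(img_metas, i):
    """Read the per-pixel depths for sample `i`, assuming the third
    column of `fg_real_pixels` is depth (names follow the quoted code)."""
    fg_real_pixels = img_metas[i]['foreground2D_info']['fg_real_pixels']
    depth = fg_real_pixels[:, 2]  # use the same name that was assigned above
    return depth
```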