Yifan Lu
Yes, let the encoders of the two modalities learn the features end-to-end on their own
https://ucla.app.box.com/v/UCLA-MobilityLab-OPV2V/folder/279976559690 Hi, the `bev_visibility.png` files are provided by the OPV2V authors in `additional-001.zip`. You need to download that archive and extract it alongside OPV2V
Hi, this is the setting used in the HEAL paper and this repo. But you can also modify the code so that a single agent uses both LiDAR and camera
glad to hear :)
Awesome! Thanks for your rapid response and valuable advice! I will try it!
The training burden is not very heavy. Training McNeRF / 3D Gaussian Splatting for one scene requires a single GPU with less than 16 GB of GPU memory, and it can usually...
This is possible. 3DGS has different reconstruction performance in different scenes, and some scenes can be particularly difficult. You can adjust some hyperparameters to optimize the results, but note that...
Indeed, we cannot rule out that the poses COLMAP estimated for this scene are worse than the original ones. You can try setting the poses in the config directly to `cams_meta_waymo.npy`, which contains the poses extracted from the original Waymo dataset

```
data
`-- waymo_multi_view
    |-- ...
    `-- segment-1172406780360799916_1660_000_1680_000_with_camera_labels
        |-- 3d_boxes.npy  # 3d bounding boxes of the first frame
        |-- images        # a clip of waymo images used in chatsim...
```
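A quick way to sanity-check the pose file before switching the config is to load it with NumPy and inspect its shape. The layout below (5 frames, 27 floats per row) is a fabricated stand-in, not the real file's schema, so treat the numbers as placeholders:

```python
import numpy as np

# Hypothetical stand-in for cams_meta_waymo.npy: the real row layout
# depends on the ChatSim export, so we fabricate a small array here
# purely to show the load-and-inspect pattern.
np.save("cams_meta_waymo.npy", np.random.rand(5, 27).astype(np.float32))

cams_meta = np.load("cams_meta_waymo.npy")
print(cams_meta.shape, cams_meta.dtype)  # one row of camera metadata per frame
```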
Hi, HEAL does not implement the full Where2comm; there is no confidence-based feature selection, only the transformer-encoder-based feature fusion code snippet is included. The fusion code can...
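For intuition, transformer-encoder fusion of collaborators' features can be sketched as attention across agents at each BEV cell. This is a minimal illustrative sketch, not HEAL's actual module; the shapes, layer sizes, and the choice to keep the ego slot are all assumptions:

```python
import torch
import torch.nn as nn

N, C, H, W = 3, 64, 32, 32          # agents, channels, BEV grid (toy sizes)
feats = torch.randn(N, C, H, W)     # per-agent BEV features, already ego-aligned

# One encoder layer; each BEV cell's tokens attend across the N agents.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=C, nhead=4, batch_first=True),
    num_layers=1,
)

# Treat agents as the sequence dimension: (H*W cells, N agents, C channels).
tokens = feats.permute(2, 3, 0, 1).reshape(H * W, N, C)
fused_tokens = encoder(tokens)                                 # cross-agent attention
fused = fused_tokens[:, 0].reshape(H, W, C).permute(2, 0, 1)   # keep the ego slot
```

The fused map has the same `(C, H, W)` shape as a single agent's feature, so it can drop into the downstream detection head unchanged.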
Sorry for the late reply. You need to dig a little deeper into the `affine_grid` and `grid_sample` functions, which treat the BEV feature map as an image and apply a warping transformation....
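To see the two functions in isolation: `affine_grid` turns a 2x3 affine matrix (in PyTorch's normalized [-1, 1] coordinates) into a sampling grid, and `grid_sample` bilinearly resamples the source map at those locations. The rotation angle and translation below are arbitrary toy values, not anything from the repo:

```python
import math
import torch
import torch.nn.functional as F

# Toy BEV feature map: batch 1, 64 channels, 100x100 grid.
bev = torch.randn(1, 64, 100, 100)

# 2x3 affine matrix: a small rotation plus a translation, expressed in
# normalized [-1, 1] coordinates (illustrative values).
c, s = math.cos(0.1), math.sin(0.1)
affine = torch.tensor([[[c, -s, 0.2],
                        [s,  c, 0.0]]])

grid = F.affine_grid(affine, bev.size(), align_corners=False)  # (N, H, W, 2)
warped = F.grid_sample(bev, grid, align_corners=False)         # resampled BEV map
```

In the collaborative setting, the affine matrix is derived from the relative pose between a collaborator and the ego agent, so each incoming BEV map gets warped into the ego frame before fusion.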