bevfusion
Tensor size mismatch
Is this an issue anyone here is familiar with? I'm just using the nuScenes mini dataset.
```
File "mmdet3d/models/utils/transformer.py", line 69, in with_pos_embed
    return tensor if pos_embed is None else tensor + pos_embed
RuntimeError: The size of tensor a (16384) must match the size of tensor b (65536) at non-singleton dimension 0
```
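As a side note (my own interpretation, not stated in the traceback): both mismatched sizes are perfect squares, which suggests the flattened BEV feature tokens and the positional embedding were built from square grids of different resolutions:

```python
import math

# Sizes reported in the RuntimeError
feature_tokens = 16384    # flattened BEV feature map (tensor a)
pos_embed_tokens = 65536  # flattened positional embedding (tensor b)

# Both factor as H * W of a square BEV grid
feat_side = math.isqrt(feature_tokens)   # 128, since 128 * 128 == 16384
pos_side = math.isqrt(pos_embed_tokens)  # 256, since 256 * 256 == 65536

# The two grids disagree by a factor of 2, so tensor + pos_embed cannot broadcast
print(feat_side, pos_side)  # → 128 256
```

This is why the fix below is about aligning grid sizes rather than anything in the transformer code itself.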
Could you give more details about your error?
Hi @alexzhuuuu @AlexIlis,
I'm facing this problem too. I'm running a LiDAR-only model with a PointPillars backbone. Here is the command:
```
torchpack dist-run -np 1 python tools/train.py configs/nuscenes/det/transfusion/secfpn/lidar/pointpillars.yaml
```
Then I got this error:

```
File "/home/wyf/bevfusion-full/mmdet3d/models/utils/transformer.py", line 69, in with_pos_embed
    return tensor if pos_embed is None else tensor + pos_embed
RuntimeError: The size of tensor a (16384) must match the size of tensor b (65536) at non-singleton dimension 0
```
I haven't modified the code at all. Any help would be appreciated.
Hello, I'm not sure whether you have solved this yet. I faced the same issue, and my solution was to add the following to the pointpillars.yaml file:
```yaml
heads:
  object:
    in_channels: 384
    train_cfg:
      grid_size: [512, 512, 1]
      out_size_factor: 4
    test_cfg:
      grid_size: [512, 512, 1]  # add this line; the BEV grid size needs to align with the lidar feature map
      out_size_factor: 4
    bbox_coder:
      out_size_factor: 4
```
Hope it works!
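A quick sanity check of why these values resolve the mismatch (my own sketch, not code from the repo): the BEV feature-map side is the grid size divided by the output stride, and the positional embedding is built for that same grid, so with the values above the flattened length matches the 16384 reported for tensor a:

```python
# Values from the pointpillars.yaml fix above
grid_size = 512
out_size_factor = 4

# Side length of the BEV feature map after downsampling
bev_side = grid_size // out_size_factor  # 512 / 4 = 128

# Flattened token count: must equal the feature tensor's first dimension
tokens = bev_side * bev_side

print(bev_side, tokens)  # → 128 16384, matching tensor a in the traceback
```

If train_cfg, test_cfg, and bbox_coder use inconsistent grid_size/out_size_factor pairs, the positional embedding is created for a different grid than the features, which is exactly the 16384-vs-65536 error above.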
Thank you for your interest in our project. This repository is no longer actively maintained, so we will be closing this issue. Please refer to the amazing implementation at MMDetection3D. Thank you again!