
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 12, 1])

Open HuangLLL123 opened this issue 1 year ago • 7 comments

The error is reported on different samples each time.

Has anyone encountered the same problem before?

[WeChat screenshots 1 and 2 of the error]

HuangLLL123 avatar Jan 14 '24 14:01 HuangLLL123
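For context, this ValueError comes from PyTorch's batch normalization: in training mode, BatchNorm1d needs more than one value per channel to compute batch statistics, and an input of torch.Size([1, 12, 1]) provides exactly one (batch size 1, spatial length 1). A minimal sketch reproducing the message, independent of the SFD codebase:

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(12)
x = torch.randn(1, 12, 1)  # one sample, one value per channel

bn.train()
try:
    bn(x)  # raises ValueError: Expected more than 1 value per channel when training
except ValueError as e:
    print(e)

# In eval mode the layer uses running statistics instead of batch
# statistics, so a single value per channel is accepted.
bn.eval()
out = bn(x)
```

Anything that leaves a lone sample in front of a BatchNorm layer (batch size 1, or a mismatched pseudo point cloud that collapses a pooled dimension to 1) can trigger this.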

Hello, have you solved the problem? And if so, how?

vacant-ztz avatar Apr 18 '24 10:04 vacant-ztz

> Hello, have you solved the problem? And if so, how?

No, how about you now?

HuangLLL123 avatar May 14 '24 09:05 HuangLLL123

I have solved this problem. After checking, I found that because I am using a self-built dataset, the image size is not the same as KITTI's (mine is 1920*1080, KITTI's is 1270*400), and I did not modify the input and output sizes when generating the depth map, so the generated pseudo point cloud did not correspond to my images. The solution is to modify the depth-map generation code to output 1920*1080, and to modify the w, h parameters in sfd_head.py at about line 505 (set them slightly larger than the input image size).

vacant-ztz avatar May 14 '24 10:05 vacant-ztz
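A quick way to catch this mismatch early is to verify that the projected pseudo-cloud pixel coordinates actually fit inside the image before choosing w and h in sfd_head.py. This is only a sketch with hypothetical names (check_grid_size, uv); the projection itself would come from your own calibration code:

```python
import numpy as np

def check_grid_size(uv, img_w=1920, img_h=1080, margin=8):
    """Check that projected (u, v) pixel coordinates fit the image and
    return a (w, h) slightly larger than the maximum coordinates."""
    u_max, v_max = float(uv[:, 0].max()), float(uv[:, 1].max())
    if u_max >= img_w + margin or v_max >= img_h + margin:
        raise ValueError("pseudo point cloud does not correspond to this image size")
    return int(u_max) + 1, int(v_max) + 1
```

If this raises, the depth map was generated at the wrong resolution, which is exactly the situation described above.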

> I have solved this problem. After checking, I found that because I am using a self-built dataset, the image size is not the same as KITTI's (mine is 1920*1080, KITTI's is 1270*400), and I did not modify the input and output sizes when generating the depth map, so the generated pseudo point cloud did not correspond to my images. The solution is to modify the depth-map generation code to output 1920*1080, and to modify the w, h parameters in sfd_head.py at about line 505 (set them slightly larger than the input image size).

I am using a self-built dataset too. I changed the values of w and h (1280*960) according to your suggestion, but the problem still exists. I also found that increasing the batch size alleviates the problem, but it still occurs. Could you please tell me how you modified the code to generate the depth map at 1920*1080? Or do you have any other suggestions? @vacant-ztz

HuangLLL123 avatar May 14 '24 15:05 HuangLLL123
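Since a larger batch size alleviates the error, the trigger is likely a leftover batch of size 1 reaching a BatchNorm layer at the end of an epoch. A common workaround (a sketch assuming a standard PyTorch DataLoader, not SFD's own loader) is to drop the incomplete last batch:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# 9 samples with batch_size=4 would leave a final batch of size 1;
# drop_last=True discards that remainder so BatchNorm never sees it.
ds = TensorDataset(torch.randn(9, 12, 1))
loader = DataLoader(ds, batch_size=4, drop_last=True)
print(sum(1 for _ in loader))  # 2 full batches; the lone 9th sample is dropped
```

This hides the symptom rather than fixing a genuine size mismatch, so it is worth checking the depth-map resolution first.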

If you want to modify the output size of the depth map, first modify the values of oheight, owidth, and cwidth in SFD-TWISE-main/dataloaders/kitti_loader.py, making sure they are divisible by 16; then modify the size of the pred_dep tensor in evaluate.py to match the size of your output image. @HuangLLL123

vacant-ztz avatar May 22 '24 13:05 vacant-ztz
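The divisible-by-16 requirement can be handled with a small helper when choosing oheight/owidth; a sketch (the helper name is my own, not from the SFD code):

```python
def round_up_to_multiple(x, base=16):
    """Round x up to the nearest multiple of base, e.g. for oheight/owidth."""
    return -(-x // base) * base

# For a 1920x1080 image:
print(round_up_to_multiple(1920))  # 1920 (already divisible by 16)
print(round_up_to_multiple(1080))  # 1088
```

Remember that if you round a dimension up, the pred_dep tensor size in evaluate.py must use the same rounded values.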

Thank you very much. I have solved my problem with your method, but I have encountered a new problem when using my self-built dataset:

    File "/home/tianran/workdir/SFD/pcdet/models/roi_heads/target_assigner/proposal_target_layer.py", line 162, in subsample_rois
      raise NotImplementedError
    NotImplementedError
    maxoverlaps:(min=nan, max=nan) ERROR: FG=0, BG=0

I have tried many of the methods mentioned in other issues, such as normalizing point cloud features and reducing the learning rate, but the problem is not completely solved. Have you ever encountered this problem with a self-built dataset? Could you please tell me your solution? @vacant-ztz

HuangLLL123 avatar May 25 '24 08:05 HuangLLL123
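On the maxoverlaps:(min=nan, max=nan) error: NaN overlaps usually mean that some ground-truth boxes or ROIs contain NaN/Inf values or zero-sized dimensions, so every IoU becomes NaN and subsample_rois finds neither foreground nor background. A sanity check you could run over a KITTI-style (N, 7) box array before training; this is a sketch, and validate_boxes is a hypothetical name, not part of OpenPCDet:

```python
import numpy as np

def validate_boxes(boxes):
    """Keep only boxes [x, y, z, dx, dy, dz, heading] that are finite
    and have strictly positive sizes; degenerate boxes yield NaN IoU."""
    boxes = np.asarray(boxes, dtype=np.float64)
    finite = np.isfinite(boxes).all(axis=1)
    positive = (boxes[:, 3:6] > 0).all(axis=1)
    return boxes[finite & positive]
```

If this filters anything out of your dataset, the label generation (or the coordinate transform for the self-built data) is the place to look.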