
TypeError: get_cam_feats() takes 3 positional arguments but 4 were given


Hello,

I hope this message finds you well. First and foremost, I want to express my appreciation for your work and contributions to this project. When I tried to visualize the results of my training with the following command, I ran into the error shown below. Could you please help me with it?

torchpack dist-run -np 1 python tools/visualize.py train_result/configs.yaml --mode pred --checkpoint train_result/latest.pth --bbox-score 0.2 --out-dir vis_result

/home/qqq/miniconda3/envs/bevfusion_mit/lib/python3.8/site-packages/torch/functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:2157.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
2023-10-18 12:23:02,407 - mmdet - INFO - load checkpoint from local path: pretrained/swint-nuimages-pretrained.pth
load checkpoint from local path: train_result/latest.pth
  0%|          | 0/81 [00:01<?, ?it/s]
Traceback (most recent call last):
  File "tools/visualize.py", line 167, in <module>
    main()
  File "tools/visualize.py", line 89, in main
    outputs = model(**data)
  File "/home/qqq/miniconda3/envs/bevfusion_mit/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/qqq/miniconda3/envs/bevfusion_mit/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 886, in forward
    output = self.module(*inputs[0], **kwargs[0])
  File "/home/qqq/miniconda3/envs/bevfusion_mit/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/qqq/miniconda3/envs/bevfusion_mit/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 98, in new_func
    return old_func(*args, **kwargs)
  File "/home/qqq/wh/bevfusion/mmdet3d/models/fusion_models/bevfusion.py", line 253, in forward
    outputs = self.forward_single(
  File "/home/qqq/miniconda3/envs/bevfusion_mit/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 98, in new_func
    return old_func(*args, **kwargs)
  File "/home/qqq/wh/bevfusion/mmdet3d/models/fusion_models/bevfusion.py", line 301, in forward_single
    feature = self.extract_camera_features(
  File "/home/qqq/wh/bevfusion/mmdet3d/models/fusion_models/bevfusion.py", line 133, in extract_camera_features
    x = self.encoders["camera"]["vtransform"](
  File "/home/qqq/miniconda3/envs/bevfusion_mit/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/qqq/wh/bevfusion/mmdet3d/models/vtransforms/depth_lss.py", line 100, in forward
    x = super().forward(*args, **kwargs)
  File "/home/qqq/miniconda3/envs/bevfusion_mit/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 186, in new_func
    return old_func(*args, **kwargs)
  File "/home/qqq/wh/bevfusion/mmdet3d/models/vtransforms/base.py", line 350, in forward
    x = self.get_cam_feats(img, depth, mats_dict)
  File "/home/qqq/miniconda3/envs/bevfusion_mit/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 186, in new_func
    return old_func(*args, **kwargs)
TypeError: get_cam_feats() takes 3 positional arguments but 4 were given
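For context, this TypeError means the get_cam_feats method that ends up being called accepts only (self, x, d), while the caller in base.py passes (self, img, depth, mats_dict). A minimal standalone reproduction of that kind of mismatch (hypothetical class names, not the project's actual code):

```python
class BaseTransform:
    def forward(self, img, depth, mats_dict):
        # The caller passes four positionals (including self)...
        return self.get_cam_feats(img, depth, mats_dict)

class DepthTransform(BaseTransform):
    # ...but the override only accepts three (including self).
    def get_cam_feats(self, x, d):
        return (x, d)

try:
    DepthTransform().forward("img", "depth", {"intrin_mats": None})
except TypeError as e:
    print(e)  # "... takes 3 positional arguments but 4 were given"
```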

WangHeng1021 avatar Oct 18 '23 05:10 WangHeng1021

the same problem

heli223 avatar Oct 25 '23 03:10 heli223

the same problem

iyaqiao avatar Nov 01 '23 01:11 iyaqiao

Hi, I previously encountered the same issue. The following was my solution:

Check whether you have accidentally removed or commented out the following lines in configs/default.yaml. They must be present for the model to be wrapped in fp16.

fp16:
  loss_scale: 
    growth_interval: 2000

Alternatively, in the Python script you are executing, print out cfg['fp16']. It should not be None. If it is None, add the lines above to configs/default.yaml.
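A minimal sketch of that check, using a plain dict as a stand-in for the loaded config (mmcv's Config object can be queried with .get() the same way):

```python
# Stand-in for the config merged from configs/default.yaml;
# the real cfg in the script is an mmcv Config, queried the same way.
cfg = {
    "fp16": {"loss_scale": {"growth_interval": 2000}},
}

fp16_cfg = cfg.get("fp16", None)
print(fp16_cfg)
# If this prints None, the fp16 block is missing from the merged config,
# so the model will not be wrapped for fp16.
```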

Hope it helps!

W6WM9M avatar Nov 17 '23 06:11 W6WM9M


I still encountered this problem with a completely correct configs/default.yaml, and I think configs/default.yaml may not actually be used in the visualization process at all.

Zhanfury avatar Dec 06 '23 07:12 Zhanfury

Have you found a solution to this problem? I am having the same problem.


J4nekT avatar Dec 13 '23 02:12 J4nekT

Have you found a solution to this problem? I still encounter it even with a completely correct configs/default.yaml.

NuanBaobao avatar Dec 28 '23 02:12 NuanBaobao

Have you found a solution to this problem? I still have the same problem.

Surtr07 avatar Jan 03 '24 05:01 Surtr07


Have you found a solution to this problem? I still have the same problem.

971022jing avatar Jan 03 '24 17:01 971022jing

Have you found a solution to this problem? I still have the same problem.

liluxing153 avatar Jan 08 '24 13:01 liluxing153

In tools/visualize.py, change:

if args.mode == "pred":
    model = build_model(cfg.model)
    load_checkpoint(model, args.checkpoint, map_location="cpu")

to:

if args.mode == "pred":
    model = build_model(cfg.model)
    fp16_cfg = cfg.get("fp16", None)
    if fp16_cfg is not None:
        wrap_fp16_model(model)
    load_checkpoint(model, args.checkpoint, map_location="cpu")

imliupu avatar Jan 11 '24 07:01 imliupu

I solved the problem by changing the code in mmdet3d/models/vtransforms/base.py: delete mats_dict in two places. At line 349, change x = self.get_cam_feats(img, depth, mats_dict) to x = self.get_cam_feats(img, depth), and do the same around line 222. That made it work, but I don't know the side effects of this change.
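If the side effects of deleting the argument at the call site are a concern, an alternative is to keep the caller intact and let the override accept (and ignore) the extra argument. A sketch of that pattern with hypothetical class names, not the project's actual code:

```python
class BaseTransform:
    def forward(self, img, depth, mats_dict):
        # Call site unchanged: still passes mats_dict.
        return self.get_cam_feats(img, depth, mats_dict)

class DepthTransform(BaseTransform):
    # A default value for the extra parameter keeps both call styles
    # working; the override simply ignores it when it has no use for it.
    def get_cam_feats(self, x, d, mats_dict=None):
        return (x, d)

print(DepthTransform().forward(1, 2, {}))  # (1, 2)
```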

flyflyfly0120 avatar Mar 05 '24 10:03 flyflyfly0120

I solved it the same way: in mmdet3d/models/vtransforms/base.py, line 350, change x = self.get_cam_feats(img, depth, mats_dict) to x = self.get_cam_feats(img, depth), then rerun your Python script.

lin0711 avatar Mar 31 '24 14:03 lin0711

wrap_fp16_model

where is wrap_fp16_model()?

hitbuyi avatar Apr 18 '24 17:04 hitbuyi

wrap_fp16_model

where is wrap_fp16_model()?

Add this at the top of the file to import the related library:

from mmcv.runner import wrap_fp16_model

freejumperd avatar Jun 08 '24 20:06 freejumperd