
TypeError: TaskPrompter.__init__() got an unexpected keyword argument 'default_cfg'

Open · op1009 opened this issue · 2 comments

Steps done:

  1. Clone repo
  2. Download .pth.tar files
  3. Run the commands below
CUDA_VISIBLE_DEVICES=0
!python3 inference.py --config_path=configs/pascal/pascal_vitLp16_taskprompter.yml --image_path=/content/Screenshot7.png --ckp_path=/content/Multi-Task-Transformer/TaskPrompter/InvPT_pascal_vitLp16.pth.tar --save_dir=output

Error

Traceback (most recent call last):
  File "/content/Multi-Task-Transformer/TaskPrompter/inference.py", line 185, in <module>
    infer_one_image(args.image_path)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/content/Multi-Task-Transformer/TaskPrompter/inference.py", line 141, in infer_one_image
    model = initialize_model(p, checkpoint_path)
  File "/content/Multi-Task-Transformer/TaskPrompter/inference.py", line 60, in initialize_model
    model = get_model(p)
  File "/content/Multi-Task-Transformer/TaskPrompter/utils/common_config.py", line 79, in get_model
    backbone, backbone_channels = get_backbone(p)
  File "/content/Multi-Task-Transformer/TaskPrompter/utils/common_config.py", line 22, in get_backbone
    backbone = taskprompter_vit_large_patch16_384(p=p, pretrained=True, drop_path_rate=0.15, img_size=p.TRAIN.SCALE)
  File "/content/Multi-Task-Transformer/TaskPrompter/models/transformers/taskprompter.py", line 676, in taskprompter_vit_large_patch16_384
    model = _create_task_prompter('vit_large_patch16_384', pretrained=pretrained, **model_kwargs)
  File "/content/Multi-Task-Transformer/TaskPrompter/models/transformers/taskprompter.py", line 661, in _create_task_prompter
    model = build_model_with_cfg(
  File "/usr/local/lib/python3.10/dist-packages/timm/models/_builder.py", line 385, in build_model_with_cfg
    model = model_cls(**kwargs)
TypeError: TaskPrompter.__init__() got an unexpected keyword argument 'default_cfg'
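
Note (a hedged reading of the traceback, not something confirmed by the repo authors): recent timm releases renamed the default_cfg argument of build_model_with_cfg to pretrained_cfg, so the old keyword falls through **kwargs into TaskPrompter.__init__ and triggers this TypeError. A minimal sketch to check what the installed timm expects, assuming timm is importable in the same environment:

import inspect

import timm
# Exported at this path in recent timm; older versions may expose it under timm.models.helpers instead.
from timm.models import build_model_with_cfg

print("timm version:", timm.__version__)

# Inspect the signature to see which keyword name this timm version accepts.
params = inspect.signature(build_model_with_cfg).parameters
print("accepts default_cfg:", "default_cfg" in params)        # older timm releases
print("accepts pretrained_cfg:", "pretrained_cfg" in params)  # newer timm releases

If only pretrained_cfg is accepted, either pin timm to an older release that still knows default_cfg, or apply the keyword rename discussed later in this thread.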

Trying another solution, from closed issue #10:

CUDA_VISIBLE_DEVICES=0 
!python inference.py --image_path=/content/Screenshot7.png --ckp_path=/content/Multi-Task-Transformer/TaskPrompter/InvPT_pascal_vitLp16.pth.tar --save_dir=SAVE_DIR

Error

Traceback (most recent call last):
  File "/content/Multi-Task-Transformer/TaskPrompter/inference.py", line 185, in <module>
    infer_one_image(args.image_path)
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/content/Multi-Task-Transformer/TaskPrompter/inference.py", line 121, in infer_one_image
    p = create_config(args.config_path, {'run_mode': 'infer'})
  File "/content/Multi-Task-Transformer/TaskPrompter/utils/config.py", line 94, in create_config
    with open(exp_file, 'r') as stream:
FileNotFoundError: [Errno 2] No such file or directory: './configs/pascal/pascal_vitLp16.yml'
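
Note (inferred from the traceback, not confirmed): without --config_path, inference.py appears to fall back to a default of ./configs/pascal/pascal_vitLp16.yml, which is not present in the TaskPrompter folder. Keeping the --config_path flag from the first command should get past this particular error:

!python inference.py --config_path=configs/pascal/pascal_vitLp16_taskprompter.yml --image_path=/content/Screenshot7.png --ckp_path=/content/Multi-Task-Transformer/TaskPrompter/InvPT_pascal_vitLp16.pth.tar --save_dir=SAVE_DIR

(This only addresses the missing config file; the default_cfg TypeError above is a separate issue.)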

Platform: Google Colab with a T4 runtime

op1009 · Nov 19 '23

Hi, I'm not the author, but I encountered a similar error:

File "/workspace/container_test_folder/Multi-Task-Transformer/InvPT/models/transformers/vit.py", line 546, in _create_vision_transformer
    model = build_model_with_cfg(
 File "/opt/conda/lib/python3.10/site-packages/timm/models/helpers.py", line 537, in build_model_with_cfg
    model = model_cls(**kwargs) if model_cfg is None else model_cls(cfg=model_cfg, **kwargs)
TypeError: VisionTransformer.__init__() got an unexpected keyword argument 'default_cfg'

The error can be resolved by renaming the default_cfg keyword argument to pretrained_cfg in the build_model_with_cfg call at line 548 of "InvPT/models/transformers/vit.py". I hope this solution helps you :)

Before:

model = build_model_with_cfg(
        VisionTransformer, variant, pretrained,
        default_cfg=default_cfg,
        representation_size=repr_size,
        pretrained_filter_fn=checkpoint_filter_fn,
        pretrained_custom_load='npz' in default_cfg['url'],
        **kwargs)

After:

model = build_model_with_cfg(
        VisionTransformer, variant, pretrained,
        pretrained_cfg=default_cfg,
        representation_size=repr_size,
        pretrained_filter_fn=checkpoint_filter_fn,
        pretrained_custom_load='npz' in default_cfg['url'],
        **kwargs)
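
The same rename presumably applies to the TaskPrompter error in the original post: its traceback points at the build_model_with_cfg call in models/transformers/taskprompter.py (line 661), so changing default_cfg= to pretrained_cfg= there should have the same effect. For a patch that tolerates both old and new timm releases, a rough sketch (not part of the repo) is to pick the keyword name at runtime:

import inspect

# Older timm may expose this under timm.models.helpers instead.
from timm.models import build_model_with_cfg

def cfg_kwarg(default_cfg):
    # Newer timm calls this argument `pretrained_cfg`; older releases call it `default_cfg`.
    params = inspect.signature(build_model_with_cfg).parameters
    key = 'pretrained_cfg' if 'pretrained_cfg' in params else 'default_cfg'
    return {key: default_cfg}

The call site would then pass **cfg_kwarg(default_cfg) instead of default_cfg=default_cfg (or pretrained_cfg=default_cfg). Other arguments such as pretrained_custom_load may also have changed or been removed in newer timm releases, so pinning timm to the version the repo was developed against may still be the simpler fix.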

KevinChen880723 · Dec 01 '23

@KevinChen880723 Thanks for your reply.

Can you help me with another related issue? I am trying to detect 3D bounding boxes over objects. How can I do that? After getting the 3D bounding boxes, I would like to estimate the monocular depth of the detected objects.

op1009 · Dec 02 '23