Co-DETR

PyTorch to ONNX

chenzx2 opened this issue 1 year ago • 50 comments

/home/aigroup/chenzx/ws_internImage/bin/python3.8 /home/aigroup/chenzx/ws_internImage/code/Co-DETR/tools/deployment/pytorch2onnx.py
/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
  warnings.warn(
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/tools/deployment/pytorch2onnx.py:284: UserWarning: Arguments like --mean, --std, --dataset would be parsed directly from config file and are deprecated and will be removed in future releases.
  warnings.warn('Arguments like --mean, --std, --dataset would be
/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/mmcv/onnx/symbolic.py:481: UserWarning: DeprecationWarning: This function will be deprecated in future. Welcome to use the unified model deployment toolbox MMDeploy: https://github.com/open-mmlab/mmdeploy
  warnings.warn(msg)
/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/functional.py:504: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ../aten/src/ATen/native/TensorShape.cpp:3483.)
  return _VF.meshgrid(tensors, **kwargs)  # type: ignore[attr-defined]
2023-08-08 16:30:39,106 - mmcv - INFO - initialize RPNHead with init_cfg {'type': 'Normal', 'layer': 'Conv2d', 'std': 0.01}
2023-08-08 16:30:39,108 - mmcv - INFO - rpn_conv.weight - torch.Size([256, 256, 3, 3]): NormalInit: mean=0, std=0.01, bias=0

2023-08-08 16:30:39,108 - mmcv - INFO - rpn_conv.bias - torch.Size([256]): NormalInit: mean=0, std=0.01, bias=0

2023-08-08 16:30:39,108 - mmcv - INFO - rpn_cls.weight - torch.Size([9, 256, 1, 1]): NormalInit: mean=0, std=0.01, bias=0

2023-08-08 16:30:39,108 - mmcv - INFO - rpn_cls.bias - torch.Size([9]): NormalInit: mean=0, std=0.01, bias=0

2023-08-08 16:30:39,108 - mmcv - INFO - rpn_reg.weight - torch.Size([36, 256, 1, 1]): NormalInit: mean=0, std=0.01, bias=0

2023-08-08 16:30:39,108 - mmcv - INFO - rpn_reg.bias - torch.Size([36]): NormalInit: mean=0, std=0.01, bias=0

2023-08-08 16:30:39,185 - mmcv - INFO - initialize Shared2FCBBoxHead with init_cfg [{'type': 'Normal', 'std': 0.01, 'override': {'name': 'fc_cls'}}, {'type': 'Normal', 'std': 0.001, 'override': {'name': 'fc_reg'}}, {'type': 'Xavier', 'distribution': 'uniform', 'override': [{'name': 'shared_fcs'}, {'name': 'cls_fcs'}, {'name': 'reg_fcs'}]}]
2023-08-08 16:30:39,236 - mmcv - INFO - bbox_head.fc_cls.weight - torch.Size([81, 1024]): NormalInit: mean=0, std=0.01, bias=0

2023-08-08 16:30:39,236 - mmcv - INFO - bbox_head.fc_cls.bias - torch.Size([81]): NormalInit: mean=0, std=0.01, bias=0

2023-08-08 16:30:39,236 - mmcv - INFO - bbox_head.fc_reg.weight - torch.Size([320, 1024]): NormalInit: mean=0, std=0.001, bias=0

2023-08-08 16:30:39,236 - mmcv - INFO - bbox_head.fc_reg.bias - torch.Size([320]): NormalInit: mean=0, std=0.001, bias=0

2023-08-08 16:30:39,236 - mmcv - INFO - bbox_head.shared_fcs.0.weight - torch.Size([1024, 12544]): XavierInit: gain=1, distribution=uniform, bias=0

2023-08-08 16:30:39,236 - mmcv - INFO - bbox_head.shared_fcs.0.bias - torch.Size([1024]): XavierInit: gain=1, distribution=uniform, bias=0

2023-08-08 16:30:39,236 - mmcv - INFO - bbox_head.shared_fcs.1.weight - torch.Size([1024, 1024]): XavierInit: gain=1, distribution=uniform, bias=0

2023-08-08 16:30:39,236 - mmcv - INFO - bbox_head.shared_fcs.1.bias - torch.Size([1024]): XavierInit: gain=1, distribution=uniform, bias=0

/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/anchor_head.py:116: UserWarning: DeprecationWarning: num_anchors is deprecated, for consistency or also use num_base_priors instead
  warnings.warn('DeprecationWarning: num_anchors is deprecated, '
2023-08-08 16:30:39,248 - mmcv - INFO - initialize CoATSSHead with init_cfg {'type': 'Normal', 'layer': 'Conv2d', 'std': 0.01, 'override': {'type': 'Normal', 'name': 'atss_cls', 'std': 0.01, 'bias_prob': 0.01}}
2023-08-08 16:30:39,255 - mmcv - INFO - cls_convs.0.conv.weight - torch.Size([256, 256, 3, 3]): NormalInit: mean=0, std=0.01, bias=0

2023-08-08 16:30:39,255 - mmcv - INFO - cls_convs.0.gn.weight - torch.Size([256]): The value is the same before and after calling init_weights of CoATSSHead

2023-08-08 16:30:39,255 - mmcv - INFO - cls_convs.0.gn.bias - torch.Size([256]): The value is the same before and after calling init_weights of CoATSSHead

2023-08-08 16:30:39,255 - mmcv - INFO - reg_convs.0.conv.weight - torch.Size([256, 256, 3, 3]): NormalInit: mean=0, std=0.01, bias=0

2023-08-08 16:30:39,255 - mmcv - INFO - reg_convs.0.gn.weight - torch.Size([256]): The value is the same before and after calling init_weights of CoATSSHead

2023-08-08 16:30:39,255 - mmcv - INFO - reg_convs.0.gn.bias - torch.Size([256]): The value is the same before and after calling init_weights of CoATSSHead

2023-08-08 16:30:39,255 - mmcv - INFO - atss_cls.weight - torch.Size([80, 256, 3, 3]): NormalInit: mean=0, std=0.01, bias=-4.59511985013459

2023-08-08 16:30:39,255 - mmcv - INFO - atss_cls.bias - torch.Size([80]): NormalInit: mean=0, std=0.01, bias=-4.59511985013459

2023-08-08 16:30:39,255 - mmcv - INFO - atss_reg.weight - torch.Size([4, 256, 3, 3]): NormalInit: mean=0, std=0.01, bias=0

2023-08-08 16:30:39,255 - mmcv - INFO - atss_reg.bias - torch.Size([4]): NormalInit: mean=0, std=0.01, bias=0

2023-08-08 16:30:39,255 - mmcv - INFO - atss_centerness.weight - torch.Size([1, 256, 3, 3]): NormalInit: mean=0, std=0.01, bias=0

2023-08-08 16:30:39,255 - mmcv - INFO - atss_centerness.bias - torch.Size([1]): NormalInit: mean=0, std=0.01, bias=0

2023-08-08 16:30:39,255 - mmcv - INFO - scales.0.scale - torch.Size([]): The value is the same before and after calling init_weights of CoATSSHead

2023-08-08 16:30:39,255 - mmcv - INFO - scales.1.scale - torch.Size([]): The value is the same before and after calling init_weights of CoATSSHead

2023-08-08 16:30:39,255 - mmcv - INFO - scales.2.scale - torch.Size([]): The value is the same before and after calling init_weights of CoATSSHead

2023-08-08 16:30:39,255 - mmcv - INFO - scales.3.scale - torch.Size([]): The value is the same before and after calling init_weights of CoATSSHead

2023-08-08 16:30:39,255 - mmcv - INFO - scales.4.scale - torch.Size([]): The value is the same before and after calling init_weights of CoATSSHead

2023-08-08 16:30:39,255 - mmcv - INFO - scales.5.scale - torch.Size([]): The value is the same before and after calling init_weights of CoATSSHead

load checkpoint from local path: /home/aigroup/chenzx/ws_internImage/code/Co-DETR/model/co_dino_5scale_swin_large_3x_coco.pth
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:423: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if W % self.patch_size[1] != 0:
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:425: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if H % self.patch_size[0] != 0:
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:362: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  Hp = int(np.ceil(H / self.window_size)) * self.window_size
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:363: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  Wp = int(np.ceil(W / self.window_size)) * self.window_size
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:203: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert L == H * W, "input feature has wrong size"
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:66: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  B = int(windows.shape[0] / (H * W / window_size / window_size))
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:241: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if pad_r > 0 or pad_b > 0:
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:272: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  assert L == H * W, "input feature has wrong size"
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:277: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  pad_input = (H % 2 == 1) or (W % 2 == 1)
/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/swin_transformer.py:278: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if pad_input:
============= Diagnostic Run torch.onnx.export version 2.0.0+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

Traceback (most recent call last):
  File "/home/aigroup/chenzx/ws_internImage/code/Co-DETR/tools/deployment/pytorch2onnx.py", line 320, in <module>
    pytorch2onnx(
  File "/home/aigroup/chenzx/ws_internImage/code/Co-DETR/tools/deployment/pytorch2onnx.py", line 90, in pytorch2onnx
    torch.onnx.export(
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/onnx/utils.py", line 506, in export
    _export(
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/onnx/utils.py", line 1548, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/onnx/utils.py", line 1113, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args)
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/onnx/utils.py", line 989, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/onnx/utils.py", line 893, in _trace_and_get_graph_from_model
    trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/jit/_trace.py", line 1274, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/jit/_trace.py", line 133, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/jit/_trace.py", line 124, in wrapper
    outs.append(self.inner(*trace_inputs))
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1488, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 119, in new_func
    return old_func(*args, **kwargs)
  File "/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/detectors/base.py", line 169, in forward
    return self.onnx_export(img[0], img_metas[0])
  File "/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/detectors/co_detr.py", line 382, in onnx_export
    outs = self.query_head(x)
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1488, in _slow_forward
    result = self.forward(*input, **kwargs)
TypeError: forward() missing 1 required positional argument: 'img_metas'

chenzx2 avatar Aug 08 '23 08:08 chenzx2

I have modified the onnx_export function; you can try it again.

TempleX98 avatar Aug 08 '23 08:08 TempleX98

I have modified the onnx_export function; you can try it again.

That problem is solved; there was a bug. But a new error appears:

/home/aigroup/chenzx/ws_internImage/bin/python3.8 /home/aigroup/chenzx/ws_internImage/code/Co-DETR/tools/deployment/pytorch2onnx.py
[... same MMCV warnings and weight-initialization log as in the first run ...]

load checkpoint from local path: /home/aigroup/chenzx/ws_internImage/code/Co-DETR/model/co_dino_5scale_swin_large_3x_coco.pth
[... same swin_transformer.py TracerWarnings as in the first run ...]
Traceback (most recent call last):
[... same torch.onnx.export call stack as in the first traceback, down to base.py ...]
  File "/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/detectors/base.py", line 169, in forward
    return self.onnx_export(img[0], img_metas[0])
  File "/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/detectors/co_detr.py", line 382, in onnx_export
    outs = self.query_head.forward_onnx(x, img_metas)[:2]
  File "/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/detr_head.py", line 724, in forward_onnx
    return multi_apply(self.forward_single_onnx, feats, img_metas_list)
  File "/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/core/utils/misc.py", line 30, in multi_apply
    return tuple(map(list, zip(*map_results)))
  File "/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/detr_head.py", line 753, in forward_single_onnx
    x = self.input_proj(x)
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'CoDINOHead' object has no attribute 'input_proj'
============= Diagnostic Run torch.onnx.export version 2.0.0+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

chenzx2 avatar Aug 08 '23 09:08 chenzx2

I fixed it like this: (screenshot attachment 企业微信截图_16914854619898, not shown)
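Since the screenshot is not reproduced here, the following is a hypothetical reconstruction of the edit, judging from the traceback in the next log: only the changed query_head call (co_detr.py line 382) is grounded in the trace; the rest of the method body is elided.

```python
# Hypothetical reconstruction of the fix to CoDETR.onnx_export in
# mmdet/models/detectors/co_detr.py; only the query_head call is taken
# from the traceback below, everything else is assumed.
def onnx_export(self, img, img_metas, with_nms=True):
    x = self.extract_feat(img)
    # old: outs = self.query_head(x)  # TypeError: forward() missing 'img_metas'
    outs = self.query_head.forward_onnx(x, img_metas)[:2]
    ...
```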

Here is my new problem:

/home/aigroup/chenzx/ws_internImage/bin/python3.8 /home/aigroup/chenzx/ws_internImage/code/Co-DETR/tools/deployment/pytorch2onnx.py
[... same MMCV warnings and weight-initialization log as in the first run ...]

load checkpoint from local path: /home/aigroup/chenzx/ws_internImage/code/Co-DETR/model/co_dino_5scale_swin_large_3x_coco.pth
The model and loaded state dict do not match exactly

missing keys in source state_dict: query_head.input_proj.weight, query_head.input_proj.bias, query_head.fc_cls.weight, query_head.fc_cls.bias, query_head.reg_ffn.layers.0.0.weight, query_head.reg_ffn.layers.0.0.bias, query_head.reg_ffn.layers.1.weight, query_head.reg_ffn.layers.1.bias, query_head.fc_reg.weight, query_head.fc_reg.bias

[... same swin_transformer.py TracerWarnings as in the first run ...]
/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/mmcv/cnn/bricks/wrappers.py:45: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
  if x.numel() == 0 and obsolete_torch_version(TORCH_VERSION, (1, 4)):
Traceback (most recent call last):
[... same torch.onnx.export call stack as in the first traceback, down to base.py ...]
  File "/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/detectors/base.py", line 169, in forward
    return self.onnx_export(img[0], img_metas[0])
  File "/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/detectors/co_detr.py", line 382, in onnx_export
    outs = self.query_head.forward_onnx(x, img_metas)[:2]
  File "/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/detr_head.py", line 724, in forward_onnx
    return multi_apply(self.forward_single_onnx, feats, img_metas_list)
  File "/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/core/utils/misc.py", line 30, in multi_apply
    return tuple(map(list, zip(*map_results)))
  File "/home/aigroup/chenzx/ws_internImage/code/Co-DETR/mmdet/models/dense_heads/detr_head.py", line 753, in forward_single_onnx
    x = self.input_proj(x)
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1488, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/mmcv/cnn/bricks/wrappers.py", line 59, in forward
    return super().forward(x)
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/aigroup/chenzx/ws_internImage/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [256, 2048, 1, 1], expected input[1, 256, 200, 304] to have 2048 channels, but got 256 channels instead
============= Diagnostic Run torch.onnx.export version 2.0.0+cu117 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

chenzx2 avatar Aug 08 '23 09:08 chenzx2

Is this problem solved?

jielanZhang avatar Sep 05 '23 14:09 jielanZhang

Is this problem solved?

Not solved yet. I see the project has been updated again; I will find time to update my copy and try again.

chenzx2 avatar Sep 06 '23 03:09 chenzx2

Is this problem solved?

Not solved yet. I see the project has been updated again; I will find time to update my copy and try again.

@TempleX98 After updating, this problem is still there TAT

return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [256, 2048, 1, 1], expected input[1, 256, 200, 304] to have 2048 channels, but got 256 channels instead

xueyingliu avatar Sep 06 '23 06:09 xueyingliu

@xueyingliu, @chenzx2, @jielanZhang, Hi, I am sorry that there are some unsolved torch2onnx issues. Our repo is built on the older mmdet v2.25, whose model-export feature is no longer maintained. Co-DETR has recently been incorporated into the official mmdet v3.x repo, and you can use that official implementation together with MMDeploy for model export.

TempleX98 avatar Sep 06 '23 06:09 TempleX98
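A minimal sketch of the MMDeploy route suggested above, using mmdeploy's Python API; the file paths and config names below are placeholders, not taken from this thread.

```python
# Minimal sketch of an MMDeploy export, assuming mmdeploy is installed and a
# mmdet v3.x Co-DINO config/checkpoint pair is available; paths are placeholders.
from mmdeploy.apis import torch2onnx

torch2onnx(
    img='demo/demo.jpg',                      # sample image used for tracing
    work_dir='work_dirs/codetr_onnx',
    save_file='end2end.onnx',
    deploy_cfg='mmdeploy/configs/mmdet/detection/detection_onnxruntime_dynamic.py',
    model_cfg='path/to/co_dino_config.py',
    model_checkpoint='path/to/co_dino_checkpoint.pth',
    device='cuda:0')
```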

@TempleX98 I have tried your suggestion but encountered some problems. When checking the mmdeploy documentation, I found that Co-DETR is not supported; here is the list of supported models: https://github.com/open-mmlab/mmdeploy/blob/main/docs/en/03-benchmark/supported_models.md

MarouaneMja avatar Sep 15 '23 12:09 MarouaneMja

Hello,

Any updates on exporting the model to ONNX, please?

MarouaneMja avatar Oct 05 '23 08:10 MarouaneMja

@MarouaneMja Hi, I found that this PR (https://github.com/open-mmlab/mmdetection/pull/10910) for mmdet v3.x supports ONNX export; I hope it helps.

TempleX98 avatar Oct 05 '23 12:10 TempleX98

Hi @TempleX98 , thank you I will look it up

MarouaneMja avatar Oct 09 '23 08:10 MarouaneMja

Any updates? Does MMDeploy support it already?

Mayyyybe avatar Nov 08 '23 08:11 Mayyyybe

@Mayyyybe The inference architecture of Co-DINO is the same as DINO's, and MMDeploy supports model export for DINO.

TempleX98 avatar Nov 08 '23 13:11 TempleX98

Hi @TempleX98, I managed to export Co-DETR to ONNX using mmdeploy as you suggested. However, SoftNonMaxSuppression is not supported by onnxruntime (failed:Fatal error: SoftNonMaxSuppression is not a registered function/op). Do you have any suggestions for bypassing this problem?

MarouaneMja avatar Nov 10 '23 12:11 MarouaneMja

Hi @TempleX98, I managed to export Co-DETR to ONNX using mmdeploy as you suggested. However, SoftNonMaxSuppression is not supported by onnxruntime (failed:Fatal error: SoftNonMaxSuppression is not a registered function/op). Do you have any suggestions for bypassing this problem?

Just remove the NMS operation in the config.

TempleX98 avatar Nov 10 '23 13:11 TempleX98
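As an illustration of this suggestion: dropping the NMS entry from the model's test-time config keeps the SoftNonMaxSuppression op out of the exported graph. The snippet below is a sketch in mmdet config style; the exact nesting in the Co-DINO config may differ.

```python
# Sketch: remove soft-NMS from test_cfg before export, then apply NMS as a
# post-processing step at inference time. Field names are illustrative of
# mmdet-style configs, not copied from the exact Co-DINO file.
model = dict(
    test_cfg=dict(
        max_per_img=300,
        # nms=dict(type='soft_nms', iou_threshold=0.8),  # removed for ONNX export
    ))
```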

Thank you for your help @TempleX98, it worked. However, when I launch inference with Triton Inference Server, the ONNX model takes more GPU memory than a simple Python backend, which is very strange lolll

MarouaneMja avatar Nov 10 '23 16:11 MarouaneMja

Hi, I managed to export Co-DETR to ONNX using mmdeploy as you suggested, but SoftNonMaxSuppression is not supported by onnxruntime (failed:Fatal error: SoftNonMaxSuppression is not a registered function/op). Do you have any suggestions for bypassing this problem?

Hello, I am also trying to export Co-DETR to ONNX using mmdeploy. May I ask which model_cfg you are using: https://github.com/RunningLeon/mmdetection/blob/support_dino_onnx/projects/CO-DETR/configs/codino/co_dino_5scale_swin_l_16xb1_16e_o365tococo.py or https://github.com/Sense-X/Co-DETR/blob/main/projects/configs/co_dino/co_dino_5scale_swin_large_16e_o365tococo.py?
And could you please share your deploy_cfg so I can learn from it? I would be very grateful for your help!

xinlin-xiao avatar Nov 15 '23 07:11 xinlin-xiao

@xinlin-xiao I am using the first config, https://github.com/RunningLeon/mmdetection/blob/support_dino_onnx/projects/CO-DETR/configs/codino/co_dino_5scale_swin_l_16xb1_16e_o365tococo.py; for the deploy config I am using "detection_onnxruntime_dynamic.py" from mmdeploy.

MarouaneMja avatar Nov 15 '23 09:11 MarouaneMja

@xinlin-xiao I am using the first config, https://github.com/RunningLeon/mmdetection/blob/support_dino_onnx/projects/CO-DETR/configs/codino/co_dino_5scale_swin_l_16xb1_16e_o365tococo.py; for the deploy config I am using "detection_onnxruntime_dynamic.py" from mmdeploy.

Thank you for your help! But when I try to export Co-DETR to ONNX using mmdeploy like this:

python3 /root/workspace/mmdeploy/tools/deploy.py \
    /root/workspace/mmdeploy/configs/mmdet/detection/detection_onnxruntime_dynamic.py \
    /mnt/data/train-yolov7-images/pytorch2onn-mmdeploy/co_dino-new/co_dino_5scale_swin_l_16xb1_16e_o365tococo.py \
    /mnt/data/train-yolov7-images/pytorch2onn-mmdeploy/co_dino_5scale_swin_large_16e_o365tococo.pth \
    /root/workspace/mmdetection-support_dino_onnx/demo/demo.jpg \
    --work-dir /mnt/data/train-yolov7-images/pytorch2onn-mmdeploy \
    --device cuda \
    --dump-info

it reports an error:

============= Diagnostic Run torch.onnx.export version 2.0.0+cu118 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================

Process Process-2:
Traceback (most recent call last):
  File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/root/workspace/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/root/workspace/mmdeploy/mmdeploy/apis/pytorch2onnx.py", line 98, in torch2onnx
    export(
  File "/root/workspace/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 356, in wrap
    return self.call_function(func_name, *args, **kwargs)
  File "/root/workspace/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 326, in call_function
    return self.call_function_local(func_name, *args, **kwargs)
  File "/root/workspace/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 275, in call_function_local
    return pipe_caller(*args, **kwargs)
  File "/root/workspace/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/root/workspace/mmdeploy/mmdeploy/apis/onnx/export.py", line 138, in export
    torch.onnx.export(
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 506, in export
    _export(
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 1548, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/root/workspace/mmdeploy/mmdeploy/apis/onnx/optimizer.py", line 27, in model_to_graph__custom_optimizer
    graph, params_dict, torch_out = ctx.origin_func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 1113, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args)
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 989, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 893, in _trace_and_get_graph_from_model
    trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
  File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 1268, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 127, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 121, in wrapper
    out_vars, _ = _flatten(outs)
RuntimeError: Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. Here, received an input of unsupported type: DetDataSample
11/16 16:55:21 - mmengine - ERROR - /root/workspace/mmdeploy/mmdeploy/apis/core/pipeline_manager.py - pop_mp_output - 80 - mmdeploy.apis.pytorch2onnx.torch2onnx with Call id: 0 failed. exit.

The "co_dino_5scale_swin_large_16e_o365tococo.pth" checkpoint is from https://github.com/Sense-X/Co-DETR/tree/main (screenshot of the model zoo table not reproduced here).

Could you please tell me which checkpoint you use? I also found someone using "model_checkpoint = 'checkpoints/co_dino_5scale_r50_lsj_8xb2_1x_coco-69a72d67.pth'" in https://github.com/open-mmlab/mmdetection/issues/11011, and it returns the same error! So how did you manage to export Co-DETR to ONNX with mmdeploy without hitting SoftNonMaxSuppression? I would be very grateful for your help!

xinlin-xiao avatar Nov 16 '23 09:11 xinlin-xiao

Just remove the soft_nms from the config, and you can add it later

MarouaneMja avatar Nov 20 '23 14:11 MarouaneMja

Just remove the soft_nms from the config, and you can add it later

Do you have any suggestions on how to fix this error? RuntimeError: Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. Here, received an input of unsupported type: DetDataSample

xinlin-xiao avatar Nov 20 '23 15:11 xinlin-xiao

Yes, DetDataSample is not supported by JIT; you have to convert the final output to a tuple format. Try running a simple DETR first to get an idea of what the output looks like, then do the same.

MarouaneMja avatar Nov 21 '23 09:11 MarouaneMja
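One way to follow this advice is to inspect a plain DETR's output with mmdet v3.x's inference API; the sketch below uses the standard init_detector/inference_detector entry points, with placeholder config and checkpoint paths.

```python
# Inspect what a mmdet v3.x detector returns (a DetDataSample) to see which
# fields an export wrapper must unpack into plain tensors. Paths are placeholders.
from mmdet.apis import init_detector, inference_detector

model = init_detector('path/to/detr_config.py',
                      'path/to/detr_checkpoint.pth', device='cuda:0')
result = inference_detector(model, 'demo/demo.jpg')
inst = result.pred_instances
print(inst.bboxes.shape, inst.scores.shape, inst.labels.shape)
```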

@MarouaneMja

Yes, DetDataSample is not supported by JIT; you have to convert the final output to a tuple format. Try running a simple DETR first to get an idea of what the output looks like, then do the same.

I found that the model output is:

out:[<DetDataSample(

META INFORMATION
img_path: '/root/workspace/mmdetection-support_dino_onnx/demo/demo.jpg'
scale_factor: (2.9984375, 2.997658079625293)
img_shape: (1280, 1919)
batch_input_shape: (1280, 1919)
ori_shape: (427, 640)
img_id: 0
pad_shape: (1280, 1919)

DATA FIELDS
pred_instances: <InstanceData(
    
        META INFORMATION
    
        DATA FIELDS
        scores: tensor([0.1154, 0.1098, 0.1047, 0.0994, 0.0982, 0.0867, 0.0844, 0.0714, 0.0704,
                    0.0603, 0.0577, 0.0564, 0.0543, 0.0538, 0.0520, 0.0511, 0.0509, 0.0509,
                    0.0502, 0.0498, 0.0495, 0.0491, 0.0461, 0.0460, 0.0440, 0.0434, 0.0432,
                    0.0425, 0.0422, 0.0419, 0.0409, 0.0407, 0.0407, 0.0407, 0.0391, 0.0390,
                    0.0381, 0.0375, 0.0372, 0.0372, 0.0369, 0.0367, 0.0367, 0.0367, 0.0362,
                    0.0358, 0.0355, 0.0350, 0.0349, 0.0349, 0.0347, 0.0347, 0.0345, 0.0342,
                    0.0341, 0.0340, 0.0340, 0.0339, 0.0339, 0.0338, 0.0332, 0.0332, 0.0331,
                    0.0330, 0.0327, 0.0326, 0.0326, 0.0326, 0.0325, 0.0324, 0.0322, 0.0322,
                    0.0318, 0.0317, 0.0316, 0.0315, 0.0314, 0.0312, 0.0312, 0.0310, 0.0305,
                    0.0304, 0.0304, 0.0304, 0.0299, 0.0299, 0.0299, 0.0298, 0.0298, 0.0297,
                    0.0296, 0.0295, 0.0293, 0.0291, 0.0290, 0.0290, 0.0290, 0.0290, 0.0289,
                    0.0287, 0.0285, 0.0284, 0.0283, 0.0282, 0.0281, 0.0281, 0.0281, 0.0280,
                    0.0280, 0.0280, 0.0279, 0.0279, 0.0277, 0.0277, 0.0276, 0.0275, 0.0274,
                    0.0274, 0.0272, 0.0271, 0.0271, 0.0269, 0.0266, 0.0266, 0.0265, 0.0265,
                    0.0263, 0.0263, 0.0262, 0.0262, 0.0260, 0.0260, 0.0260, 0.0260, 0.0260,
                    0.0259, 0.0258, 0.0256, 0.0256, 0.0255, 0.0255, 0.0255, 0.0254, 0.0253,
                    0.0251, 0.0250, 0.0248, 0.0246, 0.0246, 0.0246, 0.0246, 0.0246, 0.0244,
                    0.0244, 0.0244, 0.0244, 0.0244, 0.0243, 0.0243, 0.0243, 0.0243, 0.0242,
                    0.0241, 0.0236, 0.0235, 0.0235, 0.0234, 0.0234, 0.0234, 0.0233, 0.0233,
                    0.0233, 0.0232, 0.0232, 0.0231, 0.0231, 0.0231, 0.0230, 0.0230, 0.0230,
                    0.0230, 0.0230, 0.0228, 0.0228, 0.0227, 0.0226, 0.0226, 0.0225, 0.0225,
                    0.0225, 0.0223, 0.0222, 0.0221, 0.0220, 0.0220, 0.0219, 0.0219, 0.0218,
                    0.0218, 0.0216, 0.0216, 0.0216, 0.0215, 0.0214, 0.0214, 0.0214, 0.0213,
                    0.0213, 0.0213, 0.0213, 0.0213, 0.0212, 0.0212, 0.0212, 0.0212, 0.0211,
                    0.0209, 0.0209, 0.0208, 0.0208, 0.0208, 0.0208, 0.0208, 0.0208, 0.0207,
                    0.0207, 0.0207, 0.0207, 0.0207, 0.0206, 0.0205, 0.0205, 0.0205, 0.0205,
                    0.0204, 0.0204, 0.0068, 0.0067, 0.0056, 0.0051, 0.0048, 0.0048, 0.0046,
                    0.0041, 0.0040, 0.0037, 0.0036, 0.0033, 0.0030, 0.0030, 0.0028, 0.0028,
                    0.0027, 0.0022, 0.0021, 0.0016, 0.0015, 0.0014, 0.0014, 0.0013, 0.0013,
                    0.0012, 0.0012, 0.0011, 0.0010], device='cuda:0')
        bboxes: tensor([[575.6778, 238.4051, 640.0000, 377.5103],
                    [  6.4190,   1.7029, 638.2518, 377.3401],
                    [402.1796, 282.1749, 640.0000, 426.7463],
                    ...,
                    [  0.9140,   0.0000, 640.0000, 249.7452],
                    [  0.0000, 206.1276, 422.8578, 427.0000],
                    [  0.9140,   0.0000, 640.0000, 249.7452]], device='cuda:0')
        labels: tensor([39,  0,  0, 34,  0,  0, 69, 60, 41, 72, 79, 39, 71, 34, 39, 60, 60, 39,
                    39, 34,  0,  0,  0, 56, 39, 41, 43, 71, 34, 44, 39, 34, 69,  0,  0, 39,
                    41, 17, 39, 76,  0,  0, 34, 39, 79, 79,  0, 72, 12, 39, 39, 34, 39, 79,
                    44, 39, 39, 44,  0, 39, 41, 41, 34, 44, 45,  0, 78, 39, 79, 60, 34,  0,
                    41, 32, 39, 69, 39, 71,  0, 79,  0, 39,  0, 44, 55,  0, 56, 39, 34, 78,
                    71,  4, 44, 38, 34, 72, 43, 60, 72,  0, 12, 34, 39, 39, 45,  0,  0, 62,
                    61,  0,  0, 61, 34, 34, 69, 39, 72, 79,  0, 34,  0, 78, 67,  0, 72, 34,
                    41, 39,  0, 70,  0, 71, 40,  0, 41, 44, 60,  0, 79,  0, 39,  0, 38, 34,
                     0, 71, 79, 39,  0,  0,  0, 29, 45,  0, 53, 34, 12,  0, 60, 67, 79, 34,
                     0,  0, 78, 39, 34, 45, 43, 40, 44,  6, 62, 79,  0, 78,  0,  0, 67, 71,
                    68,  0, 59, 32, 41,  0,  0, 39, 44, 12, 56, 44, 39, 41, 39, 39, 12, 56,
                    34, 69, 71, 40, 15, 38, 79, 60, 44, 72, 79, 44, 56, 44,  0, 40, 12, 72,
                    39, 60, 39,  0, 79,  0,  0, 44, 79, 72, 38, 79, 40, 41, 34, 13, 43, 79,
                    41, 79, 34, 60, 39,  0,  0,  0, 39, 44,  0, 34, 39, 39,  0, 39,  0,  0,
                    39, 34, 69,  0, 34, 41, 60,  0, 79, 44, 71, 60, 17], device='cuda:0')
    ) at 0x7fb8680766d0>
ignored_instances: <InstanceData(
    
        META INFORMATION
    
        DATA FIELDS
    ) at 0x7fb82c7c1820>
gt_instances: <InstanceData(
    
        META INFORMATION
    
        DATA FIELDS
    ) at 0x7fb82c7c1850>

) at 0x7fb82c7c18e0>]

But I do not know what the right output should look like; trying to use torch.jit.script to keep only the labels, bboxes, and scores tensors in pred_instances also returns errors. If you succeeded in exporting Co-DETR to ONNX or TorchScript, could you please share your modified code and the expected output to help me fix the errors? I would be very grateful if you could help me! This problem has been bothering me for a long time.

xinlin-xiao avatar Nov 22 '23 11:11 xinlin-xiao

Just remove the soft_nms from the config; you can add it back later

Hello, I can't find soft_nms in the config file.

lzxpaipai avatar Jan 02 '24 11:01 lzxpaipai

Hi! I figured out how to convert Co-DINO to ONNX and would like to share it with you guys. I am using a model trained with mmdet v3.3.0, together with mmdeploy v1.3.1 and onnxruntime 1.16.3. I found that a model trained with the official Co-DETR repo (mmdet 2.25) requires a lot of tinkering, as the backbone (SwinTransformer v1 vs. v2) is slightly different, and the preprocessing functions and some utility functions required for inference differ as well. So, if you train a new model from the mmdet v3.3.0 repo, I think you will be able to export it to ONNX.

To solve this issue:

RuntimeError:Only tuples, lists and Variables are supported as JIT inputs/outputs. Dictionaries and strings are also accepted, but their usage is not recommended. Here, received an input of unsupported type: DetDataSample

I modified `...(your-env-path)/site-packages/torch/jit/_trace.py`: at line 125, I added a try/except as a temporary fix. I use the tuple format so that the code doesn't break later during visualization, following the same format as the plain DINO export to ONNX. The change looks like this:

try:
    # Normal case: unpack the DetDataSample returned by the model into
    # plain tensors before handing them to the tracer.
    result_ = self.inner(*trace_inputs)
    result = result_[0]
    scores = result.pred_instances.scores
    bboxes = result.pred_instances.bboxes
    labels = result.pred_instances.labels

    # Combine bboxes and scores into the DINO format: (N, 5) = (x1, y1, x2, y2, score)
    combined_tensor = torch.cat((bboxes, scores.unsqueeze(1)), dim=1)
    formatted_labels = labels.unsqueeze(0).long()  # retain integer format

    # Emit a (bboxes_with_scores, labels) tuple, matching the plain DINO export
    formatted_result = (combined_tensor.unsqueeze(0), formatted_labels)
    outs.append(formatted_result)
except Exception:
    # Fallback: keep the original behaviour of this line (reusing the result
    # so the model does not run twice)
    out = self.inner(*trace_inputs)
    print(out)
    outs.append(out)

Also, we need to remove the soft_nms operation; just comment that section out in your config file, like below:

   ...
  test_cfg=[
        dict(
            max_per_img=300, 
            # nms=dict(iou_threshold=0.8, type='soft_nms')
            ),
  ...
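If you need the suppression back after export, a rough post-processing sketch (using torchvision.ops.nms as a stand-in for mmcv's soft_nms; the function name and thresholds are illustrative) could be:

from torchvision.ops import nms

# Re-apply plain NMS on the exported model's outputs, since soft_nms was
# stripped from test_cfg before export. iou_threshold mirrors the config above.
def postprocess(bboxes, scores, labels, iou_threshold=0.8, max_per_img=300):
    keep = nms(bboxes, scores, iou_threshold)[:max_per_img]  # indices sorted by score
    return bboxes[keep], scores[keep], labels[keep]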

Hope this helps! Thank you! :)

bibekyess avatar Jan 10 '24 02:01 bibekyess

Hi! I figured out how to convert Co-DINO to ONNX and would like to share it with you guys. [...]

cool, I will try

chenzx2 avatar Jan 10 '24 02:01 chenzx2


Can modifying `...(your-env-path)/site-packages/torch/jit/_trace.py` with the try/except snippet above also solve this error when using pytorch2torchscript? `RuntimeError: Tracer cannot infer type of [<DetDataSample(

META INFORMATION
scale_factor: (2.9984375, 2.997658079625293)
img_id: 0
ori_shape: (427, 640)
batch_input_shape: (1280, 1919)
img_shape: (1280, 1919)
img_path: '/mnt/data/train-yolov7-images/pytorch2onn-mmdeploy/mmdetection-support_dino_onnx/mmdetection-support_dino_onnx/demo/demo.jpg'
pad_shape: (1280, 1919)

DATA FIELDS
ignored_instances: <InstanceData(
    
        META INFORMATION
    
        DATA FIELDS
    ) at 0x7f2ef4d998b0>
pred_instances: <InstanceData(
    
        META INFORMATION
    
        DATA FIELDS
        labels: tensor([ 0,  0,  0,  0,  0, 25,  0,  0,  0,  0, 16,  2,  2, 25,  2, 25, 25,  0,
                    16, 29,  0, 14,  2,  0,  2, 16,  0,  0, 16, 16, 10, 25, 25, 14, 25,  2,
                     0, 25,  2,  2, 37, 58, 25, 14, 10, 14,  0,  2, 58, 26, 10, 25,  2, 25,
                     2,  2, 58, 25, 10, 25, 58,  3,  2, 33,  0, 25, 58, 58,  0,  0,  0,  9,
                     0, 56, 26, 14,  0, 25,  0,  2, 25,  2, 17, 25, 25, 25, 25, 11, 25,  0,
                     0, 58,  0,  2, 14, 58,  0, 25, 25, 29,  2, 25,  2, 58,  2, 40,  2,  3,
                    26,  3,  9, 38, 25,  9, 25,  3, 33,  0,  0, 17, 55, 11, 10, 58, 58, 16,
                    16, 16,  2, 56, 36, 26, 58, 17, 17, 56,  0, 35,  0, 12, 12, 25,  2,  2,
                    25, 77,  8,  9, 46, 36,  4, 36, 25,  2, 26,  2,  9,  0,  0, 15, 25,  0,
                    25,  2, 14, 33,  0,  0, 10,  8, 60, 33, 25, 58, 13, 26,  0,  0,  0,  0,
                     0,  0, 25,  2, 25,  2, 58,  0,  2, 25, 25, 14,  2, 14, 16, 25, 25, 26,
                     0,  0,  0,  2,  0,  9, 58,  3, 26, 25, 17, 58,  2,  2, 56, 25, 14,  2,
                     0, 25, 16, 10, 25,  2, 58, 16, 25, 25, 58, 25,  2,  0,  0,  2,  0, 25,
                     2,  0,  0])
        scores: tensor([0.2160, 0.1991, 0.1698, 0.1452, 0.1358, 0.1283, 0.0971, 0.0791, 0.0758,
                    0.0723, 0.0697, 0.0694, 0.0666, 0.0625, 0.0615, 0.0583, 0.0566, 0.0557,
                    0.0551, 0.0535, 0.0527, 0.0525, 0.0512, 0.0511, 0.0509, 0.0474, 0.0474,
                    0.0470, 0.0468, 0.0457, 0.0454, 0.0446, 0.0443, 0.0439, 0.0435, 0.0432,
                    0.0431, 0.0429, 0.0428, 0.0419, 0.0400, 0.0399, 0.0398, 0.0397, 0.0396,
                    0.0393, 0.0390, 0.0386, 0.0383, 0.0379, 0.0377, 0.0376, 0.0373, 0.0369,
                    0.0359, 0.0354, 0.0351, 0.0351, 0.0347, 0.0342, 0.0341, 0.0340, 0.0337,
                    0.0336, 0.0335, 0.0334, 0.0333, 0.0331, 0.0331, 0.0323, 0.0323, 0.0320,
                    0.0317, 0.0312, 0.0312, 0.0310, 0.0309, 0.0305, 0.0302, 0.0301, 0.0300,
                    0.0299, 0.0299, 0.0297, 0.0297, 0.0296, 0.0294, 0.0292, 0.0292, 0.0291,
                    0.0290, 0.0290, 0.0289, 0.0287, 0.0287, 0.0286, 0.0286, 0.0285, 0.0283,
                    0.0282, 0.0282, 0.0282, 0.0281, 0.0281, 0.0280, 0.0278, 0.0276, 0.0276,
                    0.0276, 0.0275, 0.0271, 0.0270, 0.0269, 0.0269, 0.0269, 0.0268, 0.0267,
                    0.0266, 0.0266, 0.0263, 0.0263, 0.0262, 0.0258, 0.0258, 0.0257, 0.0256,
                    0.0255, 0.0254, 0.0253, 0.0249, 0.0249, 0.0248, 0.0247, 0.0246, 0.0244,
                    0.0243, 0.0242, 0.0242, 0.0241, 0.0240, 0.0240, 0.0239, 0.0239, 0.0237,
                    0.0237, 0.0235, 0.0235, 0.0235, 0.0235, 0.0232, 0.0232, 0.0229, 0.0229,
                    0.0229, 0.0228, 0.0228, 0.0227, 0.0227, 0.0226, 0.0224, 0.0224, 0.0223,
                    0.0222, 0.0222, 0.0222, 0.0221, 0.0220, 0.0220, 0.0219, 0.0219, 0.0219,
                    0.0218, 0.0218, 0.0218, 0.0218, 0.0217, 0.0217, 0.0194, 0.0167, 0.0114,
                    0.0099, 0.0099, 0.0088, 0.0084, 0.0072, 0.0069, 0.0064, 0.0060, 0.0056,
                    0.0055, 0.0052, 0.0052, 0.0051, 0.0049, 0.0049, 0.0048, 0.0044, 0.0043,
                    0.0043, 0.0042, 0.0042, 0.0041, 0.0039, 0.0039, 0.0038, 0.0037, 0.0037,
                    0.0037, 0.0036, 0.0035, 0.0035, 0.0034, 0.0034, 0.0032, 0.0032, 0.0031,
                    0.0031, 0.0030, 0.0029, 0.0029, 0.0028, 0.0028, 0.0028, 0.0026, 0.0025,
                    0.0025, 0.0023, 0.0023, 0.0023, 0.0021, 0.0020, 0.0016, 0.0016, 0.0015,
                    0.0013, 0.0011, 0.0010])
        bboxes: tensor([[1.0793e+00, 7.7588e+01, 6.4000e+02, 4.2700e+02],
                    [3.9076e+02, 8.4498e+01, 6.3985e+02, 4.1969e+02],
                    [1.6500e+02, 7.7726e+01, 6.4000e+02, 4.2700e+02],
                    [3.3831e+02, 7.2650e+01, 6.4000e+02, 4.2263e+02],
                    [0.0000e+00, 7.2158e+01, 3.8210e+02, 4.2700e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [0.0000e+00, 6.0318e+01, 2.8487e+02, 4.2700e+02],
                    [2.6185e+02, 7.1847e+01, 6.4000e+02, 4.2432e+02],
                    [2.9458e+02, 1.4595e+02, 6.4000e+02, 4.2700e+02],
                    [0.0000e+00, 5.4273e+01, 2.9559e+02, 3.2202e+02],
                    [1.6500e+02, 7.7726e+01, 6.4000e+02, 4.2700e+02],
                    [3.2040e+02, 6.6847e+01, 6.4000e+02, 3.8979e+02],
                    [2.8250e+02, 4.3829e+01, 6.3907e+02, 4.2441e+02],
                    [3.2040e+02, 6.6847e+01, 6.4000e+02, 3.8979e+02],
                    [3.4563e+02, 1.4659e+01, 6.3662e+02, 3.9578e+02],
                    [4.7329e+00, 4.1630e+01, 6.3977e+02, 3.9011e+02],
                    [1.6500e+02, 7.7726e+01, 6.4000e+02, 4.2700e+02],
                    [2.1382e+02, 1.6523e+02, 6.4000e+02, 4.2700e+02],
                    [1.0793e+00, 7.7588e+01, 6.4000e+02, 4.2700e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [2.1120e+02, 1.2138e+02, 6.3890e+02, 3.8925e+02],
                    [1.6500e+02, 7.7726e+01, 6.4000e+02, 4.2700e+02],
                    [2.4776e+02, 1.5199e+01, 6.4000e+02, 3.9079e+02],
                    [0.0000e+00, 4.1204e+01, 3.8788e+02, 3.6426e+02],
                    [4.6684e+02, 7.1337e+01, 6.4000e+02, 4.2491e+02],
                    [3.9076e+02, 8.4498e+01, 6.3985e+02, 4.1969e+02],
                    [0.0000e+00, 1.6564e+02, 5.1611e+02, 4.2700e+02],
                    [0.0000e+00, 1.5713e+02, 3.6014e+02, 4.2700e+02],
                    [3.3831e+02, 7.2650e+01, 6.4000e+02, 4.2263e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [1.6500e+02, 7.7726e+01, 6.4000e+02, 4.2700e+02],
                    [0.0000e+00, 8.2242e-01, 3.9962e+02, 3.2408e+02],
                    [0.0000e+00, 5.4273e+01, 2.9559e+02, 3.2202e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [3.2242e+02, 7.0608e+01, 6.4000e+02, 2.5587e+02],
                    [1.0793e+00, 7.7588e+01, 6.4000e+02, 4.2700e+02],
                    [1.0147e+01, 1.5368e+02, 6.4000e+02, 4.2700e+02],
                    [0.0000e+00, 4.1166e+01, 4.4898e+02, 3.7798e+02],
                    [5.6229e+02, 4.6781e+01, 6.4000e+02, 4.2581e+02],
                    [3.9076e+02, 8.4498e+01, 6.3985e+02, 4.1969e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [0.0000e+00, 6.9887e+00, 2.8779e+02, 4.1326e+02],
                    [0.0000e+00, 3.9866e+01, 2.5395e+02, 3.0416e+02],
                    [3.2040e+02, 6.6847e+01, 6.4000e+02, 3.8979e+02],
                    [3.9076e+02, 8.4498e+01, 6.3985e+02, 4.1969e+02],
                    [3.9076e+02, 8.4498e+01, 6.3985e+02, 4.1969e+02],
                    [0.0000e+00, 5.4554e+01, 5.3191e+02, 4.1821e+02],
                    [0.0000e+00, 6.9887e+00, 2.8779e+02, 4.1326e+02],
                    [1.0793e+00, 7.7588e+01, 6.4000e+02, 4.2700e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [1.0793e+00, 7.7588e+01, 6.4000e+02, 4.2700e+02],
                    [1.6044e+02, 5.3330e+01, 5.0755e+02, 3.8331e+02],
                    [1.6500e+02, 7.7726e+01, 6.4000e+02, 4.2700e+02],
                    [2.8250e+02, 4.3829e+01, 6.3907e+02, 4.2441e+02],
                    [2.5501e+01, 6.2984e+00, 6.4000e+02, 3.8557e+02],
                    [0.0000e+00, 8.2242e-01, 3.9962e+02, 3.2408e+02],
                    [0.0000e+00, 5.4273e+01, 2.9559e+02, 3.2202e+02],
                    [3.9076e+02, 8.4498e+01, 6.3985e+02, 4.1969e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [0.0000e+00, 5.4554e+01, 5.3191e+02, 4.1821e+02],
                    [0.0000e+00, 7.2158e+01, 3.8210e+02, 4.2700e+02],
                    [1.0793e+00, 7.7588e+01, 6.4000e+02, 4.2700e+02],
                    [2.1382e+02, 1.6523e+02, 6.4000e+02, 4.2700e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [0.0000e+00, 1.3510e+02, 3.0765e+02, 4.1102e+02],
                    [1.5868e+02, 2.7586e+01, 6.4000e+02, 3.8983e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [1.6500e+02, 7.7726e+01, 6.4000e+02, 4.2700e+02],
                    [4.2014e+02, 1.6466e+02, 6.4000e+02, 3.7918e+02],
                    [3.8734e+02, 1.7227e+02, 6.4000e+02, 4.2472e+02],
                    [0.0000e+00, 3.9866e+01, 2.5395e+02, 3.0416e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [4.4463e-01, 0.0000e+00, 1.0380e+02, 4.2604e+02],
                    [3.2040e+02, 6.6847e+01, 6.4000e+02, 3.8979e+02],
                    [3.2040e+02, 6.6847e+01, 6.4000e+02, 3.8979e+02],
                    [1.0793e+00, 7.7588e+01, 6.4000e+02, 4.2700e+02],
                    [1.6612e+00, 1.8108e+02, 1.3982e+02, 4.2700e+02],
                    [0.0000e+00, 1.2050e+00, 3.0532e+02, 2.0350e+02],
                    [0.0000e+00, 1.5517e+02, 2.3322e+02, 4.2700e+02],
                    [3.8747e+00, 0.0000e+00, 6.4000e+02, 2.2986e+02],
                    [1.6039e+02, 7.8185e+01, 4.4346e+02, 2.8939e+02],
                    [0.0000e+00, 5.4273e+01, 2.9559e+02, 3.2202e+02],
                    [1.6500e+02, 7.7726e+01, 6.4000e+02, 4.2700e+02],
                    [0.0000e+00, 6.9887e+00, 2.8779e+02, 4.1326e+02],
                    [7.7882e+01, 1.5648e+01, 4.9433e+02, 3.7667e+02],
                    [3.8785e+02, 8.7845e+01, 5.7135e+02, 2.0960e+02],
                    [0.0000e+00, 6.2847e-01, 2.2725e+02, 1.3132e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [0.0000e+00, 0.0000e+00, 5.2475e+02, 1.3465e+02],
                    [2.0440e+02, 3.4097e+01, 6.4000e+02, 3.8894e+02],
                    [5.7069e+02, 1.0939e+02, 5.8223e+02, 1.1748e+02],
                    [0.0000e+00, 4.1204e+01, 3.8788e+02, 3.6426e+02],
                    [0.0000e+00, 8.3078e+00, 2.6751e+02, 3.6747e+02],
                    [1.5868e+02, 2.7586e+01, 6.4000e+02, 3.8983e+02],
                    [2.6185e+02, 7.1847e+01, 6.4000e+02, 4.2432e+02],
                    [3.9076e+02, 8.4498e+01, 6.3985e+02, 4.1969e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [3.5561e+00, 0.0000e+00, 6.4000e+02, 2.4098e+02],
                    [1.0667e+02, 6.9062e+01, 4.2385e+02, 3.6903e+02],
                    [3.2040e+02, 6.6847e+01, 6.4000e+02, 3.8979e+02],
                    [6.2002e+01, 0.0000e+00, 6.2688e+02, 1.4552e+02],
                    [4.1718e+00, 4.1516e-01, 1.7336e+02, 1.3076e+02],
                    [0.0000e+00, 0.0000e+00, 5.5502e+02, 1.4696e+02],
                    [3.3831e+02, 7.2650e+01, 6.4000e+02, 4.2263e+02],
                    [4.4757e+02, 2.5681e+01, 6.4000e+02, 4.2443e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [1.6500e+02, 7.7726e+01, 6.4000e+02, 4.2700e+02],
                    [1.6500e+02, 7.7726e+01, 6.4000e+02, 4.2700e+02],
                    [3.3831e+02, 7.2650e+01, 6.4000e+02, 4.2263e+02],
                    [3.2040e+02, 6.6847e+01, 6.4000e+02, 3.8979e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [3.3409e+02, 6.9283e+01, 5.2455e+02, 3.9823e+02],
                    [3.9076e+02, 8.4498e+01, 6.3985e+02, 4.1969e+02],
                    [2.3454e+02, 8.3355e+01, 6.4000e+02, 3.8414e+02],
                    [3.9076e+02, 8.4498e+01, 6.3985e+02, 4.1969e+02],
                    [1.6500e+02, 7.7726e+01, 6.4000e+02, 4.2700e+02],
                    [1.7801e+02, 2.0820e+02, 6.4000e+02, 4.2700e+02],
                    [4.0794e+02, 5.2874e+01, 6.4000e+02, 3.9537e+02],
                    [3.9076e+02, 8.4498e+01, 6.3985e+02, 4.1969e+02],
                    [1.6500e+02, 7.7726e+01, 6.4000e+02, 4.2700e+02],
                    [3.2040e+02, 6.6847e+01, 6.4000e+02, 3.8979e+02],
                    [3.3831e+02, 7.2650e+01, 6.4000e+02, 4.2263e+02],
                    [4.4463e-01, 0.0000e+00, 1.0380e+02, 4.2604e+02],
                    [0.0000e+00, 8.7248e+01, 2.9231e+02, 4.2453e+02],
                    [2.6185e+02, 7.1847e+01, 6.4000e+02, 4.2432e+02],
                    [5.7069e+02, 1.0939e+02, 5.8223e+02, 1.1748e+02],
                    [2.1382e+02, 1.6523e+02, 6.4000e+02, 4.2700e+02],
                    [5.8778e+02, 1.0198e+01, 6.4000e+02, 4.0520e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [3.9076e+02, 8.4498e+01, 6.3985e+02, 4.1969e+02],
                    [0.0000e+00, 3.9866e+01, 2.5395e+02, 3.0416e+02],
                    [3.3831e+02, 7.2650e+01, 6.4000e+02, 4.2263e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [2.8250e+02, 4.3829e+01, 6.3907e+02, 4.2441e+02],
                    [5.8725e+02, 8.0794e+01, 5.9527e+02, 8.9090e+01],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [5.6229e+02, 4.6781e+01, 6.4000e+02, 4.2581e+02],
                    [1.6500e+02, 7.7726e+01, 6.4000e+02, 4.2700e+02],
                    [3.9076e+02, 8.4498e+01, 6.3985e+02, 4.1969e+02],
                    [4.4463e-01, 0.0000e+00, 1.0380e+02, 4.2604e+02],
                    [0.0000e+00, 7.2158e+01, 3.8210e+02, 4.2700e+02],
                    [2.9458e+02, 1.4595e+02, 6.4000e+02, 4.2700e+02],
                    [5.8774e+01, 2.0932e+00, 2.9747e+02, 3.0555e+02],
                    [1.6500e+02, 7.7726e+01, 6.4000e+02, 4.2700e+02],
                    [3.2040e+02, 6.6847e+01, 6.4000e+02, 3.8979e+02],
                    [1.6500e+02, 7.7726e+01, 6.4000e+02, 4.2700e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [1.6500e+02, 7.7726e+01, 6.4000e+02, 4.2700e+02],
                    [3.2040e+02, 6.6847e+01, 6.4000e+02, 3.8979e+02],
                    [3.2040e+02, 6.6847e+01, 6.4000e+02, 3.8979e+02],
                    [7.4486e+01, 1.8788e+01, 3.0107e+02, 3.5681e+02],
                    [0.0000e+00, 4.1204e+01, 3.8788e+02, 3.6426e+02],
                    [1.0793e+00, 7.7588e+01, 6.4000e+02, 4.2700e+02],
                    [4.2666e+02, 1.7290e+02, 6.4000e+02, 4.2700e+02],
                    [5.6229e+02, 4.6781e+01, 6.4000e+02, 4.2581e+02],
                    [0.0000e+00, 4.0296e+01, 4.7342e+02, 3.8573e+02],
                    [2.4282e+00, 2.4388e+00, 6.3059e+01, 4.2700e+02],
                    [1.6500e+02, 7.7726e+01, 6.4000e+02, 4.2700e+02],
                    [3.8403e+02, 3.8851e+00, 6.3406e+02, 1.7161e+02],
                    [0.0000e+00, 8.2242e-01, 3.9962e+02, 3.2408e+02],
                    [3.0750e+02, 7.8088e+01, 4.9292e+02, 4.0430e+02],
                    [2.8905e+00, 4.7422e-01, 4.8275e+02, 9.4585e+01],
                    [5.7069e+02, 1.0939e+02, 5.8223e+02, 1.1748e+02],
                    [3.2040e+02, 6.6847e+01, 6.4000e+02, 3.8979e+02],
                    [5.8047e+02, 8.1446e+01, 5.9600e+02, 9.0194e+01],
                    [0.0000e+00, 2.8271e+02, 4.8164e+02, 4.2694e+02],
                    [0.0000e+00, 6.0318e+01, 2.8487e+02, 4.2700e+02],
                    [0.0000e+00, 7.3228e+01, 2.9850e+02, 2.0027e+02],
                    [3.2040e+02, 6.6847e+01, 6.4000e+02, 3.8979e+02],
                    [1.0793e+00, 7.7588e+01, 6.4000e+02, 4.2700e+02],
                    [2.4165e+02, 1.6792e+01, 6.4000e+02, 3.9862e+02],
                    [0.0000e+00, 8.2242e-01, 3.9962e+02, 3.2408e+02],
                    [3.2040e+02, 6.6847e+01, 6.4000e+02, 3.8979e+02],
                    [0.0000e+00, 5.4273e+01, 2.9559e+02, 3.2202e+02],
                    [0.0000e+00, 1.1854e+02, 1.3432e+02, 3.3936e+02],
                    [4.7329e+00, 4.1630e+01, 6.3977e+02, 3.9011e+02],
                    [3.9284e+02, 9.1945e+01, 6.4000e+02, 3.8028e+02],
                    [0.0000e+00, 7.4748e+01, 4.2332e+02, 4.2700e+02],
                    [3.2040e+02, 6.6847e+01, 6.4000e+02, 3.8979e+02],
                    [0.0000e+00, 6.9589e+01, 2.9147e+02, 3.7202e+02],
                    [1.0793e+00, 7.7588e+01, 6.4000e+02, 4.2700e+02],
                    [3.3831e+02, 7.2650e+01, 6.4000e+02, 4.2263e+02],
                    [0.0000e+00, 4.1204e+01, 3.8788e+02, 3.6426e+02],
                    [2.8507e+02, 1.1117e+01, 6.4000e+02, 3.7584e+02],
                    [4.7329e+00, 4.1630e+01, 6.3977e+02, 3.9011e+02],
                    [0.0000e+00, 1.8234e+02, 4.6567e+02, 4.2700e+02],
                    [3.9010e+02, 1.0139e+01, 6.4000e+02, 4.0455e+02],
                    [3.3831e+02, 7.2650e+01, 6.4000e+02, 4.2263e+02],
                    [0.0000e+00, 8.8411e-01, 2.8525e+02, 3.2094e+02],
                    [3.3831e+02, 7.2650e+01, 6.4000e+02, 4.2263e+02],
                    [2.6185e+02, 7.1847e+01, 6.4000e+02, 4.2432e+02],
                    [4.7329e+00, 4.1630e+01, 6.3977e+02, 3.9011e+02],
                    [3.2040e+02, 6.6847e+01, 6.4000e+02, 3.8979e+02],
                    [0.0000e+00, 2.4596e+00, 3.6105e+02, 2.9753e+02],
                    [1.5847e+02, 6.8899e+01, 4.7434e+02, 3.8014e+02],
                    [4.7329e+00, 4.1630e+01, 6.3977e+02, 3.9011e+02],
                    [4.3726e+02, 1.6330e+02, 6.4000e+02, 3.6836e+02],
                    [4.1968e+02, 9.7537e+01, 6.4000e+02, 4.2626e+02],
                    [2.8250e+02, 4.3829e+01, 6.3907e+02, 4.2441e+02],
                    [0.0000e+00, 2.4596e+00, 3.6105e+02, 2.9753e+02],
                    [4.2666e+02, 1.7290e+02, 6.4000e+02, 4.2700e+02],
                    [3.3403e+02, 7.9988e+01, 6.4000e+02, 4.2429e+02],
                    [3.2040e+02, 6.6847e+01, 6.4000e+02, 3.8979e+02],
                    [3.2040e+02, 6.6847e+01, 6.4000e+02, 3.8979e+02],
                    [3.3831e+02, 7.2650e+01, 6.4000e+02, 4.2263e+02],
                    [2.5501e+01, 6.2984e+00, 6.4000e+02, 3.8557e+02],
                    [3.2040e+02, 6.6847e+01, 6.4000e+02, 3.8979e+02],
                    [1.6356e+00, 5.5374e+01, 3.0322e+02, 3.5976e+02],
                    [5.5122e+02, 4.9658e+01, 6.4000e+02, 4.1820e+02],
                    [1.0730e+02, 0.0000e+00, 6.3086e+02, 1.3961e+02],
                    [3.3831e+02, 7.2650e+01, 6.4000e+02, 4.2263e+02],
                    [3.9284e+02, 9.1945e+01, 6.4000e+02, 3.8028e+02],
                    [2.8250e+02, 4.3829e+01, 6.3907e+02, 4.2441e+02],
                    [0.0000e+00, 0.0000e+00, 5.2475e+02, 1.3465e+02],
                    [1.4443e+02, 7.2818e+01, 6.4000e+02, 4.2557e+02],
                    [0.0000e+00, 0.0000e+00, 5.5502e+02, 1.4696e+02],
                    [2.8250e+02, 4.3829e+01, 6.3907e+02, 4.2441e+02],
                    [2.1494e+02, 7.4293e+01, 6.4000e+02, 4.2397e+02],
                    [2.4880e-01, 7.6363e-01, 2.4526e+02, 1.3574e+02],
                    [5.8323e+02, 9.6534e+00, 6.4000e+02, 4.2157e+02],
                    [2.1494e+02, 7.4293e+01, 6.4000e+02, 4.2397e+02],
                    [2.5619e+02, 1.6856e+02, 6.4000e+02, 4.2599e+02],
                    [0.0000e+00, 4.0296e+01, 4.7342e+02, 3.8573e+02],
                    [1.7174e+02, 8.9912e+01, 4.4728e+02, 2.8728e+02],
                    [0.0000e+00, 7.4748e+01, 4.2332e+02, 4.2700e+02],
                    [4.6382e+00, 0.0000e+00, 6.4000e+02, 2.1783e+02],
                    [0.0000e+00, 7.4748e+01, 4.2332e+02, 4.2700e+02],
                    [2.3454e+02, 8.3355e+01, 6.4000e+02, 3.8414e+02],
                    [0.0000e+00, 7.5742e+01, 3.5057e+02, 4.2451e+02],
                    [1.4443e+02, 7.2818e+01, 6.4000e+02, 4.2557e+02],
                    [0.0000e+00, 6.9887e+00, 2.8779e+02, 4.1326e+02],
                    [1.4443e+02, 7.2818e+01, 6.4000e+02, 4.2557e+02],
                    [4.6382e+00, 0.0000e+00, 6.4000e+02, 2.1783e+02],
                    [0.0000e+00, 8.8411e-01, 2.8525e+02, 3.2094e+02],
                    [1.6356e+00, 5.5374e+01, 3.0322e+02, 3.5976e+02]])
    ) at 0x7f2e5428f8e0>
gt_instances: <InstanceData(
    
        META INFORMATION
    
        DATA FIELDS
    ) at 0x7f2ef4d99880>

) at 0x7f2ef4d99970>]: Could not infer type of list element: Only tensors and (possibly nested) tuples of tensors, lists, or dicts are supported as inputs or outputs of traced functions, but instead got value of type DetDataSample.`

xinlin-xiao avatar Jan 12 '24 07:01 xinlin-xiao

@xinlin-xiao Hmm, it seems it is returning a list, so just take the entry at index 0 of that list and it should solve your issue.
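Something like this (a sketch of the same _trace.py patch, adjusted for the list return; the plain-tuple output is just for illustration):

# The tracer here gets back a list of DetDataSample, so take the 0th entry
# before unpacking (same patch location in torch/jit/_trace.py as above).
result = self.inner(*trace_inputs)[0]
inst = result.pred_instances
outs.append((inst.bboxes, inst.scores, inst.labels))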

bibekyess avatar Jan 13 '24 04:01 bibekyess

Hi @bibekyess, could you please provide the entire _trace.py file? Mine seems to look different from yours, as I can't add anything at line 125 (that line for me is in the middle of a function call).

Moreover, I'm using the implementation included in mmdetection v3.3.0, and I can confirm that the issue is present there as well.

sirolf-otrebla avatar Feb 15 '24 13:02 sirolf-otrebla

Hi @sirolf-otrebla, I am on vacation now, so I will provide it later. But as far as I remember, line 125 contains the instruction `outs.append(self.inner(*trace_inputs))`. Just substitute that with the try/except snippet above, or search for that instruction in the file and substitute it there.
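In case the line has moved in your version, a small helper sketch (assuming PyTorch 2.x, where ONNXTracedModule lives in torch/jit/_trace.py) can locate it:

import inspect
import torch.jit._trace as _trace

# Find the exact line of outs.append(self.inner(*trace_inputs)) in the
# installed PyTorch, since its line number shifts between releases.
src, start = inspect.getsourcelines(_trace.ONNXTracedModule.forward)
for offset, line in enumerate(src):
    if "outs.append(self.inner(*trace_inputs))" in line:
        print(f"patch {_trace.__file__} at line {start + offset}")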

bibekyess avatar Feb 15 '24 23:02 bibekyess