Swin-Transformer
[help] How to export a Swin model to ONNX? Problem: Node (Concat_246) Op (Concat) [ShapeInferenceError]
I exported my trained model to ONNX with the following code:
torch.onnx.export(model, input_tensor, onnx_name, verbose=True, opset_version=12, input_names=['images'],
output_names=['output'], use_external_data_format=False)
But when running the ONNX model, I got the following error:
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Node (Concat_246) Op (Concat) [ShapeInferenceError] All inputs to Concat must have same rank
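For reference, the error shows up when loading the exported model with onnxruntime, roughly like this (a minimal check; the file name swin.onnx and the 1x3x224x224 input are assumptions):
import numpy as np
import onnxruntime as ort

# Shape inference runs while the session is built, which is where the Concat error is raised.
sess = ort.InferenceSession("swin.onnx")
x = np.random.randn(1, 3, 224, 224).astype(np.float32)
out = sess.run(["output"], {"images": x})[0]
print(out.shape)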
It is caused by attn.view(B_ // nW, nW, self.num_heads, N, N)
in https://github.com/microsoft/Swin-Transformer/blob/793f971e735b1e27d5e2c683b7a2b53090d3806d/models/swin_transformer.py#L133
The Concat op is probably part of how torch.view() is lowered during ONNX export. Does anyone know how to solve this problem?
how?
I got the same problem, can you help me?
change "attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)" to "attn = attn.view(-1, self.num_heads, N, N) + mask.unsqueeze(1)" can solve the problem.
Only change this line?
yes
Thanks @xizi
For exporting the Video Swin Transformer to ONNX (opset 9), the following code works and is compatible with TensorRT 7:
https://gist.github.com/devymex/51687edd41eef4ccc56d76a0c66bf92c
P.S. The default config is for Swin-T, and its pretrained model can be found here:
https://github.com/SwinTransformer/Video-Swin-Transformer
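In case the gist is unreachable, here is a minimal sketch of such an export call (this is not the gist's content; 'model' is assumed to be your Video Swin backbone in eval mode, and the 1x3x32x224x224 clip shape is an assumption):
import torch

# 'model' is assumed to come from your Video Swin setup (not defined here).
model.eval()
clip = torch.randn(1, 3, 32, 224, 224)  # (batch, channels, frames, height, width) -- assumed shape
torch.onnx.export(model, clip, "video_swin.onnx",
                  opset_version=9,  # older opset for TensorRT 7's ONNX parser
                  input_names=["clip"], output_names=["output"])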
The link https://gist.github.com/devymex/51687edd41eef4ccc56d76a0c66bf92c is not available. Can you share the code for exporting the Video Swin Transformer to ONNX? Thanks @devymex.