Export dynamic batch size ONNX using ONNX's DeformConv
This PR replaces the usage of deform_conv2d_onnx_exporter with the native DeformConv operator available in ONNX opset 19. The exported ONNX model now supports dynamic batch sizes.
Notes
- The symbolic_deform_conv_19() function was generated using OpenAI o1. It works in my testing, but let me know if there are any special requirements to consider.
- Resolves #127.
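For reference, the overall approach looks roughly like the sketch below. This is not the exact code in the PR; the model/input names, dummy shape, and attribute handling are placeholders. The idea is to register a custom symbolic for torchvision::deform_conv2d that emits the native opset-19 DeformConv node, then export with a dynamic batch axis.

```python
# Rough sketch only -- the real symbolic_deform_conv_19() in this PR may differ.
import torch
from torch.onnx import register_custom_op_symbolic, symbolic_helper

@symbolic_helper.parse_args("v", "v", "v", "v", "v",
                            "i", "i", "i", "i", "i", "i", "i", "i", "b")
def symbolic_deform_conv_19(g, input, weight, offset, mask, bias,
                            stride_h, stride_w, pad_h, pad_w,
                            dil_h, dil_w, groups, offset_groups, use_mask):
    # Map torchvision::deform_conv2d onto the native ONNX DeformConv (opset 19).
    inputs = [input, weight, offset, bias]
    if use_mask:
        inputs.append(mask)
    return g.op(
        "DeformConv",
        *inputs,
        dilations_i=[dil_h, dil_w],
        group_i=groups,
        offset_group_i=offset_groups,
        pads_i=[pad_h, pad_w, pad_h, pad_w],
        strides_i=[stride_h, stride_w],
    )

register_custom_op_symbolic("torchvision::deform_conv2d", symbolic_deform_conv_19, 19)

# Export with a dynamic batch dimension; `model` is the BiRefNet instance loaded
# elsewhere, and the names/shape here are placeholders.
torch.onnx.export(
    model, torch.randn(1, 3, 1024, 1024), "birefnet.onnx",
    opset_version=19,
    input_names=["input"], output_names=["output"],
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```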
Thanks a lot, this really helps! I'll take time to look at it tomorrow. BTW, could you update the PR so the notebook has no output and minimal modifications? That would make it much easier for me to read and test the updated part clearly.
Sure, I've updated the notebook to reduce the modifications.
Thank you so much, @itskyf, for your contribution! Have you had a chance to test whether the execution works with ONNX Runtime?
Hi @ZhengPeng7,
I believe the issue arises because ONNX has implemented the DeformConv operator, but unfortunately, ONNX Runtime does not currently support it. As a result, any code that includes this operator cannot be executed within a Runtime Session. :/
@ZhengPeng7 ah, I forgot to mention that we also need to update the onnx package for opset 19. @alfausa1 I faced the same problem. The dynamic batched model can only be used after TensorRT conversion. But since DeformConv is an ONNX operator, I hope it will be supported in ONNX Runtime soon.
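In case it helps, a quick way to double-check that the installed onnx package already ships opset 19 (where DeformConv was added):

```python
# Sanity check -- opset 19 first shipped around onnx 1.14, so any recent release is fine.
import onnx

print(onnx.__version__)
print(onnx.defs.onnx_opset_version())  # should print 19 or higher
```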
@itskyf Could you please provide the code in which you have converted the dynamic batched model to TensorRT? Thanks in advance!
Thanks for @itskyf's PR. This is exactly what I tested, and it worked.
I have a question about this PR for @itskyf:
When I tested it this way, I found that for the resulting TRT engine to work as expected when the batch size used to build the engine differs from the batch size used at inference, the change in https://github.com/ZhengPeng7/BiRefNet/pull/166 needs to be made. Did you find the same issue?
Hi, @itskyf, sorry for the late reply, just came back from the Lunar New Year holiday :)
I've upgraded the related packages to onnx==1.17.0, onnxruntime-gpu==1.20.1, onnxscript==0.1.0, which should all be the latest versions. But I still got this error (NotImplemented: [ONNXRuntimeError] : 9 : NOT_IMPLEMENTED : Could not find an implementation for DeformConv(19) node with name '/squeeze_module/squeeze_module.0/dec_att/aspp1/atrous_conv/DeformConv'):
As you said above, the DCN is still not supported in ONNX Runtime. If so, how can we use the exported birefnet.onnx file?
Thanks for your kind explanation in advance!
Hi @ZhengPeng7, I might be able to help.
To export with opset >19, you’ll need to update your PyTorch version to >2.4. In the provided example, it uses opset 19 for the converter and opset 20 for the entire model.
Regarding execution, you can’t run an .onnx file directly with onnxruntime by default, because the operator is not implemented yet. I believe @jhwei is referring to converting the .onnx model to a TensorRT engine for execution.
That said, I’m not sure, but maybe you can run it natively with onnxruntime if you specify the TensorRT execution provider.
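In case it helps, here is an untested sketch of building a TensorRT engine with a dynamic batch dimension from the exported file. The input name "input", the 1024x1024 resolution, and the batch range are assumptions; trtexec with --minShapes/--optShapes/--maxShapes would be the command-line equivalent.

```python
# Untested sketch, TensorRT 8.x/9.x Python API assumed.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("birefnet.onnx", "rb") as f:
    if not parser.parse(f.read()):
        for i in range(parser.num_errors):
            print(parser.get_error(i))
        raise RuntimeError("Failed to parse the ONNX model")

config = builder.create_builder_config()
# The optimization profile is what makes the batch dimension dynamic at runtime.
profile = builder.create_optimization_profile()
profile.set_shape("input", (1, 3, 1024, 1024), (4, 3, 1024, 1024), (8, 3, 1024, 1024))
config.add_optimization_profile(profile)

engine = builder.build_serialized_network(network, config)
with open("birefnet.engine", "wb") as f:
    f.write(engine)
```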
Hi, @alfausa1. Thanks a lot for the details :)
Yeah, currently, the suggested and default PyTorch version used in BiRefNet is 2.5.1, which should be good here.
So, it seems that if we want to run it with an onnxruntime session, we can only use the previously employed third-party deformConv implementation. If we only want to run the model in TensorRT, we can use the native DeformConv in the latest ONNX to export .onnx files.
Is my understanding right?
Hi @ZhengPeng7, you’re correct.
It might be worth testing if an onnxruntime session works by specifying the TensorRT execution provider like this:
sess = ort.InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider'])
If that doesn’t work, maybe @jhwei can guide us on how to export and use a TensorRT engine, as there are different approaches that involve using CUDA libraries and low-level configurations.
I also found this new repo: onnx/onnx-tensorrt, which could be useful to test.
Sorry for giving all this information without testing it myself; my GPU resources are currently limited :((
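If someone wants to try that session-based route, a fuller untested version of the snippet above would be something like this (the "input" name and the 3x1024x1024 shape are assumptions, and it needs an onnxruntime-gpu build with TensorRT support):

```python
# Untested sketch: run the exported model through onnxruntime with the
# TensorRT execution provider, falling back to CUDA if TensorRT is unavailable.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    "birefnet.onnx",
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider"],
)
# Batch of 2 to exercise the dynamic batch axis.
dummy = np.random.rand(2, 3, 1024, 1024).astype(np.float32)
outputs = sess.run(None, {"input": dummy})
print([o.shape for o in outputs])
```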
Thank you, alfausa1. I've tested it, but more errors need to be fixed there and more libs need to be installed. I'll take a deeper look into it when I have spare time.