How can I convert a torch model with mmcv::DeformConv2dPack to onnx or tensorRT?
What is the feature?
I would like to use DeformConv2d in my model, and I found that mmcv has a good implementation, so I used this module in my work as below:
import torch.nn as nn
from mmcv.ops import DeformConv2dPack as DCN

BN_MOMENTUM = 0.1  # assumed value; defined elsewhere in my project

class DeformConv(nn.Module):
    def __init__(self, chi, cho):
        super(DeformConv, self).__init__()
        self.actf = nn.Sequential(
            nn.BatchNorm2d(cho, momentum=BN_MOMENTUM),
            nn.ReLU(inplace=True)
        )
        self.conv = DCN(chi, cho, kernel_size=(3, 3), stride=1, padding=1,
                        dilation=1, deform_groups=1)

    def forward(self, x):
        x = self.conv(x)
        x = self.actf(x)
        return x
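For context, the block can be smoke-tested like this (the channel sizes and input shape are just illustrative):

import torch

# Illustrative smoke test; depending on the mmcv build, the deform conv
# op may need to run on GPU (add .cuda() to the module and the input).
block = DeformConv(chi=64, cho=64)
x = torch.randn(1, 64, 128, 128)
out = block(x)
print(out.shape)  # expected: torch.Size([1, 64, 128, 128])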
It works well during training. But when I convert the torch model to ONNX with this code:
import torch

dummy_input = torch.randn(1, 3, 1024, 1024)  # same shape as the ORT test below
torch.onnx.export(model,
                  dummy_input,
                  'dla_34_best.onnx',
                  input_names=['input'],
                  verbose=True)
there are warnings such as:
WARNING: The shape inference of mmcv::MMCVDeformConv2d type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
WARNING: The shape inference of mmcv::MMCVDeformConv2d type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
......
I also tried to check the ONNX model with:
import onnxruntime, onnx

try:
    onnx.checker.check_model("dla_34_best.onnx")
except onnx.checker.ValidationError as e:
    print("The model is invalid: %s" % e)
else:
    print("The model is valid!")
The output is "The model is valid!". Then I tested it with onnxruntime:
import numpy as np

ort_session = onnxruntime.InferenceSession('dla_34_best.onnx')
ort_input = {'input': np.random.randn(1, 3, 1024, 1024).astype(np.float32)}
ort_outputs = ort_session.run(None, ort_input)[0]
This error occurs:
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from dla_34_best.onnx failed:Fatal error: MMCVDeformConv2d is not a registered function/op
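I guess this means onnxruntime does not know the MMCVDeformConv2d op. If a library implementing it were available, registering it via SessionOptions should presumably work, along these lines (the .so path here is hypothetical):

import onnxruntime

so = onnxruntime.SessionOptions()
# Hypothetical path: a custom-op library implementing MMCVDeformConv2d,
# e.g. one built from the mmdeploy backend ops.
so.register_custom_ops_library('/path/to/libmmdeploy_onnxruntime_ops.so')
ort_session = onnxruntime.InferenceSession('dla_34_best.onnx', so)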
How can I safely convert my custom model with mmcv::DeformConv2dPack to ONNX? I also found some solutions using mmdeploy, but my model is not based on any mm-library such as mmdetection; it just uses this one small module.
This is my environment:
OS: Windows 11
pytorch==1.11.0+cuda113
mmcv-full==1.7.1
Hi @busyyang
You may try assigning opset_version. What is your current opset for the ONNX export? I can export with opset_version=13.
Best, Lewis
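For example, reusing your export call:

torch.onnx.export(model,
                  dummy_input,
                  'dla_34_best.onnx',
                  input_names=['input'],
                  opset_version=13,
                  verbose=True)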
Thanks for the reply; I'll just leave this problem here. I found that it doesn't matter for converting the ONNX model to a TensorRT engine: I gave up on testing the ONNX model directly and instead build a TensorRT engine from the ONNX model with the custom op from mmdeploy. It works well in my project now.
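Roughly, the engine build looks like the sketch below; the plugin library name and path come from the mmdeploy build and are assumptions here:

import ctypes
import tensorrt as trt

# Load mmdeploy's TensorRT custom-op plugins before parsing the ONNX model.
# The .so path is an assumption and depends on where mmdeploy was built.
ctypes.CDLL('/path/to/libmmdeploy_tensorrt_ops.so')

logger = trt.Logger(trt.Logger.WARNING)
trt.init_libnvinfer_plugins(logger, '')

builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)
with open('dla_34_best.onnx', 'rb') as f:
    assert parser.parse(f.read()), parser.get_error(0)

config = builder.create_builder_config()
serialized_engine = builder.build_serialized_network(network, config)
with open('dla_34_best.engine', 'wb') as f:
    f.write(serialized_engine)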
Could I ask how you got around the ONNX step? I'm running into the same problem right now.
Do you run inference with ONNX? I run inference with TensorRT; ONNX is just an intermediate step. When converting the pth model to ONNX, you only need to register the symbolic for the custom op. When converting to a TensorRT model, put the source code of the custom op implementation into the trtexec project and recompile trtexec; after that, trtexec can produce a model that TensorRT can run. Alternatively, you can start a C++ project and use the TensorRT API to convert the ONNX model to TensorRT, putting the op implementation source into that C++ project.
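As a possible shortcut, recent trtexec builds can load a compiled plugin library at runtime via --plugins, which avoids recompiling trtexec itself; a sketch (the .so path is an assumption):

import subprocess

# Build a TensorRT engine with trtexec, loading the custom-op plugin
# library at runtime. The .so path is an assumption.
subprocess.run([
    'trtexec',
    '--onnx=dla_34_best.onnx',
    '--saveEngine=dla_34_best.engine',
    '--plugins=/path/to/libmmdeploy_tensorrt_ops.so',
], check=True)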
My backend is Ascend hardware. I asked the relevant people on their forum, and they said the Ascend chips cannot use the approach you described, so I have to convert to ONNX first and also verify that the exported ONNX model is correct. I'll just have to keep tinkering; the mmcv documentation feels too sparse.
While looking into this problem, I saw some solutions that rebuild and reinstall the mmcv library yourself with the ops you need added in. That seemed too troublesome so I didn't try it, but you could give it a shot. mmdeploy has implementations of some ops you can reference: https://github.com/open-mmlab/mmdeploy/tree/main/csrc/mmdeploy/backend_ops/onnxruntime/modulated_deform_conv
OK, thank you.