TensorRT-CenterNet
how to convert the model to onnx
Hello, thank you for open-sourcing your code! I want to know how to convert a CenterNet model to ONNX. For example, I trained a model on our own dataset, but the original CenterNet model is a *.pth file. Do you have conversion code? If so, could you share it? Thank you very much!
```python
from lib.models.networks.dlav0 import get_pose_net
from lib.models.model import load_model
import torch.onnx as onnx
import torch
from types import MethodType

def forward(self, x):
    x = self.base(x)
    x = self.dla_up(x[self.first_level:])
    ret = []
    for head in self.heads:
        ret.append(self.__getattr__(head)(x))
    return ret

input = torch.zeros([1, 3, 512, 512])
net = get_pose_net(34, {'hm': 2, 'reg': 2, 'wh': 2})
net.forward = MethodType(forward, net)
load_model(net, 'your pth')
onnx.export(net, input, 'your onnx', verbose=True)
```
Exporting the model with the code you provided fails: 'DLASeg' object has no attribute 'getattr'
@deep-practice
It should be this; the comment's markdown auto-escaped the double underscores:

```python
ret.append(self.__getattr__(head)(x))
```
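A side note (my own suggestion, not from the thread): the builtin `getattr(obj, name)` does the same dynamic lookup and cannot be mangled by markdown, since `getattr(module, head)` falls through to `nn.Module.__getattr__` for registered submodules. A torch-free sketch of the pattern, with a hypothetical stand-in class:

```python
# Hypothetical stand-in for a model with one callable per output head;
# in CenterNet these would be nn.Module heads such as 'hm', 'wh', 'reg'.
class FakeNet:
    def __init__(self):
        self.hm = lambda x: x * 1
        self.wh = lambda x: x * 2
        self.heads = ['hm', 'wh']

    def forward(self, x):
        # getattr(self, head) resolves both plain attributes and, on a real
        # nn.Module, registered submodules (which go through __getattr__).
        return [getattr(self, head)(x) for head in self.heads]

print(FakeNet().forward(3))  # -> [3, 6]
```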
@CaoWGG hi, where is the lib.models?
@CaoWGG hi, I found where `lib.models` is and ran your code to convert the `ctdet_coco_dla_2x.pth` provided by CenterNet into ONNX, but I get messages like this, which I think are errors:
```
Drop parameter dla_up.ida_0.node_1.conv.conv_offset_mask.weight.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter dla_up.ida_0.node_1.conv.conv_offset_mask.bias.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter dla_up.ida_1.proj_1.actf.0.weight.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter dla_up.ida_1.proj_1.actf.0.bias.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
```
And here is my convert code:
```python
from lib.models.networks.dlav0 import get_pose_net
from lib.models.model import load_model
import torch.onnx as onnx
import torch
from types import MethodType

def forward(self, x):
    x = self.base(x)
    x = self.dla_up(x[self.first_level:])
    ret = []
    for head in self.heads:
        ret.append(self.__getattr__(head)(x))
    return ret

input = torch.zeros([1, 3, 512, 512])
net = get_pose_net(34, {'hm': 2, 'reg': 2, 'wh': 2})
net.forward = MethodType(forward, net)
load_model(net, '../models/ctdet_coco_dla_2x.pth')
onnx.export(net, input, '../models/ctdet_coco_dla_2x.onnx', verbose=True)
```
What should I do to fix this? Thanks!
For COCO it should be `{'hm': 80, 'reg': 2, 'wh': 2}`, since COCO has 80 classes.
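A note on why parameters get dropped (my own reading, not stated in this thread): the loader keeps only checkpoint entries whose name and shape match the model, so `'hm': 2` against the 80-class COCO head drops the `hm` weights, and the `conv_offset_mask` entries are DCNv2 offsets from the `dla` backbone that have no counterpart in `dlav0`. A torch-free sketch of that matching logic, with made-up shapes:

```python
# Toy stand-in for checkpoint loading: keep only entries whose key exists
# in the model with the same shape, and report the rest as dropped.
# Shapes are illustrative, not the real CenterNet tensor shapes.
ckpt = {
    'hm.weight': (80, 256),                                        # COCO: 80 classes
    'wh.weight': (2, 256),
    'dla_up.ida_0.node_1.conv.conv_offset_mask.weight': (27, 64),  # DCNv2 only
}
model = {
    'hm.weight': (2, 256),   # 'hm': 2 -> shape mismatch with the checkpoint
    'wh.weight': (2, 256),
    # dlav0 has no conv_offset_mask parameters at all
}

loaded = {k: v for k, v in ckpt.items() if model.get(k) == v}
dropped = sorted(set(ckpt) - set(loaded))
print(dropped)
```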
@lsccccc Still got this error
@lsccccc show your error
Hi @CaoWGG, I converted to ctdet_coco_dla_2x.onnx, but when loading it into TensorRT Inference Server I get this error:

```
== TensorRT Inference Server ==
NVIDIA Release 19.10 (build 8266503) ................ ONNX autofill: Internal: onnx runtime error 10: This is an invalid model. Error in Node: : No Op registered for DCNv2 with domain_version of 9
```
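What this error means (my own explanation, not from the thread): the exported graph contains a custom `DCNv2` node, and onnxruntime rejects any node whose op type has no registered kernel for the model's opset; only a runtime shipping a DCNv2 plugin, such as CaoWGG's patched onnx-tensorrt, can parse it. A schematic, torch-free sketch of that registry check (not onnxruntime's actual code; the op list is illustrative):

```python
# Illustrative subset of ops a runtime might register for opset 9.
SUPPORTED_OPS = {'Conv', 'Relu', 'Add', 'ConvTranspose', 'MaxPool', 'Sigmoid'}

def validate(node_op_types, domain_version=9):
    """Raise, as onnxruntime does, on the first op with no registered kernel."""
    for op in node_op_types:
        if op not in SUPPORTED_OPS:
            raise ValueError(
                f"No Op registered for {op} with domain_version of {domain_version}")
    return True

graph = ['Conv', 'Relu', 'DCNv2', 'Conv']
try:
    validate(graph)
except ValueError as e:
    print(e)  # -> No Op registered for DCNv2 with domain_version of 9
```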
@vansondang I'm getting the same error.
@CaoWGG any ideas?
Hi, did you fix this? How?
Did you fix it?
@qianchenghao I did not; I ended up using the dlav0 backbone, which does not use DCNv2.
Hi @CaoWGG I've installed your onnx_tensorrt version, but still get an error converting model_withDCNv2.onnx to tensorrt:
```
onnx.onnx_cpp2py_export.checker.ValidationError: No Op registered for Plugin with domain_version of 9
==> Context: Bad node spec: input: "561" input: "562" input: "dla_up.ida_0.proj_1.conv.weight" input: "dla_up.ida_0.proj_1.conv.bias" output: "563" op_type: "Plugin" attribute { name: "info" s: "{"dilation": [1, 1], "padding": [1, 1], "stride": [1, 1], "deformable_groups": 1}" type: STRING } attribute { name: "name" s: "DCNv2" type: STRING }
```
Is there an additional step I missed for installing DCNv2 onnx parser plugin?
If my model was saved using CenterNet with the original DCN_v2, do I need to replace DCN_v2 with dcn and retrain the model so that it can be converted to ONNX?
Hi, just pass an additional keyword argument to `torch.onnx.export()`: `operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK`.
I was able to export dla34 in ONNX format.
@sondv7 @ninkin Can I ask if you succeeded in converting this model from ONNX to TensorRT? It tells me that DCNv2 is not supported :(
Sorry, I no longer follow this issue.
Same as you, with the same `No Op registered for Plugin` error converting model_withDCNv2.onnx. Have you solved it?