
How to convert the model to ONNX

Open brucelee78 opened this issue 6 years ago • 19 comments

Hello, thank you for open-sourcing your code! I want to know how to convert a CenterNet model to ONNX. For example, I trained a model on our own dataset, but the original CenterNet checkpoint is a *.pth file. Do you have the conversion code? If so, could you share it? Thank you very much!

brucelee78 avatar Nov 02 '19 12:11 brucelee78

from lib.models.networks.dlav0 import get_pose_net
from lib.models.model import load_model
import torch.onnx as onnx
import torch
from types import MethodType

def forward(self, x):
    x = self.base(x)
    x = self.dla_up(x[self.first_level:])
    ret = []
    for head in self.heads:
        ret.append(self.__getattr__(head)(x))
    return ret

input = torch.zeros([1, 3, 512, 512])
net = get_pose_net(34, {'hm': 2, 'reg': 2, 'wh': 2})
net.forward = MethodType(forward, net)

load_model(net, 'your pth')
onnx.export(net, input, 'your onnx', verbose=True)

CaoWGG avatar Nov 03 '19 01:11 CaoWGG

Exporting the model with the code you provided fails: 'DLAseg' object has no attribute 'getattr'

deep-practice avatar Nov 11 '19 12:11 deep-practice

@deep-practice It should be ret.append(self.__getattr__(head)(x)); the comment system auto-escaped the double underscores in the code above.

CaoWGG avatar Nov 11 '19 12:11 CaoWGG

@CaoWGG hi, where is the lib.models?

murdockhou avatar Dec 10 '19 11:12 murdockhou

@CaoWGG Hi, I found where lib.models is and ran your code to convert the ctdet_coco_dla_2x.pth provided by CenterNet into ONNX, but I get messages like this, which I think indicate an error:

Drop parameter dla_up.ida_0.node_1.conv.conv_offset_mask.weight.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter dla_up.ida_0.node_1.conv.conv_offset_mask.bias.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter dla_up.ida_1.proj_1.actf.0.weight.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter dla_up.ida_1.proj_1.actf.0.bias.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.

And here is my convert code:

from lib.models.networks.dlav0 import get_pose_net
from lib.models.model import load_model
import torch.onnx as onnx
import torch
from types import MethodType

def forward(self, x):
    x = self.base(x)
    x = self.dla_up(x[self.first_level:])
    ret = []
    for head in self.heads:
        ret.append(self.__getattr__(head)(x))
    return ret

input = torch.zeros([1, 3, 512, 512])
net = get_pose_net(34, {'hm':2, 'reg':2, 'wh':2})
net.forward = MethodType(forward, net)

load_model(net, '../models/ctdet_coco_dla_2x.pth')
onnx.export(net, input, '../models/ctdet_coco_dla_2x.onnx', verbose=True)

How can I fix this? Thanks!

murdockhou avatar Dec 10 '19 11:12 murdockhou
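The "Drop parameter" warnings mean load_model built the network first and then discarded every checkpoint tensor whose name or shape has no match in it: ctdet_coco_dla_2x.pth was trained on the DCN-based dla34 backbone with 80 classes, while the script above builds dlav0 with a 2-way hm head. A simplified, torch-free sketch of that filtering (the real logic lives in lib/models/model.py; the function and example shapes below are illustrative):

```python
def filter_checkpoint(model_shapes, ckpt_shapes):
    """Keep checkpoint entries whose name and shape match the built model."""
    kept, dropped = {}, []
    for name, shape in ckpt_shapes.items():
        if model_shapes.get(name) == shape:
            kept[name] = shape
        else:
            dropped.append(name)
            print(f'Drop parameter {name}.')
    return kept, dropped

# dlav0 has no DCNv2 conv_offset_mask layers, and its hm head is 2-way, not 80-way
model = {'base.conv.weight': (16, 3, 3, 3), 'hm.weight': (2, 64, 1, 1)}
ckpt = {'base.conv.weight': (16, 3, 3, 3),
        'hm.weight': (80, 64, 1, 1),
        'dla_up.ida_0.node_1.conv.conv_offset_mask.weight': (27, 64, 3, 3)}
kept, dropped = filter_checkpoint(model, ckpt)
print(len(kept), len(dropped))  # 1 2
```

So the fix is to build the same architecture and head sizes the checkpoint was trained with before loading it.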

from lib.models.networks.dlav0 import get_pose_net
from lib.models.model import load_model
import torch.onnx as onnx
import torch
from types import MethodType

def forward(self, x):
    x = self.base(x)
    x = self.dla_up(x[self.first_level:])
    ret = []
    for head in self.heads:
        ret.append(self.__getattr__(head)(x))
    return ret

input = torch.zeros([1, 3, 512, 512])
net = get_pose_net(34, {'hm': 2, 'reg': 2, 'wh': 2})
net.forward = MethodType(forward, net)

load_model(net, '../models/ctdet_coco_dla_2x.pth')
onnx.export(net, input, '../models/ctdet_coco_dla_2x.onnx', verbose=True)

For COCO, it should be {'hm': 80, 'reg': 2, 'wh': 2}.

liukaigua avatar Dec 14 '19 05:12 liukaigua
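In other words, the head dict must match the dataset the checkpoint was trained on: a per-class center heatmap plus 2-channel offset and size heads. A small sketch (the helper function name is illustrative, not part of the CenterNet code):

```python
def make_heads(num_classes):
    """ctdet heads: per-class center heatmap, 2-d offset, 2-d box size."""
    return {'hm': num_classes, 'reg': 2, 'wh': 2}

print(make_heads(80))  # COCO: {'hm': 80, 'reg': 2, 'wh': 2}
print(make_heads(2))   # a 2-class custom dataset
```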

@lsccccc Still got this error

deep-practice avatar Dec 18 '19 05:12 deep-practice

@lsccccc show your error

CaoWGG avatar Dec 18 '19 07:12 CaoWGG

Hi @CaoWGG, I converted to ctdet_coco_dla_2x.onnx,

but when I load it into TensorRT Inference Server I get this error:

== TensorRT Inference Server ==

NVIDIA Release 19.10 (build 8266503) ................ ONNX autofill: Internal: onnx runtime error 10: This is an invalid model. Error in Node: : No Op registered for DCNv2 with domain_version of 9

sondv2 avatar Feb 22 '20 02:02 sondv2

Hi @CaoWGG, I converted to ctdet_coco_dla_2x.onnx,

but when I load it into TensorRT Inference Server I get this error:

== TensorRT Inference Server ==

NVIDIA Release 19.10 (build 8266503) ................ ONNX autofill: Internal: onnx runtime error 10: This is an invalid model. Error in Node: : No Op registered for DCNv2 with domain_version of 9

@vansondang I'm getting the same error.

@CaoWGG any ideas?

alagoa avatar Mar 27 '20 16:03 alagoa

@CaoWGG Hi, I found where lib.models is and ran your code to convert the ctdet_coco_dla_2x.pth provided by CenterNet into ONNX, but I get messages like this, which I think indicate an error:

Drop parameter dla_up.ida_0.node_1.conv.conv_offset_mask.weight.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter dla_up.ida_0.node_1.conv.conv_offset_mask.bias.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter dla_up.ida_1.proj_1.actf.0.weight.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.
Drop parameter dla_up.ida_1.proj_1.actf.0.bias.If you see this, your model does not fully load the pre-trained weight. Please make sure you have correctly specified --arch xxx or set the correct --num_classes for your own dataset.

And here is my convert code:

from lib.models.networks.dlav0 import get_pose_net
from lib.models.model import load_model
import torch.onnx as onnx
import torch
from types import MethodType

def forward(self, x):
    x = self.base(x)
    x = self.dla_up(x[self.first_level:])
    ret = []
    for head in self.heads:
        ret.append(self.__getattr__(head)(x))
    return ret

input = torch.zeros([1, 3, 512, 512])
net = get_pose_net(34, {'hm':2, 'reg':2, 'wh':2})
net.forward = MethodType(forward, net)

load_model(net, '../models/ctdet_coco_dla_2x.pth')
onnx.export(net, input, '../models/ctdet_coco_dla_2x.onnx', verbose=True)

How can I fix this? Thanks!

Hi, did you fix this? How?

KevenLee avatar Jun 15 '20 09:06 KevenLee

Hi @CaoWGG, I converted to ctdet_coco_dla_2x.onnx,

but when I load it into TensorRT Inference Server I get this error:

== TensorRT Inference Server ==

NVIDIA Release 19.10 (build 8266503) ................ ONNX autofill: Internal: onnx runtime error 10: This is an invalid model. Error in Node: : No Op registered for DCNv2 with domain_version of 9

@vansondang I'm getting the same error.

@CaoWGG any ideas?

Did you fix it?

Jumponthemoon avatar Jun 24 '20 09:06 Jumponthemoon

@qianchenghao I did not, I ended up using the dlav0 backbone, which does not use DCNv2.

alagoa avatar Jun 24 '20 11:06 alagoa

Hi @CaoWGG, I've installed your onnx_tensorrt version, but I still get an error converting model_withDCNv2.onnx to TensorRT:

onnx.onnx_cpp2py_export.checker.ValidationError: No Op registered for Plugin with domain_version of 9

==> Context: Bad node spec: input: "561" input: "562" input: "dla_up.ida_0.proj_1.conv.weight" input: "dla_up.ida_0.proj_1.conv.bias" output: "563" op_type: "Plugin" attribute { name: "info" s: "{"dilation": [1, 1], "padding": [1, 1], "stride": [1, 1], "deformable_groups": 1}" type: STRING } attribute { name: "name" s: "DCNv2" type: STRING }

Is there an additional step I missed for installing DCNv2 onnx parser plugin?

austinmw avatar Jul 13 '20 16:07 austinmw
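For what it's worth, the "info" attribute in that Plugin node is just a JSON string carrying the DCNv2 layer's hyper-parameters, which the onnx-tensorrt DCNv2 plugin is expected to decode at parse time; if no plugin named DCNv2 is registered, the checker fails as above. Decoding the string from the error message:

```python
import json

# The "info" attribute from the Plugin node in the error message above.
info = '{"dilation": [1, 1], "padding": [1, 1], "stride": [1, 1], "deformable_groups": 1}'
params = json.loads(info)
print(params['deformable_groups'])  # 1
print(params['stride'])             # [1, 1]
```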

If my model was saved using CenterNet with the original DCN_v2, do I need to replace DCN_v2 with dcn and retrain the model so that it can be converted to ONNX?

Di-Gu avatar Sep 04 '20 08:09 Di-Gu

Hi @CaoWGG, I converted to ctdet_coco_dla_2x.onnx,

but when I load it into TensorRT Inference Server I get this error:

== TensorRT Inference Server ==

NVIDIA Release 19.10 (build 8266503) ................ ONNX autofill: Internal: onnx runtime error 10: This is an invalid model. Error in Node: : No Op registered for DCNv2 with domain_version of 9

@vansondang I'm getting the same error. @CaoWGG any ideas?

Did you fix it?

Hi, just pass an additional keyword argument to onnx.utils.export(): operator_export_type=torch.onnx.OperatorExportTypes.ONNX_ATEN_FALLBACK

I was able to export dla34 in ONNX format.

ninkin avatar Feb 05 '21 09:02 ninkin

@sondv7 @ninkin May I ask whether you succeeded in converting this model from ONNX to TensorRT? It tells me that DCNv2 is not supported :(

minhhoangbui avatar Aug 11 '21 02:08 minhhoangbui

@sondv7 @ninkin May I ask whether you succeeded in converting this model from ONNX to TensorRT? It tells me that DCNv2 is not supported :(

Sorry, I am no longer looking into this issue.

ninkin avatar Aug 12 '21 03:08 ninkin

Hi @CaoWGG I've installed your onnx_tensorrt version, but still get an error converting model_withDCNv2.onnx to tensorrt:

onnx.onnx_cpp2py_export.checker.ValidationError: No Op registered for Plugin with domain_version of 9 ==> Context: Bad node spec: input: "561" input: "562" input: "dla_up.ida_0.proj_1.conv.weight" input: "dla_up.ida_0.proj_1.conv.bias" output: "563" op_type: "Plugin" attribute { name: "info" s: "{"dilation": [1, 1], "padding": [1, 1], "stride": [1, 1], "deformable_groups": 1}" type: STRING } attribute { name: "name" s: "DCNv2" type: STRING }

Is there an additional step I missed for installing the DCNv2 onnx parser plugin?

Same as you. Have you solved it?

hitbuyi avatar Sep 29 '22 12:09 hitbuyi