pan_pp.pytorch
Question about converting to ONNX
I have read your converter code and successfully converted the model to a CPU version. It runs, at about 11 s per image. To further improve speed I tried converting to ONNX, but ran into problems. Could you please advise and give the correct conversion method? My code is below (placed right after model.load_state_dict(d) in the __init__ method of TestModel.py):

```python
import onnx
import onnxruntime
import torch

export_onnx_file = './net.onnx'
torch.onnx.export(model,
                  torch.randn(1, 1, 224, 224, device='cuda'),
                  export_onnx_file,
                  verbose=False,
                  input_names=["inputs"] + ["params_%d" % i for i in range(120)],
                  output_names=["outputs"],
                  opset_version=10,
                  do_constant_folding=True,
                  dynamic_axes={"inputs": {0: "batch_size", 2: "h", 3: "w"},
                                "outputs": {0: "batch_size"}})

net = onnx.load('./net.onnx')
onnx.checker.check_model(net)
onnx.helper.printable_graph(net.graph)
```
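For reference, once the export succeeds, a rough latency check with onnxruntime might look like the sketch below. It reuses the 1x1x224x224 dummy shape and the "inputs"/"outputs" names from the export call above, and assumes the extra params_* inputs keep their stored initializer values; adjust to the network's real preprocessing.

```python
import time
import numpy as np
import onnxruntime

# a rough single-image latency check on CPU (sketch only)
sess = onnxruntime.InferenceSession('./net.onnx', providers=['CPUExecutionProvider'])
dummy = np.random.randn(1, 1, 224, 224).astype(np.float32)

t0 = time.time()
sess.run(['outputs'], {'inputs': dummy})  # single forward pass
print('%.3fs per image' % (time.time() - t0))
```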
I think it's because the tensor sizes can't be traced through the upsampling operations during export.
Try this!
fpem_v2.py
```python
def _upsample_add(self, x, y):
    # _, _, H, W = y.size()
    # return F.interpolate(x, size=(H, W), mode='bilinear') + y
    _, _, H, W = y.size()
    upsample = nn.Upsample(size=(H, W), mode='bilinear')  # , align_corners=True
    return upsample(x) + y
```
pan_pp.py
```python
def _upsample(self, x, size, scale=1):
    # _, _, H, W = size
    # return F.interpolate(x, size=(H // scale, W // scale), mode='bilinear')
    _, _, H, W = size
    upsample = nn.Upsample(size=(H // scale, W // scale), mode='bilinear')  # , align_corners=True
    return upsample(x)
```
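A possible alternative, sketched here under the assumption that adjacent feature maps in the FPEM always differ by a fixed 2x stride (the repo's fixes above are size-based instead): using scale_factor rather than a runtime size keeps the resize ratio constant in the traced graph, so no dynamic shape tensor is created at all.

```python
import torch.nn.functional as F

def _upsample_add(self, x, y):
    # the constant scale_factor is baked into the exported Resize/Upsample node,
    # avoiding the size=(H, W) values computed from y.size() at runtime
    return F.interpolate(x, scale_factor=2, mode='bilinear') + y
```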
export2onnx
```python
dynamic_axes = {
    'in': {
        0: 'batch',
        2: 'Height',  # dim 2 is height in NCHW
        3: 'Width'
    },
    'out': {
        0: 'batch',
        2: 'Height',
        3: 'Width'
    }
}
torch.onnx.export(
    model,
    inputData,  # dummy input tensor used for tracing
    "test.onnx",
    input_names=["in"],
    output_names=["out"],
    dynamic_axes=dynamic_axes,
)
```
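To confirm the dynamic axes actually took effect, one quick check (a sketch; the 3-channel input shape and the two test resolutions are assumptions) is to run the exported file at two different sizes:

```python
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("test.onnx", providers=["CPUExecutionProvider"])
# both sizes should run without error if height/width are truly dynamic
for h, w in [(224, 224), (320, 256)]:
    dummy = np.random.randn(1, 3, h, w).astype(np.float32)
    out = sess.run(["out"], {"in": dummy})[0]
    print(out.shape)
```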
With these changes, does the model need to be retrained before generating the ONNX file again?
Is the inputData here the value I provided in my earlier code?
Can it support both CPU and GPU at the same time?
Please check this code!
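On the CPU/GPU question above: the exported .onnx file itself is device-neutral; the device is chosen when the runtime session is created. A minimal sketch with onnxruntime (the CUDAExecutionProvider requires the onnxruntime-gpu package; this is an illustration, not part of the repo):

```python
import onnxruntime as ort

# prefer GPU when available, fall back to CPU; the same file serves both
providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
sess = ort.InferenceSession("test.onnx", providers=providers)
print(sess.get_providers())  # lists the providers that were actually loaded
```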