
Error when I load the onnx model to the tensorrt format

Open ShunLu91 opened this issue 4 years ago • 2 comments

Hi,
Thanks for the authors' nice work! When I used the pre-defined efficientnet-b3 model, I managed to export the ONNX model, but I ran into the following intractable problem when I loaded the exported ONNX file and tried to convert it to the TensorRT format:

In node -1 (importPad): UNSUPPORTED_NODE: Assertion failed: inputs.at(1).is_weights()

Has anyone ever encountered the same problem and knows how to solve it? I have tried the methods below but still haven't solved it:

  • https://github.com/onnx/onnx-tensorrt/blob/master/docs/faq.md
  • https://zhuanlan.zhihu.com/p/403793002
  • https://github.com/onnx/onnx-tensorrt/issues/411

I hope someone can help me get out of this dilemma. (I originally intended to attach the ONNX file, but it exceeds the size limit. :( )

ShunLu91 avatar Sep 16 '21 15:09 ShunLu91


Try this: model = EfficientNet.from_name(arch, {'num_classes': num_classes}) followed by model.set_swish(memory_efficient=False). Also set opset_version=10; I encountered an error with 11, which may depend on the PyTorch version: torch.onnx.export(model, tensor, of, input_names=['input'], output_names=['output'], opset_version=10). At last, use onnx2trt to convert the model. Just have a try, good luck!

PyTorch 1.8.1, TensorRT 7.1.3

JackyWang-001 avatar Sep 27 '21 07:09 JackyWang-001


Thanks for your nice help, and the above code really works! But when I exported the 'efficientnet-b1' model in this environment, I found another problem, as below:

  • The original model's inference time is 0.3499491214752197 s on a V100 GPU.
  • The quantized model's inference time is 0.43619203567504883 s on the same device.

The above results are really strange, and there is no error. Did you encounter the same problem?
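One thing worth ruling out before comparing those numbers: CUDA kernels launch asynchronously, so a single un-synchronized timing can measure launch overhead rather than actual inference, and can easily make the faster model look slower. The sketch below is my own benchmark scaffold, not code from this thread; timed_inference is a hypothetical helper, and the tiny Conv2d stand-in model exists only so the sketch runs anywhere (swap in the real model and a V100 input for a meaningful comparison).

```python
# Sketch: a fairer GPU/CPU inference benchmark with warm-up and
# device synchronization (assumed scaffold, not from the thread).
import time
import torch

def timed_inference(model, x, warmup=5, iters=20):
    """Return the mean per-iteration forward time in seconds."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):       # warm-up excludes one-time setup costs
            model(x)
        if x.is_cuda:
            torch.cuda.synchronize()  # drain pending kernels before timing
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        if x.is_cuda:
            torch.cuda.synchronize()  # wait for all timed kernels to finish
    return (time.perf_counter() - start) / iters

# Tiny stand-in model so the sketch is self-contained.
net = torch.nn.Conv2d(3, 8, kernel_size=3)
print(f"mean forward time: {timed_inference(net, torch.randn(1, 3, 64, 64)):.6f} s")
```

If the quantized model is still slower after warm-up and synchronization, the slowdown is likely real, e.g. quantize/dequantize overhead dominating at batch size 1.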

ShunLu91 avatar Oct 05 '21 11:10 ShunLu91