Error when converting the exported ONNX model to the TensorRT format
Hi,
Thanks to the authors for the nice work!
However, with the predefined efficientnet-b3 model I managed to export the ONNX model, but ran into the following intractable problem when I loaded the exported ONNX file and tried to convert it to the TensorRT format:
In node -1 (importPad): UNSUPPORTED_NODE: Assertion failed: inputs.at(1).is_weights()
Has anyone encountered the same problem and knows how to solve it? I have tried the methods below but still haven't solved it:
- https://github.com/onnx/onnx-tensorrt/blob/master/docs/faq.md
- https://zhuanlan.zhihu.com/p/403793002
- https://github.com/onnx/onnx-tensorrt/issues/411
Hope someone can help me out of this dilemma. (I originally intended to attach the ONNX file, but it exceeds the size limit. :( )
Try this: build the model with `model = EfficientNet.from_name(arch, {'num_classes': num_classes})` and call `model.set_swish(memory_efficient=False)`. Also export with `opset_version=10`; I encountered the error with 11 (it may depend on the PyTorch version): `torch.onnx.export(model, tensor, of, input_names=['input'], output_names=['output'], opset_version=10)`. Finally, use onnx2trt to convert the model. Just give it a try, good luck!
PyTorch 1.8.1, TensorRT 7.1.3
Thanks for your kind help, and the above code really works! But when I exported the 'efficientnet-b1' model in this environment, I found another problem:
- The original model's inference time is 0.3499491214752197 on a V100 GPU.
- The quantized model's inference time is 0.43619203567504883 on the same device.

These results are really strange, and there is no error. Did you encounter the same problem?
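One thing worth checking before concluding the engine is slower (this is only a guess about the timing harness, not a confirmed diagnosis): CUDA kernels launch asynchronously, so wall-clock timing without synchronization can make the two numbers incomparable. A hypothetical harness with explicit sync points, where `model` and `inp` are placeholders:

```python
# Hypothetical timing harness: CUDA kernels run asynchronously, so timings
# taken without torch.cuda.synchronize() mostly measure launch overhead,
# not actual execution time.
import time
import torch

def timed(model, inp, iters=100, warmup=10):
    # No-op sync on CPU tensors so the harness also runs without a GPU
    sync = torch.cuda.synchronize if inp.is_cuda else (lambda: None)
    with torch.no_grad():
        for _ in range(warmup):   # warm up kernels and the allocator
            model(inp)
        sync()                    # drain queued work before timing
        start = time.time()
        for _ in range(iters):
            model(inp)
        sync()                    # wait for every launched kernel
    return (time.time() - start) / iters
```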