Yolov5-TensorRT
UNSUPPORTED_NODE: Assertion failed: scales.is_weights() && "Resize scales must be an initializer!"
Hi.
Thank you for your great work.
I used your code to convert a custom ONNX model to a TensorRT model, but the conversion fails.
The error message is below:
[TensorRT] WARNING: onnx2trt_utils.cpp:362: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
In node 251 (importResize): UNSUPPORTED_NODE: Assertion failed: scales.is_weights() && "Resize scales must be an initializer!"
[TensorRT] ERROR: 4: [network.cpp::validate::2411] Error Code 4: Internal Error (Network must have at least one output)
Traceback (most recent call last):
File "convert.py", line 78, in
How can I fix this?
I don't know much about TensorRT or yolov5, so I need your help.
Thanks.
@PeterKim1 Could you share the model? If you could share the torch model, we could take a deeper look, because it seems there is something wrong with the ONNX model.
If you cannot share the model: FYI, TensorRT only supports the Resize scale factor as constant weights; that's why the error happened. So open your ONNX model in Netron to check whether the scale factor is constant in whichever layer (BatchNorm) it is used in.
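If Netron is inconvenient, you can also check this programmatically. Here is a minimal inspection sketch, assuming the `onnx` Python package is installed; the file name "model.onnx" is a placeholder for your exported model:

```python
# Check whether each Resize node's scales input is a constant (initializer or Constant node).
import onnx

model = onnx.load("model.onnx")  # placeholder path, replace with your ONNX file
graph = model.graph

# Tensor names that are constant at import time.
initializers = {init.name for init in graph.initializer}
constant_outputs = {out for node in graph.node
                    if node.op_type == "Constant" for out in node.output}

for node in graph.node:
    if node.op_type != "Resize":
        continue
    # Opset >= 11: inputs are (X, roi, scales, sizes); opset 10: (X, scales).
    scales = node.input[2] if len(node.input) > 2 else node.input[1]
    is_constant = (scales == "") or (scales in initializers) or (scales in constant_outputs)
    print(f"{node.name or '<unnamed Resize>'}: scales={scales or '<empty>'} constant={is_constant}")
```

Any Resize node reported with `constant=False` is the one the TensorRT ONNX parser will reject.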
@PeterKim1 more discussion here:
- https://github.com/NVIDIA/TensorRT/issues/825
- https://github.com/WongKinYiu/yolov7/issues/812
- https://forums.developer.nvidia.com/t/assertion-failed-scales-is-weights-resize-scales-must-be-an-initializer/147342
-> you might want to change the opset and see if it helps
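In case it helps, here is a rough sketch of re-exporting the model from PyTorch with a different opset and with constant folding enabled, which often turns the Resize scales into initializers. The checkpoint path, input size, and tensor names are placeholders, and you need to run it from inside the yolov5 repo so the checkpoint can be unpickled; I believe the stock yolov5 export script also exposes an opset option that does the same thing.

```python
# Rough re-export sketch; "best.pt", the 640x640 input size, and the tensor
# names are placeholders -- adjust them to your custom model.
import torch

ckpt = torch.load("best.pt", map_location="cpu")   # yolov5 checkpoints keep the model under the "model" key
model = ckpt["model"].float().eval()

dummy = torch.zeros(1, 3, 640, 640)
torch.onnx.export(
    model, dummy, "best.onnx",
    opset_version=12,           # try 11 or 12
    do_constant_folding=True,   # helps fold Resize scale tensors into initializers
    input_names=["images"],
    output_names=["output"],
)
```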
@PeterKim1 I had the same issue. I used the trtexec utility from TensorRT to convert the ONNX model to a TRT engine and it worked. Maybe you can try that, e.g. roughly like the command below.
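A minimal invocation sketch, assuming trtexec is on your PATH (on many installs it lives under the TensorRT bin directory, e.g. /usr/src/tensorrt/bin); the file names are placeholders:

```bash
trtexec --onnx=model.onnx --saveEngine=model.engine
```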