CLRNet-onnxruntime-and-tensorrt-demo
RuntimeError: Exporting the operator grid_sampler to ONNX opset version 11 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.
Hi,
When running `python torch2onnx.py <config_file> --load_from <pth_file>` I get this crash:
RuntimeError: Exporting the operator grid_sampler to ONNX opset version 11 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.
@xuanandsix, How did you resolve this at your end? Thanks!
In this repo, a custom `grid_sample` op is imported (clr_head.py, lines 21 and 126) to replace `F.grid_sample` (clr_head.py, line 123). Specifically, please refer to the "convert and test onnx" part of the README:
3. cp clr_head.py (from this repo) to your_path/CLRNet/clrnet/models/heads/
4. mkdir your_path/CLRNet/modules/ and cp grid_sample.py (from this repo) to your_path/CLRNet/modules/
Thanks @xuanandsix!
So, after this change I get this crash: https://github.com/xuanandsix/CLRNet-onnxruntime-and-tensorrt-demo/issues/3
Do I need to retrain model with this new clr_head.py?
No, this code is to help deploy the trained model from official code.
Hi, have you solved the problem? After training with the original clr_head.py, I found that the size of my weights differs from the official weights. The trained weights work fine for inference, but when converting to ONNX I hit the same error you did. However, when I retrained with the new clr_head.py, I got this error: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! Do you have any good suggestions? Thank you!
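That device-mismatch error typically means some tensor in the head (often a constant grid or index built with `torch.linspace`/`torch.arange`) is created on CPU while the features are on cuda:0. A common fix is to create such constants on the input's device. A hypothetical sketch (`make_sample_grid` is an illustrative helper, not a function from clr_head.py):

```python
import torch

def make_sample_grid(feat, num_points=8):
    # Constants created without an explicit device default to CPU, which
    # raises "Expected all tensors to be on the same device" when `feat`
    # lives on cuda:0. Creating them on feat.device avoids the crash.
    ys = torch.linspace(-1.0, 1.0, num_points, device=feat.device)
    xs = torch.zeros_like(ys)                      # inherits feat.device
    grid = torch.stack((xs, ys), dim=-1).view(1, num_points, 1, 2)
    # Broadcast the grid across the batch dimension.
    return grid.expand(feat.size(0), -1, -1, -1)
```

Searching the modified clr_head.py / grid_sample.py for tensor constructors missing a `device=` argument (or adding `.to(x.device)` where the grid is built) usually resolves this class of error.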