TensorRT
TensorRT cannot support the sort operation from PyTorch
TensorRT cannot support the sort operation from PyTorch?! Here is the PyTorch code I am trying to convert:
```python
# Keep at most n_limit keypoints, ranked by score
if len(indices) > self.n_limit:
    kpts_sc = scores[indices]
    # sort scores in descending order and keep the top n_limit indices
    sort_idx = kpts_sc.sort(descending=True)[1]
    sel_idx = sort_idx[:self.n_limit]
    indices = indices[sel_idx]
indices_keypoints.append(indices)
```
I don't get what the problem is, but https://github.com/onnx/onnx-tensorrt/blob/main/docs/operators.md might be helpful.
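To compare an export against that support matrix, one quick check (a minimal sketch, not from this thread; `model.onnx` is a placeholder path) is to list the ONNX op types the exported graph actually contains:

```python
# Minimal sketch: list the ONNX op types used by an exported model so they can
# be checked against the onnx-tensorrt support matrix.
# "model.onnx" is a placeholder path, not a file from this thread.
import onnx

model = onnx.load("model.onnx")
print(sorted({node.op_type for node in model.graph.node}))
```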
I mean that when I convert `sort_idx = kpts_sc.sort(descending=True)[1]` from PyTorch to TensorRT, TensorRT cannot support the sort operation (it becomes the TopK node below):

```
[08/30/2022-16:25:50] [V] [TRT] Using Gather axis: 0
[08/30/2022-16:25:50] [V] [TRT] Registering layer: Gather_91 for ONNX node: Gather_91
[08/30/2022-16:25:50] [V] [TRT] Registering tensor: onnx::TopK_166 for ONNX tensor: onnx::TopK_166
[08/30/2022-16:25:50] [V] [TRT] Gather_91 [Gather] outputs: [onnx::TopK_166 -> (1)[INT32]],
[08/30/2022-16:25:50] [V] [TRT] Parsing node: TopK_92 [TopK]
[08/30/2022-16:25:50] [V] [TRT] Searching for input: onnx::Shape_163
[08/30/2022-16:25:50] [V] [TRT] Searching for input: onnx::TopK_166
[08/30/2022-16:25:50] [V] [TRT] TopK_92 [TopK] inputs: [onnx::Shape_163 -> (8363)[FLOAT]], [onnx::TopK_166 -> (1)[INT32]],
[08/30/2022-16:25:50] [E] [TRT] ModelImporter.cpp:773: While parsing node number 59 [TopK -> "167"]:
[08/30/2022-16:25:50] [E] [TRT] ModelImporter.cpp:774: --- Begin node ---
[08/30/2022-16:25:50] [E] [TRT] ModelImporter.cpp:775: input: "onnx::Shape_163" input: "onnx::TopK_166" output: "167" output: "onnx::Slice_168" name: "TopK_92" op_type: "TopK" attribute { name: "axis" i: -1 type: INT } attribute { name: "largest" i: 1 type: INT }
[08/30/2022-16:25:50] [E] [TRT] ModelImporter.cpp:776: --- End node ---
[08/30/2022-16:25:50] [E] [TRT] ModelImporter.cpp:779: ERROR: ModelImporter.cpp:163 In function parseGraph: [6] Invalid Node - TopK_92 This version of TensorRT only supports input K as an initializer. Try applying constant folding on the model using Polygraphy: https://github.com/NVIDIA/TensorRT/tree/master/tools/Polygraphy/examples/cli/surgeon/02_folding_constants
[08/30/2022-16:25:50] [E] Failed to parse onnx file
[08/30/2022-16:25:50] [I] Finish parsing network model
[08/30/2022-16:25:50] [E] Parsing model failed
[08/30/2022-16:25:50] [E] Failed to create engine from model or file.
[08/30/2022-16:25:50] [E] Engine set up failed
```
> This version of TensorRT only supports input K as an initializer. Try applying constant folding on the model using Polygraphy: https://github.com/NVIDIA/TensorRT/tree/master/tools/Polygraphy/examples/cli/surgeon/02_folding_constants
This is why it fails: K must be a constant, not a tensor. Also, we only support sorted output.
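A common workaround is a minimal sketch like the one below (not from this thread; it assumes `n_limit` is a plain Python int known at export time): replacing `sort(descending=True)[1][:n_limit]` with `topk(n_limit)` makes the exported TopK take a constant K as an initializer instead of a value computed from Shape/Gather.

```python
# Sketch of an export-friendly rewrite of the snippet in the issue body.
# Assumption: n_limit is a plain Python int at export time, so K is baked
# into the ONNX graph as a constant/initializer.
import torch

def limit_keypoints(indices: torch.Tensor, scores: torch.Tensor, n_limit: int) -> torch.Tensor:
    """Keep at most n_limit keypoint indices, ranked by score."""
    if indices.shape[0] > n_limit:
        kpts_sc = scores[indices]
        # topk with a constant k exports to an ONNX TopK whose K input is a
        # constant; sorted=True keeps the descending order that
        # sort(descending=True) produced.
        sel_idx = kpts_sc.topk(n_limit, largest=True, sorted=True)[1]
        indices = indices[sel_idx]
    return indices
```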
I use torch.where(mask != 0), but it still fails with an error on the node name: "NonZero_559". However, according to https://github.com/onnx/onnx-tensorrt/blob/main/docs/operators.md, TensorRT supports torch.where.
The single-argument form of torch.where calls the torch.nonzero function, which exports to the ONNX NonZero op.
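A small sketch of the difference (not from the thread): the one-argument call goes through nonzero and exports as a NonZero node, while the three-argument call exports as the ONNX Where op listed in the support matrix.

```python
# Sketch (not from the thread): the two forms of torch.where export to
# different ONNX ops.
import torch

mask = torch.tensor([[0.0, 1.0], [2.0, 0.0]])

# One-argument form: equivalent to torch.nonzero(mask != 0, as_tuple=True),
# so the exported graph contains a NonZero node.
rows, cols = torch.where(mask != 0)

# Three-argument form: element-wise select, exported as an ONNX Where node.
filled = torch.where(mask != 0, mask, torch.zeros_like(mask))
```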
> TensorRT cannot support the sort operation from PyTorch?!

Have you solved it yet? I have the same problem.
NonZero is already supported in the latest TRT, closing and thanks all!