TensorRT
🐛 [Bug] Unsupported operator: aten::lstm.input, aten::Int.Tensor
Bug Description
ERROR: [Torch-TensorRT] - Method requested cannot be compiled by Torch-TensorRT.TorchScript. Unsupported operators listed below:
- aten::lstm.input(Tensor input, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor, Tensor)
- aten::Int.Tensor(Tensor a) -> (int)
Additional context
CRNN model conversion (example https://github.com/GitYCC/crnn-pytorch)
I get the following error while compiling a YOLOR module:
ERROR: [Torch-TensorRT] - Unsupported operator: aten::Int.Tensor(Tensor a) -> (int)
Also getting the following error:
ERROR: [Torch-TensorRT] - Method requested cannot be compiled by Torch-TensorRT.TorchScript.
Unsupported operators listed below:
- aten::lstm.input(Tensor input, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor, Tensor)
- aten::zeros_like(Tensor self, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor)
When trying to get a bidirectional LSTM converted by Torch-TensorRT:
import torch_tensorrt

# The compiled module will run with the precisions listed in "enabled_precisions".
# Here, it is compiled for FP32.
trt_model_fp32 = torch_tensorrt.compile(traced_model, **{
    # "inputs": [torch_tensorrt.Input((128, 3, 224, 224), dtype=torch.float32)],
    "inputs": [torch_tensorrt.Input(x.shape, dtype=torch.int32)],
    "enabled_precisions": {torch.float32},  # Run with FP32
    "workspace_size": 1 << 22,
})
Previously we had an additional error, which we fixed by calling torch.transpose(x, 0, 1) in the forward method instead of x.T. That one was much easier to debug, since the error hinted at a numpy-operation incompatibility:
- aten::numpy_T(Tensor(a) self) -> (Tensor(a))
But the errors that remain, as listed above, are more cryptic.
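For reference, the substitution we applied looks like the following (a minimal sketch; the module and tensor shapes are illustrative, not the actual model):

```python
import torch
import torch.nn as nn

class Example(nn.Module):
    def forward(self, x):
        # x.T is scripted as aten::numpy_T, which Torch-TensorRT cannot convert:
        # return x.T
        # torch.transpose with explicit dims avoids that lowering:
        return torch.transpose(x, 0, 1)

y = Example()(torch.zeros(2, 3))
print(y.shape)  # torch.Size([3, 2])
```

For a 2-D tensor the two forms are numerically identical, so the swap only changes which ATen op the traced graph contains.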
Got the same error while compiling a bidirectional nn.LSTM.
The outstanding layers have been added as feature requests. Adding the bug tag, since with require_full_compilation = False partial compilation should work.
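As an interim workaround, the unsupported ops can be left to run in PyTorch via partial compilation. A hedged sketch, reusing traced_model and x from the snippet above (the exact set of ops to exclude, and option availability, depend on the installed Torch-TensorRT version):

```python
import torch
import torch_tensorrt

trt_model = torch_tensorrt.compile(
    traced_model,
    inputs=[torch_tensorrt.Input(x.shape, dtype=torch.int32)],
    enabled_precisions={torch.float32},
    # Allow unsupported subgraphs to fall back to PyTorch execution:
    require_full_compilation=False,
    # Force the ops Torch-TensorRT cannot convert to stay in PyTorch:
    torch_executed_ops=["aten::lstm.input", "aten::Int.Tensor"],
)
```

The converted model then runs the LSTM segment in PyTorch and the rest in TensorRT, which trades some speedup for compatibility until converters for these ops land.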
This issue has not seen activity for 90 days. Remove the stale label or comment, or this will be closed in 10 days.