
🐛 [Bug] Unsupported operator: aten::lstm.input, aten::Int.Tensor

Open ilgrad opened this issue 3 years ago • 5 comments

Bug Description

ERROR: [Torch-TensorRT] - Method requested cannot be compiled by Torch-TensorRT.TorchScript. Unsupported operators listed below:

  • aten::lstm.input(Tensor input, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor, Tensor)
  • aten::Int.Tensor(Tensor a) -> (int)
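Since full compilation rejects these operators, one possible workaround is partial compilation, which leaves unsupported ops in PyTorch and converts the rest. A minimal sketch, assuming the Torch-TensorRT TorchScript frontend; `model` and `example_input` are hypothetical placeholders, and the exact op-name strings accepted by `torch_executed_ops` should be checked against your Torch-TensorRT version (this fragment also needs a TensorRT-capable GPU to actually run):

```python
import torch
import torch_tensorrt

# "model" and "example_input" are placeholders for the user's module and input.
trt_module = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input(example_input.shape, dtype=torch.float32)],
    enabled_precisions={torch.float32},
    # Allow graph partitioning: unsupported ops such as aten::lstm.input
    # fall back to PyTorch, while supported subgraphs run in TensorRT.
    require_full_compilation=False,
    torch_executed_ops=["aten::lstm.input", "aten::Int.Tensor"],
)
```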

Additional context

CRNN model conversion (example https://github.com/GitYCC/crnn-pytorch)

ilgrad avatar Nov 12 '21 01:11 ilgrad

I get the following error while compiling YOLOR module:

ERROR: [Torch-TensorRT] - Unsupported operator: aten::Int.Tensor(Tensor a) -> (int)

rodja avatar Dec 19 '21 13:12 rodja

Also getting the following error:

ERROR: [Torch-TensorRT] - Method requested cannot be compiled by Torch-TensorRT.TorchScript.
Unsupported operators listed below:
  - aten::lstm.input(Tensor input, Tensor[] hx, Tensor[] params, bool has_biases, int num_layers, float dropout, bool train, bool bidirectional, bool batch_first) -> (Tensor, Tensor, Tensor)
  - aten::zeros_like(Tensor self, *, int? dtype=None, int? layout=None, Device? device=None, bool? pin_memory=None, int? memory_format=None) -> (Tensor)

When trying to get a bidirectional LSTM converted by trt:

import torch
import torch_tensorrt

# The compiled module will run with the precisions listed in
# "enabled_precisions". Here, it will have FP32 precision.
trt_model_fp32 = torch_tensorrt.compile(traced_model, **{
    #"inputs": [torch_tensorrt.Input((128, 3, 224, 224), dtype=torch.float32)],
    "inputs": [torch_tensorrt.Input(x.shape, dtype=torch.int32)],
    "enabled_precisions": {torch.float32},  # Run with FP32
    "workspace_size": 1 << 22
})

Previously we had an additional error, which we fixed by using torch.transpose(x, 0, 1) in the forward call instead of x.T.

That one was much easier to debug, since the error hinted at a NumPy-operation incompatibility:

- aten::numpy_T(Tensor(a) self) -> (Tensor(a))

But the errors that remain, as listed above, are more cryptic.
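To illustrate the swap described above: on a 2-D tensor, x.T lowers to aten::numpy_T (which Torch-TensorRT rejects), while torch.transpose(x, 0, 1) gives the same result through a supported op. A minimal sketch with a hypothetical module, not the issue's actual model:

```python
import torch

class Permute(torch.nn.Module):
    def forward(self, x):
        # x.T would lower to aten::numpy_T; torch.transpose(x, 0, 1)
        # swaps the first two dims via a convertible op instead.
        return torch.transpose(x, 0, 1)

x = torch.arange(6.0).reshape(2, 3)
assert torch.equal(Permute()(x), x.T)  # same values, convertible graph
```

Note that for tensors with more than two dimensions, x.T reverses all dimensions while torch.transpose(x, 0, 1) swaps only the first two, so the two are interchangeable only in the 2-D case.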

a1ultima avatar Jan 10 '22 17:01 a1ultima

Got the same error while compiling a bidirectional nn.LSTM (BiLSTM).

phybrain avatar Jan 25 '22 07:01 phybrain

The outstanding layers have been added as feature requests. Adding the bug tag since, with require_full_compilation = False, partial compilation should work.

ncomly-nvidia avatar Apr 25 '22 16:04 ncomly-nvidia

This issue has not seen activity for 90 days. Remove the stale label or comment, or this will be closed in 10 days.

github-actions[bot] avatar Aug 17 '22 00:08 github-actions[bot]

This issue has not seen activity for 90 days. Remove the stale label or comment, or this will be closed in 10 days.

github-actions[bot] avatar Nov 21 '22 00:11 github-actions[bot]

This issue has not seen activity for 90 days. Remove the stale label or comment, or this will be closed in 10 days.

github-actions[bot] avatar Mar 17 '23 00:03 github-actions[bot]