TensorRT
🐛 [Bug] torch.jit.frontend.NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults
Error:

```
torch.jit.frontend.NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:
  File "/home/user/anaconda3/envs/torchtensorrt/lib/python3.8/site-packages/torch/utils/checkpoint.py", line 145
    def checkpoint(function, *args, **kwargs):
                             ~~~~~~~ <--- HERE
```
I was using the Swin Transformer (source code: https://github.com/microsoft/Swin-Transformer/blob/main/models/swin_transformer.py, with a small modification to the variable types) for inference. Torch-TensorRT worked well with MobileNet and ResNet, but I don't know why this bug popped up. A sketch of the kind of compile call involved follows the environment info below.
Full info:
```
Traceback (most recent call last):
  File "/home/user/Projects/.../swin_eval.py", line 43, in
```
Environment:
- Torch-TensorRT Version: 1.0.0
- PyTorch Version: 1.10.0
- CPU Architecture: x86
- OS: CentOS 7
- How you installed PyTorch (conda, pip, libtorch, source): conda
- Python version: 3.8.13
- CUDA version: 11.6
- GPU models and configuration: NVIDIA RTX 3080 Ti
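For context, the failing call was roughly of this shape. This is only a sketch: the model-building helper and the input shape are assumptions, not details from the report, but the `torch_tensorrt.compile` signature matches the v1.0.0 API.

```python
import torch
import torch_tensorrt

# build_swin_model() is a hypothetical stand-in for constructing the
# Swin Transformer; the input shape below is an assumption.
model = build_swin_model().eval().cuda()

trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.float},
)
```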
This is a limitation of TorchScript. Likely, if you try running torch.jit.script yourself (which we do internally in torch_tensorrt.compile), you will see this error before the model is ever passed to torch_tensorrt.compile.
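To see the limitation in isolation, here is a minimal sketch (the module is illustrative, not from this thread) that reproduces the error without Torch-TensorRT at all:

```python
import torch
import torch.nn as nn

class VarArgModule(nn.Module):
    # TorchScript cannot compile signatures that use *args/**kwargs
    def forward(self, *inputs, **kwargs):
        return inputs[0] + 1

try:
    torch.jit.script(VarArgModule())
except torch.jit.frontend.NotSupportedError as err:
    print(err)  # "Compiled functions can't take variable number of arguments ..."
```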
I got the same error for the following model:
```python
import torch
import torch.nn as nn

class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        # nn.Sequential rather than nn.ModuleList, so the stack is callable in forward()
        self.linear_relu_stack = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(32),
            nn.Conv2d(32, 32, 3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(32),
            nn.Conv2d(32, 32, 3, stride=1, padding=1),
            nn.ReLU(inplace=True),
            nn.BatchNorm2d(32),
        )

    def forward(self, x):
        # Conv2d expects a 4D input, so flatten after the conv stack rather than before
        logits = self.linear_relu_stack(x)
        logits = self.flatten(logits)
        # for _ in range(10):
        #     logits += 10
        return logits
```
Could you elaborate on what exactly the limitation of TorchScript is?
I have the same problem.
```
torch.jit.frontend.NotSupportedError: Compiled functions can't take variable number of arguments or use keyword-only arguments with defaults:
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 147
    def forward(self, *inputs, **kwargs):
                      ~~~~~~~ <--- HERE
        with torch.autograd.profiler.record_function("DataParallel.forward"):
            if not self.device_ids:
```
Can anyone help?
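One thing worth noting from the traceback above: the offending signature is DataParallel.forward(self, *inputs, **kwargs), i.e. the error comes from scripting the nn.DataParallel wrapper, not the model itself. A sketch of one possible workaround (the model class here is a hypothetical stand-in) is to script the underlying module instead:

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):  # hypothetical stand-in for the real model
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x)

wrapped = nn.DataParallel(MyModel())

# torch.jit.script(wrapped) fails because DataParallel.forward uses *inputs/**kwargs;
# the wrapped module has a fixed signature, so script that instead:
scripted = torch.jit.script(wrapped.module)
```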
Same here. Any idea how to fix it?
Yes, any leads on how to fix the issue?
As @narendasan mentioned, this is a limitation of TorchScript itself. If you need the model to run in TorchScript, please file an issue with pytorch/pytorch.
Alternatively, in v1.2 Torch-TRT will have an FX frontend (Beta), which you can experiment with to work around these TorchScript limitations! Details coming soon!
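Usage will look roughly like this; this is only a sketch, assuming the v1.2 API in which torch_tensorrt.compile accepts ir="fx" and sample input tensors (the model here is a trivial placeholder):

```python
import torch
import torch.nn as nn
import torch_tensorrt

# Trivial placeholder model; substitute your own eager-mode nn.Module.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU()).eval().cuda()

# Sketch, assuming the Torch-TRT v1.2 API where compile() accepts ir="fx"
# to route compilation through the FX frontend instead of TorchScript.
trt_model = torch_tensorrt.compile(
    model,
    ir="fx",
    inputs=[torch.randn(1, 3, 224, 224).cuda()],
    enabled_precisions={torch.float},
)
```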
I've got the same issue. Can anyone please elaborate on what exactly the limitation is and how to avoid it? Thanks!
The way to avoid this issue is to change the input signature of the offending function so it does not use *args/**kwargs, and instead expand the arguments out explicitly, as in the sketch below.
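A minimal sketch of that rewrite (class and argument names are illustrative):

```python
import torch
import torch.nn as nn

class Before(nn.Module):
    def forward(self, *inputs, **kwargs):  # TorchScript rejects this signature
        return inputs[0] * kwargs.get("scale", 1.0)

class After(nn.Module):
    # Same behaviour with the arguments expanded out explicitly
    def forward(self, x: torch.Tensor, scale: float = 1.0) -> torch.Tensor:
        return x * scale

torch.jit.script(After())     # compiles fine
# torch.jit.script(Before())  # raises the NotSupportedError above
```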
Hi! Can you please help me understand how I can use the Torch-TensorRT FX frontend to work around this limitation? I am trying to script my PyTorch model (trained on GPU) using torch.jit.script, but I encounter the same error.
Have you solved it yet?