UnboundLocalError when using torch2trt with MultiLabelSoftMarginLoss
Description:
I encountered an `UnboundLocalError` when trying to convert a `MultiLabelSoftMarginLoss` module with torch2trt. The failure occurs because `F.multilabel_soft_margin_loss` calls `loss.mean()` without a `dim` argument, and torch2trt's `convert_mean` converter then references its local variable `dim` before it has been assigned.
```
Traceback (most recent call last):
    model_trt = torch2trt(model, input_data)
  File "/root/miniconda3/envs/nnsmith/lib/python3.9/site-packages/torch2trt-0.4.0-py3.9.egg/torch2trt/torch2trt.py", line 779, in torch2trt
    outputs = module(*inputs)
  File "/root/miniconda3/envs/nnsmith/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/root/miniconda3/envs/nnsmith/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1568, in _call_impl
    result = forward_call(*args, **kwargs)
  File "/root/miniconda3/envs/nnsmith/lib/python3.9/site-packages/torch/nn/modules/loss.py", line 1228, in forward
    return F.multilabel_soft_margin_loss(input, target, weight=self.weight, reduction=self.reduction)
  File "/root/miniconda3/envs/nnsmith/lib/python3.9/site-packages/torch2trt-0.4.0-py3.9.egg/torch2trt/torch2trt.py", line 301, in wrapper
    outputs = method(*args, **kwargs)
  File "/root/miniconda3/envs/nnsmith/lib/python3.9/site-packages/torch/nn/functional.py", line 3487, in multilabel_soft_margin_loss
    ret = loss.mean()
  File "/root/miniconda3/envs/nnsmith/lib/python3.9/site-packages/torch2trt-0.4.0-py3.9.egg/torch2trt/torch2trt.py", line 310, in wrapper
    converter["converter"](ctx)
  File "/root/miniconda3/envs/nnsmith/lib/python3.9/site-packages/torch2trt-0.4.0-py3.9.egg/torch2trt/converters/mean.py", line 19, in convert_mean
    if isinstance(dim, list):
UnboundLocalError: local variable 'dim' referenced before assignment
```
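Judging from the last two frames, `convert_mean` seems to bind `dim` only when a dim argument is actually passed, and then unconditionally checks `isinstance(dim, list)`. The snippet below is a minimal, self-contained reconstruction of that scoping pitfall (hypothetical; `convert_mean_like` is an invented name, not the actual torch2trt source):

```python
def convert_mean_like(args, kwargs):
    # dim is bound only inside these branches
    if len(args) > 1:
        dim = args[1]
    elif 'dim' in kwargs:
        dim = kwargs['dim']
    if isinstance(dim, list):  # UnboundLocalError if neither branch ran
        dim = tuple(dim)
    return dim

print(convert_mean_like((object(), 1), {}))  # fine: dim = 1
try:
    convert_mean_like((object(),), {})       # like loss.mean(): no dim given
except UnboundLocalError as exc:
    print(exc)  # local variable 'dim' referenced before assignment
```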
Reproduce:
Here is a minimal script to reproduce the issue:
```python
import torch
from torch2trt import torch2trt

model = torch.nn.MultiLabelSoftMarginLoss().eval().cuda()
input_data = [torch.randn([5, 10], dtype=torch.float32).cuda(),
              torch.randn([5, 10], dtype=torch.float32).cuda()]
model_trt = torch2trt(model, input_data)
```
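Until the converter handles the no-`dim` case, one possible workaround is to rewrite the loss so that every `mean()` call passes an explicit `dim`. The module below is a sketch under that assumption (`MultiLabelSoftMarginLossExplicit` is an invented name; it mirrors the math of `nn.MultiLabelSoftMarginLoss` with `reduction='mean'`, and whether all of its remaining ops have torch2trt converters is untested):

```python
import torch
import torch.nn.functional as F
from torch.nn import Module

class MultiLabelSoftMarginLossExplicit(Module):
    # Same math as nn.MultiLabelSoftMarginLoss(reduction='mean'),
    # but every mean() passes an explicit dim, so convert_mean
    # never sees a bare loss.mean().
    def forward(self, input, target):
        loss = -(target * F.logsigmoid(input)
                 + (1 - target) * F.logsigmoid(-input))
        loss = loss.mean(dim=1)   # per-sample mean over the class dimension
        return loss.mean(dim=0)   # explicit dim instead of the bare loss.mean()
```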
Environment:
- torch: 2.1.1
- torch2trt: 0.4.0
- tensorrt: 8.6.1
I encountered the same problem with the Softmax operator when executing the following script:
```python
import torch
from torch.nn import Module
from torch2trt import torch2trt

input_data = torch.randn([16, 16, 0], dtype=torch.float32).cuda()

class Softmax(Module):
    def forward(self, *args):
        # dim intentionally omitted, triggering the implicit-dim code path
        return torch.nn.functional.softmax(args[0])

model = Softmax().float().eval().cuda()
model_trt = torch2trt(model, [input_data])
```
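If the goal is to convert a softmax model rather than to exercise this bug, passing `dim` explicitly should sidestep the implicit-dim path. A sketch under that assumption (`SoftmaxExplicit` is an invented name; note that the zero-sized `[16, 16, 0]` input above may fail conversion regardless, since empty tensors are a separate issue for TensorRT):

```python
import torch
from torch.nn import Module
from torch2trt import torch2trt

class SoftmaxExplicit(Module):
    def forward(self, *args):
        # explicit dim avoids the implicit-dim handling
        return torch.nn.functional.softmax(args[0], dim=-1)

input_data = torch.randn([16, 16, 4], dtype=torch.float32).cuda()  # non-empty input
model_trt = torch2trt(SoftmaxExplicit().eval().cuda(), [input_data])
```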