
Error when running the example on the repository's homepage.

Gasoonjia opened this issue 3 years ago • 2 comments

I tried to run the example shown on the homepage:

import torch
from torch2trt import torch2trt
from torchvision.models.alexnet import alexnet

device = torch.device("cpu")

# create some regular pytorch model...
model = alexnet(pretrained=True).to(device)

# create example data
x = torch.ones((1, 3, 224, 224)).to(device)

# convert to TensorRT feeding sample data as input
model_trt = torch2trt(model, [x])

y = model(x)
y_trt = model_trt(x)

# check the output against PyTorch
print(torch.max(torch.abs(y - y_trt)))

which did not work; I got the following error:

 /usr/local/lib/python3.6/dist-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at  /media/nvidia/NVME/pytorch/pytorch-v1.9.0/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)

[TensorRT] ERROR: 4: Tensor: output_0 trying to set to TensorLocation::kHOST but only kDEVICE is supported (only network inputs may be on host)
[TensorRT] ERROR: 4: [network.cpp::validate::2506] Error Code 4: Internal Error (Tensor: input_0 set to TensorLocation::kHOST but only kDEVICE is supported (only RNNv2 allows host input))
Traceback (most recent call last):
  File "test_inference.py", line 17, in <module>
    y_trt = model_trt(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch2trt-0.3.0-py3.6.egg/torch2trt/torch2trt.py", line 466, in forward
    idx = self.engine.get_binding_index(output_name)
AttributeError: 'NoneType' object has no attribute 'get_binding_index'

I ran the code on a Jetson Nano with JetPack 4.6. I also ran the code and installed TensorRT inside the Docker container provided in the setup section.

I have tried several ways to deal with the problem, but none of them worked.

Please help, and thanks in advance!

Gasoonjia avatar Nov 24 '21 05:11 Gasoonjia

You are running your model and the input tensor on the CPU; both should be on the GPU (see the corrected sketch below).

SrivastavaKshitij avatar Feb 17 '22 16:02 SrivastavaKshitij
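For reference, a minimal sketch of the corrected script, assuming a CUDA-capable device such as the Jetson Nano (torch.cuda.is_available() should return True before attempting the conversion). It mirrors the repository's homepage example, which places both the model and the sample input on the GPU:

import torch
from torch2trt import torch2trt
from torchvision.models.alexnet import alexnet

# create the model on the GPU in eval mode, as in the homepage example
model = alexnet(pretrained=True).cuda().eval()

# the sample input must also live on the GPU
x = torch.ones((1, 3, 224, 224)).cuda()

# convert to TensorRT, feeding the sample data as input
model_trt = torch2trt(model, [x])

y = model(x)
y_trt = model_trt(x)

# check the TensorRT output against PyTorch
print(torch.max(torch.abs(y - y_trt)))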

I have the same problem on a Jetson Nano with JetPack 4.6.

njustczr avatar Mar 04 '22 08:03 njustczr