torch2trt
AttributeError: 'tensorrt.tensorrt.Builder' object has no attribute 'max_workspace_size'
When I run the Usage demo:
import torch
from torch2trt import torch2trt
from torchvision.models.alexnet import alexnet
# create some regular pytorch model...
model = alexnet(pretrained=True).eval().cuda()
# create example data
x = torch.ones((1, 3, 224, 224)).cuda()
# convert to TensorRT feeding sample data as input
model_trt = torch2trt(model, [x])
An error occurs:
AttributeError Traceback (most recent call last)
~/Documents/github/fast-reid/demo/convert2trt.py in <module>
----> 1 model_trt = torch2trt(model_alex, [x])
~/anaconda3/envs/detect2/lib/python3.6/site-packages/torch2trt-0.2.0-py3.6.egg/torch2trt/torch2trt.py in torch2trt(module, inputs, input_names, output_names, log_level, max_batch_size, fp16_mode, max_workspace_size, strict_type_constraints, keep_network, int8_mode, int8_calib_dataset, int8_calib_algorithm, int8_calib_batch_size, use_onnx, **kwargs)
546 ctx.mark_outputs(outputs, output_names)
547
--> 548 builder.max_workspace_size = max_workspace_size
549 builder.fp16_mode = fp16_mode
550 builder.max_batch_size = max_batch_size
AttributeError: 'tensorrt.tensorrt.Builder' object has no attribute 'max_workspace_size'
What is the problem? Many thanks!
Hi, I'm facing the same issue. I checked the API reference and the code is written correctly. I'll check back with you once I've resolved it.
Got the same error when running with the TensorRT 8.0.0.3 Python package.
With nvidia-tensorrt 7.2.3.4 it works fine:
!pip install nvidia-tensorrt==7.2.* --index-url https://pypi.ngc.nvidia.com
Yup, changed back to TensorRT 7 and it works fine.
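As a quick sanity check (a minimal sketch, assuming the standard tensorrt Python bindings are installed), this shows which TensorRT build Python actually resolves and whether it still has the old builder attribute that torch2trt 0.2.0 relies on:

```python
# Quick check of which TensorRT the environment resolves to.
import tensorrt as trt

print(trt.__version__)  # 7.x works with torch2trt 0.2.0; 8.x triggers the AttributeError above

# On TensorRT 7 the builder still exposes max_workspace_size; on 8.0+ it does not.
builder = trt.Builder(trt.Logger(trt.Logger.WARNING))
print(hasattr(builder, "max_workspace_size"))
```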
If you are facing any issues, I suggest just removing and re-cloning the whole repo. You'll need to change the `is not` to `!=` in dummy_converters.py (see the illustration below).
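For context, the exact line in dummy_converters.py is not quoted in this thread, but the kind of change being suggested is replacing an identity check with a value comparison; a hypothetical illustration:

```python
# Hypothetical illustration only -- the real line in dummy_converters.py may differ.
# `is not` compares object identity, which is unreliable for strings and raises a
# SyntaxWarning on Python 3.8+ when used with a literal; `!=` compares values.
method_name = "".join(["__init", "__"])   # equal in value to '__init__', but a distinct object

print(method_name is not "__init__")  # True, even though the strings are equal in value
print(method_name != "__init__")      # False -- the value comparison behaves as intended
```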
I have come across the same problem. Can anyone solve it?
Cloning and installing this fork worked for me: https://github.com/gcunhase/torch2trt
Got the same error when running with the TensorRT 8.0.0.3 Python package. With nvidia-tensorrt 7.2.3.4 it works fine.
!pip install nvidia-tensorrt==7.2.* --index-url https://pypi.ngc.nvidia.com
Sorry, I tried this on the Jetson Nano. It turns out:
Defaulting to user installation because normal site-packages is not writeable
Looking in indexes: https://pypi.ngc.nvidia.com
ERROR: Could not find a version that satisfies the requirement nvidia-tensorrt==7.2.* (from versions: none)
ERROR: No matching distribution found for nvidia-tensorrt==7.2.*
I have no idea. Help me.
I am trying to use TensorRT 8.2.0.6 and got the same issue. I tried the modified repo mentioned above by @liuanhua110, but it did not work: same error.
So I am going to roll back to TensorRT 7.1.0.16 (which I know works, since the code I am running works on another machine with TensorRT 7 installed there).
Mentioning this here because I spent several hours installing TensorRT 8.2 only to find out about this compatibility issue.
Come on NVIDIA!
TensorRT API was updated in 8.0.1 so you need to use different commands now. As stated in their release notes "ICudaEngine.max_workspace_size" and "Builder.build_cuda_engine()" among other deprecated functions were removed. (see https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-8.html#rel_8-0-1)
The current usage that worked for me:
- to set max_workspace_size:
config = builder.create_builder_config()
config.max_workspace_size = 1 << 28
- and to build the engine:
plan = builder.build_serialized_network(network, config)
engine = runtime.deserialize_cuda_engine(plan)
--- a/python/app_ScatterND_plugin.py
+++ b/python/app_ScatterND_plugin.py
@@ -36,7 +36,8 @@ def build_engine(shape_data, shape_indices, shape_updates):
         exit()
     builder = trt.Builder(logger)
-    builder.max_workspace_size = 1 << 20
+    config = builder.create_builder_config()
+    config.max_workspace_size = 1 << 20
     network = builder.create_network(flags=1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
     tensor_data = network.add_input('data', trt.DataType.FLOAT, shape_data)
@@ -49,8 +50,11 @@ def build_engine(shape_data, shape_indices, shape_updates):
         ]))
     )
     network.mark_output(layer.get_output(0))
-    return builder.build_cuda_engine(network)
+    plan = builder.build_serialized_network(network, config)
+    with trt.Runtime(logger) as runtime:
+        engine = runtime.deserialize_cuda_engine(plan)
+        return engine
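Putting the pieces together, here is a minimal, self-contained sketch of the TensorRT 8 build flow (builder config, build_serialized_network, then deserializing with a Runtime). The single identity layer is only a placeholder so the example builds on its own; it is not what torch2trt generates internally, and it assumes TensorRT 8.x with a CUDA-capable GPU available:

```python
import tensorrt as trt  # assumes the TensorRT 8.x Python bindings

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))

# Placeholder network: one input routed through an identity layer,
# just so the build call has something to compile.
inp = network.add_input("data", trt.DataType.FLOAT, (1, 3, 224, 224))
identity = network.add_identity(inp)
network.mark_output(identity.get_output(0))

# TensorRT 8: the workspace size lives on the builder config, not on the builder.
config = builder.create_builder_config()
config.max_workspace_size = 1 << 28  # 256 MiB

# TensorRT 8: build a serialized plan, then deserialize it with a Runtime
# (build_cuda_engine() was removed in 8.0).
plan = builder.build_serialized_network(network, config)
with trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(plan)

print(engine is not None)
```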
Thanks, guys. Worked for me.
Have you solved the Jetson Nano install issue above? I've come across the same problem.
Same issue on 8.4.1.5, but for 'build_cuda_engine'.
For this issue, this solution worked fine for me:
builder = trt.Builder(TRT_LOGGER)
builder_config = builder.create_builder_config()
builder_config.max_workspace_size = 1 << 30
builder.max_batch_size = 1
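One caveat worth hedging (an observation about newer releases, not something confirmed in this thread): on TensorRT 8.4 and later, max_workspace_size on the builder config is itself marked deprecated, and the memory-pool API expresses the same limit. A minimal sketch, assuming TensorRT >= 8.4:

```python
import tensorrt as trt  # assumes TensorRT >= 8.4

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()

# Equivalent to config.max_workspace_size = 1 << 30, but via the
# newer memory-pool API that replaces the deprecated attribute.
config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)
```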
I have no idea. Help me.
Have you solved this issue eventually? I've been struggling with it as well.
Same issue on 8.4.1.5, but for 'build_cuda_engine'.
Have you solved this problem eventually? If so, could you share your solution?
I have solved the problem. Could you please send me the exact error?
You need to refer to the TensorRT documentation on the official Git page.
Hey, I also had this problem with TensorRT 10 and CUDA 12.1. I managed to fix it by uninstalling everything from CUDA and TensorRT and redownloading CUDA 11.8 and TensorRT 8.