❓ [Question] How do you build Torch-TensorRT from origin/main with a dependency on TensorRT 8.5.2 from JetPack 5.1?
❓ Question
When compiling the latest Torch-TensorRT from origin/main (2.2.0.dev0+76de80d0) on JetPack 5.1 against the latest locally compiled PyTorch (2.2.0a0+a683bc5), built so that I can use the latest v2 transforms in TorchVision (0.17.0a0+4cb3d80), the resulting Python package depends on tensorrt version 8.6.1. JetPack 5.1 only supports version 8.5.2.2-1+cuda11.4, so the package cannot be installed.
Is it possible to compile the latest Torch-TensorRT against the installed version of tensorrt?
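For reference, one way to see the conflict directly is to compare what JetPack provides with what the freshly built wheel declares; the wheel filename below is a placeholder, as in the install commands further down:
python3 -c 'import tensorrt; print(tensorrt.__version__)'  # 8.5.2.2 from JetPack 5.1
dpkg -l | grep -E 'nvinfer|tensorrt'                       # 8.5.2.2-1+cuda11.4 packages
unzip -p dist/torch_tensorrt-<version>.whl '*.dist-info/METADATA' | grep -i '^Requires-Dist: tensorrt'  # shows the tensorrt pin that blocks installation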
Environment
Environment details
br@nx:~/github/torch$ python /tmp/collect_env.py
Collecting environment information...
PyTorch version: 2.2.0a0+a683bc5
Is debug build: False
CUDA used to build PyTorch: 11.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (aarch64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.27.5
Libc version: glibc-2.31
Python version: 3.8.10 (default, May 26 2023, 14:05:08) [GCC 9.4.0] (64-bit runtime)
Python platform: Linux-5.10.104-tegra-aarch64-with-glibc2.29
Is CUDA available: True
CUDA runtime version: 11.4.315
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: Could not collect
Nvidia driver version: Could not collect
cuDNN version: Probably one of the following:
/usr/lib/aarch64-linux-gnu/libcudnn.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_adv_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_cnn_train.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_infer.so.8.6.0
/usr/lib/aarch64-linux-gnu/libcudnn_ops_train.so.8.6.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: aarch64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 6
On-line CPU(s) list: 0-3
Off-line CPU(s) list: 4,5
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 2
Vendor ID: Nvidia
Model: 0
Model name: ARMv8 Processor rev 0 (v8l)
Stepping: 0x0
CPU max MHz: 1907,2000
CPU min MHz: 115,2000
BogoMIPS: 62.50
L1d cache: 256 KiB
L1i cache: 512 KiB
L2 cache: 4 MiB
L3 cache: 4 MiB
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Spec store bypass: Not affected
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Branch predictor hardening
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm dcpop
Versions of relevant libraries:
[pip3] mypy==1.5.1
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.24.4
[pip3] numpy-quaternion==2022.4.3
[pip3] pytorch-ranger==0.1.1
[pip3] tensorrt==8.5.2.2
[pip3] torch==2.2.0a0+a683bc5
[pip3] torch-optimizer==0.3.0
[pip3] torchmetrics==0.11.3
[pip3] torchvision==0.17.0a0+4cb3d80
[conda] Could not collect
Torch and TorchVision were built with:
export BUILD_TEST=OFF
export USE_FBGEMM=OFF # Fails to build
export USE_NCCL=OFF # Fails to build
export USE_KINETO=OFF # Fails to build
export BUILD_SPLIT_CUDA=ON # Required so that Torch-TensorRT finds the libraries it needs.
export _GLIBCXX_USE_CXX11_ABI=1 # Use the new C++ ABI
cd ~/github/torch/pytorch
python3 -m build -n
pip install dist/torch-<version>.whl
cd ~/github/torch/vision
python3 setup.py bdist_wheel # Doesn't support the newer build module.
pip install dist/torchvision-<version>.whl
mkdir -p build; cd build
Torch_DIR=~/github/torch/pytorch/torch/share/cmake/Torch cmake -DCMAKE_BUILD_TYPE=Release -Wno-dev -DWITH_CUDA=on -GNinja -DCMAKE_INSTALL_PREFIX=~/.local ..
ninja install
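A quick sanity check (not part of the original steps) that the locally built wheels picked up CUDA and the CXX11 ABI that Torch-TensorRT expects:
python3 -c 'import torch; print(torch.__version__, torch.version.cuda, torch.compiled_with_cxx11_abi(), torch.cuda.is_available())'
python3 -c 'import torchvision; print(torchvision.__version__)'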
Torch-TensorRT was built on JetPack 5.1 using a modified WORKSPACE file, with:
cd ~/github/torch/Torch-TensorRT
bazel build //:libtorchtrt -c opt
sudo tar -xvzf bazel-bin/libtorchtrt.tar.gz -C /usr/local/
python3 setup.py bdist_wheel --use-cxx11-abi # Doesn't support the newer build module.
pip install dist/torch_tensorrt-<version>.whl # <-- fails to install due to tensorrt==8.6 dependency
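One stop-gap, not what this thread ultimately settles on, is to install the wheel without dependency resolution and rely on the TensorRT runtime that JetPack already ships:
pip install dist/torch_tensorrt-<version>.whl --no-deps  # skip pip's tensorrt pin entirely
pip check  # will still flag the unsatisfied tensorrt requirement; everything else should be consistent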
Does editing https://github.com/pytorch/TensorRT/blob/main/pyproject.toml help?
Yes, I've managed to build and install it now and it appears to be working correctly. I changed the required version strings in a few files, so I'm not entirely sure which changes are actually required:
br@nx:~/github/torch/Torch-TensorRT$ git diff -w -U0 . ':!WORKSPACE'
diff --git a/dev_dep_versions.yml b/dev_dep_versions.yml
index 874a27cb..2c976c40 100644
--- a/dev_dep_versions.yml
+++ b/dev_dep_versions.yml
@@ -4 +4 @@ __cudnn_version__: "8.8"
-__tensorrt_version__: "8.6"
+__tensorrt_version__: "8.5"
diff --git a/py/requirements.txt b/py/requirements.txt
index f690e1a0..64178dd2 100644
--- a/py/requirements.txt
+++ b/py/requirements.txt
@@ -8 +8 @@ torchvision>=0.17.0.dev,<0.18.0
-tensorrt==8.6.1
+tensorrt>=8.5
diff --git a/pyproject.toml b/pyproject.toml
index 394dc6a6..383ffb66 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -11 +11 @@ requires = [
- "tensorrt>=8.6,<8.7",
+ "tensorrt>=8.5,<8.7",
@@ -47 +47 @@ dependencies = [
- "tensorrt>=8.6,<8.7",
+ "tensorrt>=8.5,<8.7",
diff --git a/toolchains/legacy/pyproject.toml b/toolchains/legacy/pyproject.toml
index ce9e6423..1bf93a8c 100644
--- a/toolchains/legacy/pyproject.toml
+++ b/toolchains/legacy/pyproject.toml
@@ -11 +11 @@ requires = [
- "tensorrt>=8.6,<8.7",
+ "tensorrt>=8.5,<8.7",
@@ -45 +45 @@ dependencies = [
- "tensorrt>=8.6,<8.7",
+ "tensorrt>=8.5,<8.7",
br@nx:~/autosensor$ python3 -c 'import torch, torchvision, tensorrt, torch_tensorrt; print(f"torch: {torch.__version__}, torchvision: {torchvision.__version__}, tensorrt: {tensorrt.__version__}, Torch-TensorRT: {torch_tensorrt.__version__}")'
torch: 2.2.0a0, torchvision: 0.17.0a0+4cb3d80, tensorrt: 8.5.2.2, Torch-TensorRT: 2.2.0.dev0+76de80d0
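As a final smoke test (not in the original thread), a minimal compile of an arbitrary module through the TorchScript frontend; the model, shapes, and settings are illustrative only:
python3 - <<'PY'
import torch
import torch_tensorrt

# Tiny throwaway model; eval mode and CUDA placement are needed for TRT compilation
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU()).eval().cuda()
inputs = [torch_tensorrt.Input((1, 3, 224, 224), dtype=torch.float32)]

# ir="ts" selects the TorchScript path; FP32 only, for simplicity
trt_mod = torch_tensorrt.compile(model, ir="ts", inputs=inputs, enabled_precisions={torch.float32})
print(trt_mod(torch.randn(1, 3, 224, 224, device="cuda")).shape)  # expect torch.Size([1, 8, 222, 222])
PY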