TensorRT
Build problem on Jetson Nano with JetPack 4.6
❓ Question
Hello, I tried to compile Torch-TensorRT on the Jetson Nano and got the error below. Any suggestions would be appreciated. Thanks.
jetson@jetson-desktop:~/TensorRT$ bazel build //:libtorchtrt --platforms //toolchains:jetpack_4.6 --verbose_failures
INFO: Analyzed target //:libtorchtrt (10 packages loaded, 2870 targets configured).
INFO: Found 1 target...
ERROR: /home/jetson/TensorRT/core/lowering/BUILD:10:11: Compiling core/lowering/register_trt_placeholder_ops.cpp failed: (Exit 1): gcc failed: error executing command
(cd /home/jetson/.cache/bazel/bazel_jetson/8770c998fbff2b8d5ee14d56a02ce872/sandbox/linux-sandbox/66/execroot/Torch-TensorRT &&
exec env -
PATH=/home/jetson/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
PWD=/proc/self/cwd
/usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer '-std=c++0x' -MD -MF bazel-out/aarch64-fastbuild/bin/core/lowering/_objs/lowering/register_trt_placeholder_ops.pic.d '-frandom-seed=bazel-out/aarch64-fastbuild/bin/core/lowering/_objs/lowering/register_trt_placeholder_ops.pic.o' -fPIC -iquote . -iquote bazel-out/aarch64-fastbuild/bin -iquote external/tensorrt -iquote bazel-out/aarch64-fastbuild/bin/external/tensorrt -iquote external/cuda -iquote bazel-out/aarch64-fastbuild/bin/external/cuda -iquote external/cudnn -iquote bazel-out/aarch64-fastbuild/bin/external/cudnn -iquote external/libtorch -iquote bazel-out/aarch64-fastbuild/bin/external/libtorch -Ibazel-out/aarch64-fastbuild/bin/external/libtorch/virtual_includes/ATen -Ibazel-out/aarch64-fastbuild/bin/external/libtorch/virtual_includes/c10_cuda -Ibazel-out/aarch64-fastbuild/bin/external/libtorch/virtual_includes/c10 -isystem external/tensorrt/include/aarch64-linux-gnu -isystem bazel-out/aarch64-fastbuild/bin/external/tensorrt/include/aarch64-linux-gnu -isystem external/cuda/include -isystem bazel-out/aarch64-fastbuild/bin/external/cuda/include -isystem external/cudnn/include -isystem bazel-out/aarch64-fastbuild/bin/external/cudnn/include -isystem external/libtorch/include -isystem bazel-out/aarch64-fastbuild/bin/external/libtorch/include -isystem external/libtorch/include/torch/csrc/api/include -isystem bazel-out/aarch64-fastbuild/bin/external/libtorch/include/torch/csrc/api/include '-fdiagnostics-color=always' '-std=c++14' -fno-canonical-system-headers -Wno-builtin-macro-redefined '-D__DATE__="redacted"' '-D__TIMESTAMP__="redacted"' '-D__TIME__="redacted"' -c core/lowering/register_trt_placeholder_ops.cpp -o bazel-out/aarch64-fastbuild/bin/core/lowering/_objs/lowering/register_trt_placeholder_ops.pic.o)
Configuration: 308cf0c0559d698e898984ad86ba68902429f53ed3b621b21d0881d53f6d42af
Execution platform: @local_config_platform//:host
Use --sandbox_debug to see verbose messages from the sandbox
core/lowering/register_trt_placeholder_ops.cpp:16:34: error: invalid user-defined conversion from 'torch::jit::<lambda(torch::jit::Stack&)>' to 'torch::jit::OperationCreator {aka std::function<void(std::vector<c10::IValue>*)> (*)(const torch::jit::Node*)}' [-fpermissive]
aliasAnalysisFromSchema()),
^
core/lowering/register_trt_placeholder_ops.cpp:15:24: note: candidate is: torch::jit::<lambda(torch::jit::Stack&)>::operator void (*)(torch::jit::Stack&)() const
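For context on what the compiler is rejecting: the registration lambda in register_trt_placeholder_ops.cpp takes torch::jit::Stack&, but PyTorch 1.8 still defines torch::jit::Operation in terms of Stack*, so the lambda converts to neither Operation nor OperationCreator. Below is a minimal sketch of the two signatures (illustrative only, not the exact Torch-TensorRT source):

```cpp
// Illustrative sketch: registering the trt::const placeholder op against
// PyTorch 1.8's JIT operator API. Names below are for illustration only.
#include "torch/csrc/jit/runtime/custom_operator.h"

using torch::jit::Operator;
using torch::jit::RegisterOperators;
using torch::jit::Stack;

static RegisterOperators placeholder_ops_reg({
    Operator(
        "trt::const(Tensor val) -> Tensor",
        // Current Torch-TensorRT passes a reference-taking lambda here:
        //   [](Stack& stack) { /* noop */ }
        // PyTorch 1.8 defines Operation as std::function<void(Stack*)>, so only the
        // pointer-taking form below compiles against that release (as in trtorch v0.3.0):
        [](Stack* stack) { /* noop: the op is only a lowering placeholder */ },
        c10::AliasAnalysisKind::FROM_SCHEMA),
});
```

That is why pairing the current source tree with libtorch 1.8.0 fails at exactly this call, while code from the 1.8-era release builds.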
Environment
Build information about Torch-TensorRT can be found by turning on debug messages
- PyTorch v1.8.0
- Jetson Nano
- How you installed PyTorch: pip
Additional context
The issue is that PyTorch 1.8.0 is too old for the latest Torch-TensorRT. If you can, moving to PyTorch 1.11.0 is the easiest fix; if you need to stay on 1.8.x, building the older trtorch v0.3.0 release instead will work.
Reopen if you need more help