FastDeploy

GPU compilation error

Open monkeycc opened this issue 7 months ago • 5 comments


Friendly tip: according to informal community statistics, asking questions by following the issue template helps speed up replies and issue resolution.


Environment

  • [FastDeploy version]: develop 20250422
  • [Build commands]
set ENABLE_ORT_BACKEND=OFF
set ENABLE_PADDLE_BACKEND=OFF
set ENABLE_OPENVINO_BACKEND=OFF
set ENABLE_VISION=ON
set ENABLE_TEXT=ON
set ENABLE_TRT_BACKEND=Off
set WITH_GPU=ON
set CUDA_DIRECTORY=C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8
set OPENCV_DIRECTORY=D:\AI\opencv\build\x64\vc16\lib
python setup.py build
  • [System platform]: Windows x64 (Windows 11)
  • [Hardware]: NVIDIA GPU 5090D, CUDA 12.8, cuDNN 9.8
  • [Build language]: Anaconda Python 3.10

Problem log and steps to reproduce

I:\AI\FastDeploy-develop\python\setup.py:121: DeprecationWarning: Use shutil.which instead of find_executable
  CMAKE = find_executable('cmake3') or find_executable('cmake')
I:\AI\FastDeploy-develop\python\setup.py:122: DeprecationWarning: Use shutil.which instead of find_executable
  MAKE = find_executable('make')
running build
running build_py
running create_version
running cmake_build
-- Building for: Visual Studio 17 2022
CMake Warning (dev) at CMakeLists.txt:15 (PROJECT):
  cmake_minimum_required() should be called prior to this top-level project()
  call.  Please see the cmake-commands(7) manual for usage documentation of
  both commands.
This warning is for project developers.  Use -Wno-dev to suppress it.

-- Selecting Windows SDK version 10.0.22621.0 to target Windows 10.0.26100.
-- The C compiler identification is MSVC 19.43.34809.0
-- The CXX compiler identification is MSVC 19.43.34809.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: D:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.43.34808/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: D:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.43.34808/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- The CUDA compiler identification is NVIDIA 12.8.93 with host compiler MSVC 19.43.34809.0
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.8/bin/nvcc.exe - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- CUDA compiler: C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.8/bin/nvcc.exe, version: NVIDIA 12.8.93
Using New Release Strategy - All Arches Packge
-- CUDA detected: 12.8.93
-- NVCC_FLAGS_EXTRA:  -gencode arch=compute_35,code=sm_35 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_86,code=sm_86
CMake Deprecation Warning at third_party/yaml-cpp/CMakeLists.txt:2 (cmake_minimum_required):
  Compatibility with CMake < 3.10 will be removed from a future version of
  CMake.

  Update the VERSION argument <min> value.  Or, use the <min>...<max> syntax
  to tell CMake that the project requires at least <min> but has been updated
  to work with policies introduced by <max> or earlier.


-- Use the opencv lib specified by user. The OpenCV path: D:/AI/opencv/build/x64/vc16/lib
-- Found OpenCV: D:/AI/opencv/build (found version "4.11.0")
FASTTOKENIZER_COMPILE_LIB = I:/AI/FastDeploy-develop/python/.setuptools-cmake-build/third_libs/install/fast_tokenizer/lib/core_tokenizers.lib
CMake Warning (dev) at D:/Program Files/CMake/share/cmake-4.0/Modules/ExternalProject/shared_internal_commands.cmake:1276 (message):
  The DOWNLOAD_EXTRACT_TIMESTAMP option was not given and policy CMP0135 is
  not set.  The policy's OLD behavior will be used.  When using a URL
  download, the timestamps of extracted files should preferably be that of
  the time of extraction, otherwise code that depends on the extracted
  contents might not be rebuilt if the URL changes.  The OLD behavior
  preserves the timestamps from the archive instead, but this is usually not
  what you want.  Update your project to the NEW behavior or specify the
  DOWNLOAD_EXTRACT_TIMESTAMP option with a value of true to avoid this
  robustness issue.
Call Stack (most recent call first):
  D:/Program Files/CMake/share/cmake-4.0/Modules/ExternalProject.cmake:3076 (_ep_add_download_command)
  cmake/fast_tokenizer.cmake:110 (ExternalProject_Add)
  CMakeLists.txt:459 (include)
This warning is for project developers.  Use -Wno-dev to suppress it.

--
-- *************FastDeploy Building Summary**********
--   CMake version             : 4.0.0
--   CMake command             : D:/Program Files/CMake/bin/cmake.exe
--   System                    : Windows
--   C++ compiler              : D:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.43.34808/bin/Hostx64/x64/cl.exe
--   C++ standard              :
--   C++ cuda standard         : 11
--   C++ compiler version      : 19.43.34809.0
--   CXX flags                 : /DWIN32 /D_WINDOWS /W3 /GR /EHsc
--   EXE linker flags          : /machine:x64
--   Shared linker flags       : /machine:x64
--   Build type                : Release
--   Compile definitions       : YAML_CPP_DLL;FASTDEPLOY_LIB;CMAKE_BUILD_TYPE=Release;EIGEN_STRONG_INLINE=inline;WITH_GPU;ENABLE_NVJPEG;ENABLE_VISION;ENABLE_TEXT
--   CMAKE_PREFIX_PATH         :
--   CMAKE_INSTALL_PREFIX      : C:/Program Files/fastdeploy
--   CMAKE_MODULE_PATH         :
--
--   FastDeploy version        : 0.0.0
--   ENABLE_ORT_BACKEND        : OFF
--   ENABLE_RKNPU2_BACKEND     : OFF
--   ENABLE_HORIZON_BACKEND    : OFF
--   ENABLE_SOPHGO_BACKEND     : OFF
--   ENABLE_PADDLE_BACKEND     : OFF
--   ENABLE_LITE_BACKEND       : OFF
--   ENABLE_POROS_BACKEND      : OFF
--   ENABLE_TRT_BACKEND        : Off
--   ENABLE_OPENVINO_BACKEND   : OFF
--   ENABLE_TVM_BACKEND        : OFF
--   ENABLE_BENCHMARK          : OFF
--   ENABLE_VISION             : ON
--   ENABLE_TEXT               : ON
--   ENABLE_ENCRYPTION         : OFF
--   ENABLE_FLYCV              : OFF
--   ENABLE_CVCUDA             : OFF
--   WITH_GPU                  : ON
--   WITH_IPU                  : OFF
--   WITH_OPENCL               : OFF
--   WITH_TESTING              : OFF
--   WITH_ASCEND               : OFF
--   WITH_DIRECTML             : OFF
--   WITH_TIMVX                : OFF
--   WITH_KUNLUNXIN            : OFF
--   WITH_CAPI                 : OFF
--   WITH_CSHARPAPI            : OFF
--   CUDA_DIRECTORY            : C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v12.8
--   TRT_DRECTORY              : I:/AI/FastDeploy-develop/python/.setuptools-cmake-build/UNDEFINED
--   Python executable         : D:\anaconda3\envs\FastDeploy\python.exe
--   Python includes           : D:\anaconda3\envs\FastDeploy\include
CMake Warning (dev) at CMakeLists.txt:694 (find_package):
  Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules
  are removed.  Run "cmake --help-policy CMP0148" for policy details.  Use
  the cmake_policy command to set the policy and suppress this warning.

This warning is for project developers.  Use -Wno-dev to suppress it.

-- Found PythonInterp: D:/anaconda3/envs/FastDeploy/python.exe (found suitable version "3.10.16", minimum required is "3.10")
CMake Warning (dev) at CMakeLists.txt:695 (find_package):
  Policy CMP0148 is not set: The FindPythonInterp and FindPythonLibs modules
  are removed.  Run "cmake --help-policy CMP0148" for policy details.  Use
  the cmake_policy command to set the policy and suppress this warning.

This warning is for project developers.  Use -Wno-dev to suppress it.

-- Found PythonLibs: D:/anaconda3/envs/FastDeploy/libs/python310.lib (found suitable version "3.10.16", minimum required is "3.10")
-- Configuring done (8.2s)
-- Generating done (0.3s)
CMake Warning:
  Manually-specified variables were not used by the project:

    ONNX_NAMESPACE
    PADDLE2ONNX_URL
    PADDLEINFERENCE_API_COMPAT_2_4_x
    PADDLEINFERENCE_API_COMPAT_2_5_x
    PADDLEINFERENCE_API_COMPAT_DEV
    PADDLEINFERENCE_API_CUSTOM_OP
    PADDLEINFERENCE_DIRECTORY
    PADDLEINFERENCE_URL
    PADDLELITE_URL


-- Build files have been written to: I:/AI/FastDeploy-develop/python/.setuptools-cmake-build
MSBuild version 17.13.19+0d9f5a35a for .NET Framework

  Checking File Globs
  1>Checking Build System
  Creating directories for 'extern_fast_tokenizer'
  Building Custom Rule I:/AI/FastDeploy-develop/CMakeLists.txt
  Performing download step (download, verify and extract) for 'extern_fast_tokenizer'
  -- Downloading...
     dst='I:/AI/FastDeploy-develop/python/.setuptools-cmake-build/third_libs/fast_tokenizer/src/fast_tokenizer-win-x64-
  1.0.2.zip'
     timeout='none'
     inactivity timeout='none'
  -- Using src='https://bj.bcebos.com/paddlenlp/fast_tokenizer/fast_tokenizer-win-x64-1.0.2.zip'
  -- Downloading... done
  -- extracting...
       src='I:/AI/FastDeploy-develop/python/.setuptools-cmake-build/third_libs/fast_tokenizer/src/fast_tokenizer-win-x6
  4-1.0.2.zip'
       dst='I:/AI/FastDeploy-develop/python/.setuptools-cmake-build/third_libs/fast_tokenizer/src/extern_fast_tokenizer
  '
  -- extracting... [tar xfz]
  -- extracting... [analysis]
  -- extracting... [rename]
  -- extracting... [clean up]
  -- extracting... done
  No update step for 'extern_fast_tokenizer'
  No patch step for 'extern_fast_tokenizer'
  No configure step for 'extern_fast_tokenizer'
  No build step for 'extern_fast_tokenizer'
  Performing install step for 'extern_fast_tokenizer'
  Completed 'extern_fast_tokenizer'
  Building Custom Rule I:/AI/FastDeploy-develop/third_party/yaml-cpp/CMakeLists.txt
cl : command line  warning D9025: overriding '/W3' with '/w' [I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\third_party\yaml-cpp\yaml-cpp.vcxproj]
  graphbuilder.cpp
  Defining YAML_CPP_API for DLL export
  graphbuilderadapter.cpp
  Defining YAML_CPP_API for DLL export
  binary.cpp
  Defining YAML_CPP_API for DLL export
  convert.cpp
  Defining YAML_CPP_API for DLL export
  depthguard.cpp
  Defining YAML_CPP_API for DLL export
  directives.cpp
  emit.cpp
  Defining YAML_CPP_API for DLL export
  emitfromevents.cpp
  Defining YAML_CPP_API for DLL export
  emitter.cpp
  Defining YAML_CPP_API for DLL export
  emitterstate.cpp
  Defining YAML_CPP_API for DLL export
  emitterutils.cpp
  Defining YAML_CPP_API for DLL export
  exceptions.cpp
  Defining YAML_CPP_API for DLL export
  exp.cpp
  Defining YAML_CPP_API for DLL export
  memory.cpp
  Defining YAML_CPP_API for DLL export
  node.cpp
  Defining YAML_CPP_API for DLL export
  node_data.cpp
  Defining YAML_CPP_API for DLL export
  nodebuilder.cpp
  Defining YAML_CPP_API for DLL export
  nodeevents.cpp
  Defining YAML_CPP_API for DLL export
  null.cpp
  Defining YAML_CPP_API for DLL export
  ostream_wrapper.cpp
  Defining YAML_CPP_API for DLL export
  Generating Code...
  Compiling...
  parse.cpp
  Defining YAML_CPP_API for DLL export
  parser.cpp
  Defining YAML_CPP_API for DLL export
  regex_yaml.cpp
  Defining YAML_CPP_API for DLL export
  scanner.cpp
  Defining YAML_CPP_API for DLL export
  scanscalar.cpp
  Defining YAML_CPP_API for DLL export
  scantag.cpp
  Defining YAML_CPP_API for DLL export
  scantoken.cpp
  Defining YAML_CPP_API for DLL export
  simplekey.cpp
  Defining YAML_CPP_API for DLL export
  singledocparser.cpp
  Defining YAML_CPP_API for DLL export
  stream.cpp
  Defining YAML_CPP_API for DLL export
  tag.cpp
  Defining YAML_CPP_API for DLL export
  Generating Code...
    Creating library I:/AI/FastDeploy-develop/python/.setuptools-cmake-build/third_party/yaml-cpp/Release/yaml-cpp.lib and object I:/AI/FastDeploy-develop/python/.setuptools-cmake-build/third_party/yaml-cpp/Release/yaml-cpp.exp
  yaml-cpp.vcxproj -> I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\third_party\yaml-cpp\Release\yaml-cpp.dll
  Building Custom Rule I:/AI/FastDeploy-develop/CMakeLists.txt
  Compiling CUDA source file ..\..\fastdeploy\vision\utils\yolo_preprocess.cu...
  Compiling CUDA source file ..\..\fastdeploy\runtime\backends\common\cuda\adaptive_pool2d_kernel.cu...
  Compiling CUDA source file ..\..\fastdeploy\vision\common\processors\normalize.cu...
  Compiling CUDA source file ..\..\fastdeploy\vision\common\processors\normalize_and_permute.cu...
  Compiling CUDA source file ..\..\fastdeploy\function\cuda_cast.cu...

  (FastDeploy) I:\AI\FastDeploy-develop\python\.setuptools-cmake-build>"C:\Program Files\NVIDIA GPU Computing Toolkit\C
  UDA\v12.8\bin\nvcc.exe"  --use-local-env -ccbin "D:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSV
  C\14.43.34808\bin\HostX64\x64" -x cu   -I"I:\AI\FastDeploy-develop\." -I"I:\AI\FastDeploy-develop\python\.setuptools-
  cmake-build" -I"I:\AI\FastDeploy-develop\third_party\eigen" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12
  .8\include" -I"I:\AI\FastDeploy-develop\third_party\yaml-cpp\include" -I"I:\AI\FastDeploy-develop\python\.setuptools-
  cmake-build\third_libs\install\fast_tokenizer\include" -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\thi
  rd_libs\install\fast_tokenizer\third_party\include" -ID:\AI\opencv\build\include -I"C:\Program Files\NVIDIA GPU Compu
  ting Toolkit\CUDA\v12.8\include"     --keep-dir fastdeploy\x64\Release  -maxrregcount=0    --machine 64 --compile -cu
  dart static -gencode arch=compute_35,code=sm_35 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_
  52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -genco
  de arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_86,code=sm_86 -w --expt-relax
  ed-constexpr --expt-extended-lambda -Xcompiler="/EHsc /wd4244 /wd4267 /wd4819 /bigobj -Ob2 /wd4251"   -D_WINDOWS -DND
  EBUG -DYAML_CPP_DLL -DFASTDEPLOY_LIB -DCMAKE_BUILD_TYPE=Release -DEIGEN_STRONG_INLINE=inline -DWITH_GPU -DENABLE_NVJP
  EG -DENABLE_VISION -DENABLE_TEXT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_INTDIR=\"Release\"" -Dfastdeploy_EXPORTS -D"__REL
  _FILE__=\"fastdeploy/function/cuda_cast.cu\"" -D_WINDLL -D_MBCS -DWIN32 -D_WINDOWS -DNDEBUG -DYAML_CPP_DLL -DFASTDEPL
  OY_LIB -DCMAKE_BUILD_TYPE=Release -DEIGEN_STRONG_INLINE=inline -DWITH_GPU -DENABLE_NVJPEG -DENABLE_VISION -DENABLE_TE
  XT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_INTDIR=\"Release\"" -Dfastdeploy_EXPORTS -Xcompiler "/EHsc /W0 /nologo /O2 /FS
    /MD /GR" -Xcompiler "/Fdfastdeploy.dir\Release\vc143.pdb" -o fastdeploy.dir\Release\cuda_cast.obj "I:\AI\FastDeploy
  -develop\fastdeploy\function\cuda_cast.cu"

  (FastDeploy) I:\AI\FastDeploy-develop\python\.setuptools-cmake-build>"C:\Program Files\NVIDIA GPU Computing Toolkit\C
  UDA\v12.8\bin\nvcc.exe"  --use-local-env -ccbin "D:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSV
  C\14.43.34808\bin\HostX64\x64" -x cu   -I"I:\AI\FastDeploy-develop\." -I"I:\AI\FastDeploy-develop\python\.setuptools-
  cmake-build" -I"I:\AI\FastDeploy-develop\third_party\eigen" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12
  .8\include" -I"I:\AI\FastDeploy-develop\third_party\yaml-cpp\include" -I"I:\AI\FastDeploy-develop\python\.setuptools-
  cmake-build\third_libs\install\fast_tokenizer\include" -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\thi
  rd_libs\install\fast_tokenizer\third_party\include" -ID:\AI\opencv\build\include -I"C:\Program Files\NVIDIA GPU Compu
  ting Toolkit\CUDA\v12.8\include"     --keep-dir fastdeploy\x64\Release  -maxrregcount=0    --machine 64 --compile -cu
  dart static -gencode arch=compute_35,code=sm_35 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_
  52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -genco
  de arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_86,code=sm_86 -w --expt-relax
  ed-constexpr --expt-extended-lambda -Xcompiler="/EHsc /wd4244 /wd4267 /wd4819 /bigobj -Ob2 /wd4251"   -D_WINDOWS -DND
  EBUG -DYAML_CPP_DLL -DFASTDEPLOY_LIB -DCMAKE_BUILD_TYPE=Release -DEIGEN_STRONG_INLINE=inline -DWITH_GPU -DENABLE_NVJP
  EG -DENABLE_VISION -DENABLE_TEXT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_INTDIR=\"Release\"" -Dfastdeploy_EXPORTS -D"__REL
  _FILE__=\"fastdeploy/vision/utils/yolo_preprocess.cu\"" -D_WINDLL -D_MBCS -DWIN32 -D_WINDOWS -DNDEBUG -DYAML_CPP_DLL
  -DFASTDEPLOY_LIB -DCMAKE_BUILD_TYPE=Release -DEIGEN_STRONG_INLINE=inline -DWITH_GPU -DENABLE_NVJPEG -DENABLE_VISION -
  DENABLE_TEXT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_INTDIR=\"Release\"" -Dfastdeploy_EXPORTS -Xcompiler "/EHsc /W0 /nolog
  o /O2 /FS   /MD /GR" -Xcompiler "/Fdfastdeploy.dir\Release\vc143.pdb" -o fastdeploy.dir\Release\yolo_preprocess.obj "
  I:\AI\FastDeploy-develop\fastdeploy\vision\utils\yolo_preprocess.cu"
  nvcc fatal   : Unsupported gpu architecture 'compute_35'
  nvcc fatal   : Unsupported gpu architecture 'compute_35'

  (FastDeploy) I:\AI\FastDeploy-develop\python\.setuptools-cmake-build>"C:\Program Files\NVIDIA GPU Computing Toolkit\C
  UDA\v12.8\bin\nvcc.exe"  --use-local-env -ccbin "D:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSV
  C\14.43.34808\bin\HostX64\x64" -x cu   -I"I:\AI\FastDeploy-develop\." -I"I:\AI\FastDeploy-develop\python\.setuptools-
  cmake-build" -I"I:\AI\FastDeploy-develop\third_party\eigen" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12
  .8\include" -I"I:\AI\FastDeploy-develop\third_party\yaml-cpp\include" -I"I:\AI\FastDeploy-develop\python\.setuptools-
  cmake-build\third_libs\install\fast_tokenizer\include" -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\thi
  rd_libs\install\fast_tokenizer\third_party\include" -ID:\AI\opencv\build\include -I"C:\Program Files\NVIDIA GPU Compu
  ting Toolkit\CUDA\v12.8\include"     --keep-dir fastdeploy\x64\Release  -maxrregcount=0    --machine 64 --compile -cu
  dart static -gencode arch=compute_35,code=sm_35 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_
  52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -genco
  de arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_86,code=sm_86 -w --expt-relax
  ed-constexpr --expt-extended-lambda -Xcompiler="/EHsc /wd4244 /wd4267 /wd4819 /bigobj -Ob2 /wd4251"   -D_WINDOWS -DND
  EBUG -DYAML_CPP_DLL -DFASTDEPLOY_LIB -DCMAKE_BUILD_TYPE=Release -DEIGEN_STRONG_INLINE=inline -DWITH_GPU -DENABLE_NVJP
  EG -DENABLE_VISION -DENABLE_TEXT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_INTDIR=\"Release\"" -Dfastdeploy_EXPORTS -D"__REL
  _FILE__=\"fastdeploy/vision/common/processors/normalize.cu\"" -D_WINDLL -D_MBCS -DWIN32 -D_WINDOWS -DNDEBUG -DYAML_CP
  P_DLL -DFASTDEPLOY_LIB -DCMAKE_BUILD_TYPE=Release -DEIGEN_STRONG_INLINE=inline -DWITH_GPU -DENABLE_NVJPEG -DENABLE_VI
  SION -DENABLE_TEXT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_INTDIR=\"Release\"" -Dfastdeploy_EXPORTS -Xcompiler "/EHsc /W0
  /nologo /O2 /FS   /MD /GR" -Xcompiler "/Fdfastdeploy.dir\Release\vc143.pdb" -o fastdeploy.dir\Release\/fastdeploy/vis
  ion/common/processors/normalize.cu.obj "I:\AI\FastDeploy-develop\fastdeploy\vision\common\processors\normalize.cu"

  (FastDeploy) I:\AI\FastDeploy-develop\python\.setuptools-cmake-build>"C:\Program Files\NVIDIA GPU Computing Toolkit\C
  UDA\v12.8\bin\nvcc.exe"  --use-local-env -ccbin "D:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSV
  C\14.43.34808\bin\HostX64\x64" -x cu   -I"I:\AI\FastDeploy-develop\." -I"I:\AI\FastDeploy-develop\python\.setuptools-
  cmake-build" -I"I:\AI\FastDeploy-develop\third_party\eigen" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12
  .8\include" -I"I:\AI\FastDeploy-develop\third_party\yaml-cpp\include" -I"I:\AI\FastDeploy-develop\python\.setuptools-
  cmake-build\third_libs\install\fast_tokenizer\include" -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\thi
  rd_libs\install\fast_tokenizer\third_party\include" -ID:\AI\opencv\build\include -I"C:\Program Files\NVIDIA GPU Compu
  ting Toolkit\CUDA\v12.8\include"     --keep-dir fastdeploy\x64\Release  -maxrregcount=0    --machine 64 --compile -cu
  dart static -gencode arch=compute_35,code=sm_35 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_
  52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -genco
  de arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_86,code=sm_86 -w --expt-relax
  ed-constexpr --expt-extended-lambda -Xcompiler="/EHsc /wd4244 /wd4267 /wd4819 /bigobj -Ob2 /wd4251"   -D_WINDOWS -DND
  EBUG -DYAML_CPP_DLL -DFASTDEPLOY_LIB -DCMAKE_BUILD_TYPE=Release -DEIGEN_STRONG_INLINE=inline -DWITH_GPU -DENABLE_NVJP
  EG -DENABLE_VISION -DENABLE_TEXT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_INTDIR=\"Release\"" -Dfastdeploy_EXPORTS -D"__REL
  _FILE__=\"fastdeploy/vision/common/processors/normalize_and_permute.cu\"" -D_WINDLL -D_MBCS -DWIN32 -D_WINDOWS -DNDEB
  UG -DYAML_CPP_DLL -DFASTDEPLOY_LIB -DCMAKE_BUILD_TYPE=Release -DEIGEN_STRONG_INLINE=inline -DWITH_GPU -DENABLE_NVJPEG
   -DENABLE_VISION -DENABLE_TEXT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_INTDIR=\"Release\"" -Dfastdeploy_EXPORTS -Xcompiler
   "/EHsc /W0 /nologo /O2 /FS   /MD /GR" -Xcompiler "/Fdfastdeploy.dir\Release\vc143.pdb" -o fastdeploy.dir\Release\/fa
  stdeploy/vision/common/processors/normalize_and_permute.cu.obj "I:\AI\FastDeploy-develop\fastdeploy\vision\common\pro
  cessors\normalize_and_permute.cu"
  nvcc fatal   : Unsupported gpu architecture 'compute_35'
D:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Microsoft\VC\v170\BuildCustomizations\CUDA 12.8.targets
(800,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\nvcc.exe"  --use-local-env -cc
bin "D:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.43.34808\bin\HostX64\x64" -x cu   -I"I:\A
I\FastDeploy-develop\." -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build" -I"I:\AI\FastDeploy-develop\third_p
arty\eigen" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include" -I"I:\AI\FastDeploy-develop\third_part
y\yaml-cpp\include" -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\third_libs\install\fast_tokenizer\includ
e" -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\third_libs\install\fast_tokenizer\third_party\include" -I
D:\AI\opencv\build\include -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include"     --keep-dir fastdepl
oy\x64\Release  -maxrregcount=0    --machine 64 --compile -cudart static -gencode arch=compute_35,code=sm_35 -gencode a
rch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute
_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm
_80 -gencode arch=compute_86,code=sm_86 -w --expt-relaxed-constexpr --expt-extended-lambda -Xcompiler="/EHsc /wd4244 /w
d4267 /wd4819 /bigobj -Ob2 /wd4251"   -D_WINDOWS -DNDEBUG -DYAML_CPP_DLL -DFASTDEPLOY_LIB -DCMAKE_BUILD_TYPE=Release -D
EIGEN_STRONG_INLINE=inline -DWITH_GPU -DENABLE_NVJPEG -DENABLE_VISION -DENABLE_TEXT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_
INTDIR=\"Release\"" -Dfastdeploy_EXPORTS -D"__REL_FILE__=\"fastdeploy/vision/utils/yolo_preprocess.cu\"" -D_WINDLL -D_M
BCS -DWIN32 -D_WINDOWS -DNDEBUG -DYAML_CPP_DLL -DFASTDEPLOY_LIB -DCMAKE_BUILD_TYPE=Release -DEIGEN_STRONG_INLINE=inline
 -DWITH_GPU -DENABLE_NVJPEG -DENABLE_VISION -DENABLE_TEXT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_INTDIR=\"Release\"" -Dfast
deploy_EXPORTS -Xcompiler "/EHsc /W0 /nologo /O2 /FS   /MD /GR" -Xcompiler "/Fdfastdeploy.dir\Release\vc143.pdb" -o fas
tdeploy.dir\Release\yolo_preprocess.obj "I:\AI\FastDeploy-develop\fastdeploy\vision\utils\yolo_preprocess.cu"" exited with code 1. [I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\fastdeploy.vcxproj]
D:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Microsoft\VC\v170\BuildCustomizations\CUDA 12.8.targets
(800,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\nvcc.exe"  --use-local-env -cc
bin "D:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.43.34808\bin\HostX64\x64" -x cu   -I"I:\A
I\FastDeploy-develop\." -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build" -I"I:\AI\FastDeploy-develop\third_p
arty\eigen" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include" -I"I:\AI\FastDeploy-develop\third_part
y\yaml-cpp\include" -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\third_libs\install\fast_tokenizer\includ
e" -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\third_libs\install\fast_tokenizer\third_party\include" -I
D:\AI\opencv\build\include -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include"     --keep-dir fastdepl
oy\x64\Release  -maxrregcount=0    --machine 64 --compile -cudart static -gencode arch=compute_35,code=sm_35 -gencode a
rch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute
_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm
_80 -gencode arch=compute_86,code=sm_86 -w --expt-relaxed-constexpr --expt-extended-lambda -Xcompiler="/EHsc /wd4244 /w
d4267 /wd4819 /bigobj -Ob2 /wd4251"   -D_WINDOWS -DNDEBUG -DYAML_CPP_DLL -DFASTDEPLOY_LIB -DCMAKE_BUILD_TYPE=Release -D
EIGEN_STRONG_INLINE=inline -DWITH_GPU -DENABLE_NVJPEG -DENABLE_VISION -DENABLE_TEXT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_
INTDIR=\"Release\"" -Dfastdeploy_EXPORTS -D"__REL_FILE__=\"fastdeploy/function/cuda_cast.cu\"" -D_WINDLL -D_MBCS -DWIN3
2 -D_WINDOWS -DNDEBUG -DYAML_CPP_DLL -DFASTDEPLOY_LIB -DCMAKE_BUILD_TYPE=Release -DEIGEN_STRONG_INLINE=inline -DWITH_GP
U -DENABLE_NVJPEG -DENABLE_VISION -DENABLE_TEXT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_INTDIR=\"Release\"" -Dfastdeploy_EXP
ORTS -Xcompiler "/EHsc /W0 /nologo /O2 /FS   /MD /GR" -Xcompiler "/Fdfastdeploy.dir\Release\vc143.pdb" -o fastdeploy.di
r\Release\cuda_cast.obj "I:\AI\FastDeploy-develop\fastdeploy\function\cuda_cast.cu"" exited with code 1. [I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\fastdeploy.vcxproj]
  nvcc fatal   : Unsupported gpu architecture 'compute_35'
D:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Microsoft\VC\v170\BuildCustomizations\CUDA 12.8.targets
(800,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\nvcc.exe"  --use-local-env -cc
bin "D:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.43.34808\bin\HostX64\x64" -x cu   -I"I:\A
I\FastDeploy-develop\." -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build" -I"I:\AI\FastDeploy-develop\third_p
arty\eigen" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include" -I"I:\AI\FastDeploy-develop\third_part
y\yaml-cpp\include" -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\third_libs\install\fast_tokenizer\includ
e" -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\third_libs\install\fast_tokenizer\third_party\include" -I
D:\AI\opencv\build\include -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include"     --keep-dir fastdepl
oy\x64\Release  -maxrregcount=0    --machine 64 --compile -cudart static -gencode arch=compute_35,code=sm_35 -gencode a
rch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute
_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm
_80 -gencode arch=compute_86,code=sm_86 -w --expt-relaxed-constexpr --expt-extended-lambda -Xcompiler="/EHsc /wd4244 /w
d4267 /wd4819 /bigobj -Ob2 /wd4251"   -D_WINDOWS -DNDEBUG -DYAML_CPP_DLL -DFASTDEPLOY_LIB -DCMAKE_BUILD_TYPE=Release -D
EIGEN_STRONG_INLINE=inline -DWITH_GPU -DENABLE_NVJPEG -DENABLE_VISION -DENABLE_TEXT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_
INTDIR=\"Release\"" -Dfastdeploy_EXPORTS -D"__REL_FILE__=\"fastdeploy/vision/common/processors/normalize.cu\"" -D_WINDL
L -D_MBCS -DWIN32 -D_WINDOWS -DNDEBUG -DYAML_CPP_DLL -DFASTDEPLOY_LIB -DCMAKE_BUILD_TYPE=Release -DEIGEN_STRONG_INLINE=
inline -DWITH_GPU -DENABLE_NVJPEG -DENABLE_VISION -DENABLE_TEXT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_INTDIR=\"Release\""
-Dfastdeploy_EXPORTS -Xcompiler "/EHsc /W0 /nologo /O2 /FS   /MD /GR" -Xcompiler "/Fdfastdeploy.dir\Release\vc143.pdb"
-o fastdeploy.dir\Release\/fastdeploy/vision/common/processors/normalize.cu.obj "I:\AI\FastDeploy-develop\fastdeploy\vi
sion\common\processors\normalize.cu"" exited with code 1. [I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\fastdeploy.vcxproj]
D:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Microsoft\VC\v170\BuildCustomizations\CUDA 12.8.targets
(800,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\nvcc.exe"  --use-local-env -cc
bin "D:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.43.34808\bin\HostX64\x64" -x cu   -I"I:\A
I\FastDeploy-develop\." -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build" -I"I:\AI\FastDeploy-develop\third_p
arty\eigen" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include" -I"I:\AI\FastDeploy-develop\third_part
y\yaml-cpp\include" -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\third_libs\install\fast_tokenizer\includ
e" -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\third_libs\install\fast_tokenizer\third_party\include" -I
D:\AI\opencv\build\include -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include"     --keep-dir fastdepl
oy\x64\Release  -maxrregcount=0    --machine 64 --compile -cudart static -gencode arch=compute_35,code=sm_35 -gencode a
rch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute
_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm
_80 -gencode arch=compute_86,code=sm_86 -w --expt-relaxed-constexpr --expt-extended-lambda -Xcompiler="/EHsc /wd4244 /w
d4267 /wd4819 /bigobj -Ob2 /wd4251"   -D_WINDOWS -DNDEBUG -DYAML_CPP_DLL -DFASTDEPLOY_LIB -DCMAKE_BUILD_TYPE=Release -D
EIGEN_STRONG_INLINE=inline -DWITH_GPU -DENABLE_NVJPEG -DENABLE_VISION -DENABLE_TEXT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_
INTDIR=\"Release\"" -Dfastdeploy_EXPORTS -D"__REL_FILE__=\"fastdeploy/vision/common/processors/normalize_and_permute.cu
\"" -D_WINDLL -D_MBCS -DWIN32 -D_WINDOWS -DNDEBUG -DYAML_CPP_DLL -DFASTDEPLOY_LIB -DCMAKE_BUILD_TYPE=Release -DEIGEN_ST
RONG_INLINE=inline -DWITH_GPU -DENABLE_NVJPEG -DENABLE_VISION -DENABLE_TEXT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_INTDIR=\
"Release\"" -Dfastdeploy_EXPORTS -Xcompiler "/EHsc /W0 /nologo /O2 /FS   /MD /GR" -Xcompiler "/Fdfastdeploy.dir\Release
\vc143.pdb" -o fastdeploy.dir\Release\/fastdeploy/vision/common/processors/normalize_and_permute.cu.obj "I:\AI\FastDepl
oy-develop\fastdeploy\vision\common\processors\normalize_and_permute.cu"" exited with code 1. [I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\fastdeploy.vcxproj]

  (FastDeploy) I:\AI\FastDeploy-develop\python\.setuptools-cmake-build>"C:\Program Files\NVIDIA GPU Computing Toolkit\C
  UDA\v12.8\bin\nvcc.exe"  --use-local-env -ccbin "D:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSV
  C\14.43.34808\bin\HostX64\x64" -x cu   -I"I:\AI\FastDeploy-develop\." -I"I:\AI\FastDeploy-develop\python\.setuptools-
  cmake-build" -I"I:\AI\FastDeploy-develop\third_party\eigen" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12
  .8\include" -I"I:\AI\FastDeploy-develop\third_party\yaml-cpp\include" -I"I:\AI\FastDeploy-develop\python\.setuptools-
  cmake-build\third_libs\install\fast_tokenizer\include" -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\thi
  rd_libs\install\fast_tokenizer\third_party\include" -ID:\AI\opencv\build\include -I"C:\Program Files\NVIDIA GPU Compu
  ting Toolkit\CUDA\v12.8\include"     --keep-dir fastdeploy\x64\Release  -maxrregcount=0    --machine 64 --compile -cu
  dart static -gencode arch=compute_35,code=sm_35 -gencode arch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_
  52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -genco
  de arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm_80 -gencode arch=compute_86,code=sm_86 -w --expt-relax
  ed-constexpr --expt-extended-lambda -Xcompiler="/EHsc /wd4244 /wd4267 /wd4819 /bigobj -Ob2 /wd4251"   -D_WINDOWS -DND
  EBUG -DYAML_CPP_DLL -DFASTDEPLOY_LIB -DCMAKE_BUILD_TYPE=Release -DEIGEN_STRONG_INLINE=inline -DWITH_GPU -DENABLE_NVJP
  EG -DENABLE_VISION -DENABLE_TEXT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_INTDIR=\"Release\"" -Dfastdeploy_EXPORTS -D"__REL
  _FILE__=\"fastdeploy/runtime/backends/common/cuda/adaptive_pool2d_kernel.cu\"" -D_WINDLL -D_MBCS -DWIN32 -D_WINDOWS -
  DNDEBUG -DYAML_CPP_DLL -DFASTDEPLOY_LIB -DCMAKE_BUILD_TYPE=Release -DEIGEN_STRONG_INLINE=inline -DWITH_GPU -DENABLE_N
  VJPEG -DENABLE_VISION -DENABLE_TEXT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_INTDIR=\"Release\"" -Dfastdeploy_EXPORTS -Xcom
  piler "/EHsc /W0 /nologo /O2 /FS   /MD /GR" -Xcompiler "/Fdfastdeploy.dir\Release\vc143.pdb" -o fastdeploy.dir\Releas
  e\adaptive_pool2d_kernel.obj "I:\AI\FastDeploy-develop\fastdeploy\runtime\backends\common\cuda\adaptive_pool2d_kernel
  .cu"
  nvcc fatal   : Unsupported gpu architecture 'compute_35'
D:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Microsoft\VC\v170\BuildCustomizations\CUDA 12.8.targets
(800,9): error MSB3721: The command ""C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\bin\nvcc.exe"  --use-local-env -cc
bin "D:\Program Files\Microsoft Visual Studio\2022\Community\VC\Tools\MSVC\14.43.34808\bin\HostX64\x64" -x cu   -I"I:\A
I\FastDeploy-develop\." -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build" -I"I:\AI\FastDeploy-develop\third_p
arty\eigen" -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include" -I"I:\AI\FastDeploy-develop\third_part
y\yaml-cpp\include" -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\third_libs\install\fast_tokenizer\includ
e" -I"I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\third_libs\install\fast_tokenizer\third_party\include" -I
D:\AI\opencv\build\include -I"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.8\include"     --keep-dir fastdepl
oy\x64\Release  -maxrregcount=0    --machine 64 --compile -cudart static -gencode arch=compute_35,code=sm_35 -gencode a
rch=compute_50,code=sm_50 -gencode arch=compute_52,code=sm_52 -gencode arch=compute_60,code=sm_60 -gencode arch=compute
_61,code=sm_61 -gencode arch=compute_70,code=sm_70 -gencode arch=compute_75,code=sm_75 -gencode arch=compute_80,code=sm
_80 -gencode arch=compute_86,code=sm_86 -w --expt-relaxed-constexpr --expt-extended-lambda -Xcompiler="/EHsc /wd4244 /w
d4267 /wd4819 /bigobj -Ob2 /wd4251"   -D_WINDOWS -DNDEBUG -DYAML_CPP_DLL -DFASTDEPLOY_LIB -DCMAKE_BUILD_TYPE=Release -D
EIGEN_STRONG_INLINE=inline -DWITH_GPU -DENABLE_NVJPEG -DENABLE_VISION -DENABLE_TEXT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_
INTDIR=\"Release\"" -Dfastdeploy_EXPORTS -D"__REL_FILE__=\"fastdeploy/runtime/backends/common/cuda/adaptive_pool2d_kern
el.cu\"" -D_WINDLL -D_MBCS -DWIN32 -D_WINDOWS -DNDEBUG -DYAML_CPP_DLL -DFASTDEPLOY_LIB -DCMAKE_BUILD_TYPE=Release -DEIG
EN_STRONG_INLINE=inline -DWITH_GPU -DENABLE_NVJPEG -DENABLE_VISION -DENABLE_TEXT -DBUILD_FASTDEPLOY_PYTHON -D"CMAKE_INT
DIR=\"Release\"" -Dfastdeploy_EXPORTS -Xcompiler "/EHsc /W0 /nologo /O2 /FS   /MD /GR" -Xcompiler "/Fdfastdeploy.dir\Re
lease\vc143.pdb" -o fastdeploy.dir\Release\adaptive_pool2d_kernel.obj "I:\AI\FastDeploy-develop\fastdeploy\runtime\back
ends\common\cuda\adaptive_pool2d_kernel.cu"" exited with code 1. [I:\AI\FastDeploy-develop\python\.setuptools-cmake-build\fastdeploy.vcxproj]
Traceback (most recent call last):
  File "I:\AI\FastDeploy-develop\python\setup.py", line 445, in <module>
    setuptools.setup(
  File "D:\anaconda3\envs\FastDeploy\lib\site-packages\setuptools\__init__.py", line 117, in setup
    return distutils.core.setup(**attrs)
  File "D:\anaconda3\envs\FastDeploy\lib\site-packages\setuptools\_distutils\core.py", line 186, in setup
    return run_commands(dist)
  File "D:\anaconda3\envs\FastDeploy\lib\site-packages\setuptools\_distutils\core.py", line 202, in run_commands
    dist.run_commands()
  File "D:\anaconda3\envs\FastDeploy\lib\site-packages\setuptools\_distutils\dist.py", line 983, in run_commands
    self.run_command(cmd)
  File "D:\anaconda3\envs\FastDeploy\lib\site-packages\setuptools\dist.py", line 999, in run_command
    super().run_command(command)
  File "D:\anaconda3\envs\FastDeploy\lib\site-packages\setuptools\_distutils\dist.py", line 1002, in run_command
    cmd_obj.run()
  File "D:\anaconda3\envs\FastDeploy\lib\site-packages\setuptools\_distutils\command\build.py", line 136, in run
    self.run_command(cmd_name)
  File "D:\anaconda3\envs\FastDeploy\lib\site-packages\setuptools\_distutils\cmd.py", line 339, in run_command
    self.distribution.run_command(command)
  File "D:\anaconda3\envs\FastDeploy\lib\site-packages\setuptools\dist.py", line 999, in run_command
    super().run_command(command)
  File "D:\anaconda3\envs\FastDeploy\lib\site-packages\setuptools\_distutils\dist.py", line 1002, in run_command
    cmd_obj.run()
  File "I:\AI\FastDeploy-develop\python\setup.py", line 308, in run
    self.run_command('cmake_build')
  File "D:\anaconda3\envs\FastDeploy\lib\site-packages\setuptools\_distutils\cmd.py", line 339, in run_command
    self.distribution.run_command(command)
  File "D:\anaconda3\envs\FastDeploy\lib\site-packages\setuptools\dist.py", line 999, in run_command
    super().run_command(command)
  File "D:\anaconda3\envs\FastDeploy\lib\site-packages\setuptools\_distutils\dist.py", line 1002, in run_command
    cmd_obj.run()
  File "I:\AI\FastDeploy-develop\python\setup.py", line 302, in run
    subprocess.check_call(build_args)
  File "D:\anaconda3\envs\FastDeploy\lib\subprocess.py", line 369, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['D:\\Program Files\\CMake\\bin\\cmake.exe', '--build', '.', '--config', 'Release', '--', '/maxcpucount:24']' returned non-zero exit status 1.

monkeycc · Apr 22, 2025

Change this part (you can also paste it into an AI assistant and ask). Your card is too new: CUDA 12.8 has dropped support for some compute capability 3.5 targets. A sketch of the change follows the screenshot below.

[Image: screenshot of the CUDA arch/gencode list to modify]
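For reference, a minimal sketch of what the trimmed architecture list could look like. The variable name NVCC_FLAGS_EXTRA is taken from the build log above; the exact .cmake file that assembles it may differ per branch, and sm_89/sm_90/sm_120 are additions on the assumption that the wheel should also run natively on Ada/Hopper/Blackwell cards (sm_120 requires CUDA 12.8 or newer), so treat this as a sketch rather than a drop-in patch:

# compute_35 (Kepler) was removed in CUDA 12.x and is what nvcc rejects above;
# sm_50/sm_52 are only deprecated and may be kept or dropped.
set(NVCC_FLAGS_EXTRA "")
foreach(arch 50 52 60 61 70 75 80 86 89 90 120)
  string(APPEND NVCC_FLAGS_EXTRA " -gencode arch=compute_${arch},code=sm_${arch}")
endforeach()
# Optionally also embed PTX for the newest arch so future GPUs can JIT-compile it:
string(APPEND NVCC_FLAGS_EXTRA " -gencode arch=compute_120,code=compute_120")
message(STATUS "NVCC_FLAGS_EXTRA: ${NVCC_FLAGS_EXTRA}")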

ChaoII · Apr 25, 2025

A similar solution is discussed here: https://github.com/PaddlePaddle/FastDeploy/issues/2024

askxiaozhang · May 7, 2025

When will the maintainers update the release?

monkeycc · May 14, 2025

It would help if the maintainers published an updated CUDA 12 serving image.

impl1874 · Jun 7, 2025

My machine has a 5070 GPU with driver 576.52 and CUDA 12.9. On Windows 11 I am using WSL2 + Docker to try to build the serving image, following this guide: Serving Deployment Compilation.

When I got to the command below, I found that the TensorRT version FastDeploy depends on tops out at 8.5.2.2, while the earliest TensorRT release that supports 50-series GPUs is 10.8.0. That means either a new branch is needed to adapt to TensorRT 10.8.0, or NVIDIA would have to release an 8.x TensorRT that supports 50-series cards, which seems unlikely.

I would like to know whether there is a plan to support the newer TensorRT API, and is there anything I can do right now?

cmake .. -DENABLE_TRT_BACKEND=ON -DCMAKE_INSTALL_PREFIX=${PWD}/fastdeploy_install -DWITH_GPU=ON -DTRT_DIRECTORY=/workspace/fastdeploy/serving/TensorRT-${trt_version}/ -DENABLE_PADDLE_BACKEND=ON -DENABLE_ORT_BACKEND=ON -DENABLE_OPENVINO_BACKEND=ON -DENABLE_VISION=OFF -DBUILD_FASTDEPLOY_PYTHON=OFF -DENABLE_PADDLE2ONNX=ON -DENABLE_TEXT=OFF -DLIBRARY_NAME=fastdeploy_runtime;     
build.sh

The main changes were the version numbers of the various components; in addition, I pinned the C++ standard to 17 in setup.py:

setup_configs["CMAKE_CXX_STANDARD"] = os.getenv("CMAKE_CXX_STANDARD", "17")

build.sh

#!/usr/bin/env bash
# Copyright (c) 2022 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

ARGS=`getopt -a -o w:n:h:hs:tv -l WITH_GPU:,docker_name:,http_proxy:,https_proxy:,trt_version: -- "$@"`

eval set -- "${ARGS}"
echo "parse start"

while true
do
        case "$1" in
        -w|--WITH_GPU)
                WITH_GPU="$2"
                shift;;
        -n|--docker_name)
                docker_name="$2"
                shift;;
        -h|--http_proxy)
                http_proxy="$2"
                shift;;
        -hs|--https_proxy)
                https_proxy="$2"
                shift;;
        -tv|--trt_version)
                trt_version="$2"
                shift;;
        --)
                shift
                break;;
        esac
shift
done

if [ -z $WITH_GPU ];then
    WITH_GPU="ON"
fi

if [ -z $docker_name ];then
    docker_name="build_fd"
fi

if [ $WITH_GPU == "ON" ]; then

if [ -z $trt_version ]; then
    trt_version="10.11.0.33"
    trt_version_group="10.11.0"
fi

if [ $trt_version == "10.11.0.33" ]
then
    cuda_version="12.9"
    cudnn_version="9.10.2"
else
    cuda_version="11.6"
    cudnn_version="8.4"
fi

echo "start build FD GPU library"

if [ ! -d "./cmake-3.18.6-Linux-x86_64/" ]; then
    wget https://github.com/Kitware/CMake/releases/download/v3.18.6/cmake-3.18.6-Linux-x86_64.tar.gz
    tar -zxvf cmake-3.18.6-Linux-x86_64.tar.gz
    rm -rf cmake-3.18.6-Linux-x86_64.tar.gz
fi

if [ ! -d "./TensorRT-${trt_version}/" ]; then
    wget https://developer.nvidia.com/downloads/compute/machine-learning/tensorrt/${trt_version_group}/tars/TensorRT-${trt_version}.Linux.x86_64-gnu.cuda-${cuda_version}.tar.gz
    tar -zxvf TensorRT-${trt_version}.Linux.x86_64-gnu.cuda-${cuda_version}.tar.gz
    rm -rf TensorRT-${trt_version}.Linux.x86_64-gnu.cuda-${cuda_version}.tar.gz
fi

docker run -i --rm --name ${docker_name} \
           -v`pwd`/..:/workspace/fastdeploy \
           -e "http_proxy=${http_proxy}" \
           -e "https_proxy=${https_proxy}" \
           -e "trt_version=${trt_version}"\
           nvidia/cuda:12.9.0-cudnn-devel-ubuntu20.04  \
           bash -c \
           'export https_proxy_tmp=${https_proxy}
            export http_proxy_tmp=${http_proxy}
            cd /workspace/fastdeploy/python;
            rm -rf .setuptools-cmake-build dist build fastdeploy/libs/third_libs;
            apt-get update;
            apt-get install -y --no-install-recommends patchelf python3-dev python3-pip rapidjson-dev git;
            unset http_proxy
            unset https_proxy
            ln -s /usr/bin/python3 /usr/bin/python;
            export PATH=/workspace/fastdeploy/serving/cmake-3.18.6-Linux-x86_64/bin:$PATH;
            export WITH_GPU=ON;
            export ENABLE_TRT_BACKEND=OFF;
            export TRT_DIRECTORY=/workspace/fastdeploy/serving/TensorRT-${trt_version}/;
            export ENABLE_ORT_BACKEND=OFF;
            export ENABLE_PADDLE_BACKEND=OFF;
            export ENABLE_OPENVINO_BACKEND=OFF;
            export ENABLE_VISION=ON;
            export ENABLE_TEXT=ON;
            python setup.py build;
            python setup.py bdist_wheel;
            cd /workspace/fastdeploy;
            rm -rf build; mkdir -p build;cd build;
            cmake .. -DENABLE_TRT_BACKEND=ON -DCMAKE_INSTALL_PREFIX=${PWD}/fastdeploy_install -DWITH_GPU=ON -DTRT_DIRECTORY=/workspace/fastdeploy/serving/TensorRT-${trt_version}/ -DENABLE_PADDLE_BACKEND=ON -DENABLE_ORT_BACKEND=ON -DENABLE_OPENVINO_BACKEND=ON -DENABLE_VISION=OFF -DBUILD_FASTDEPLOY_PYTHON=OFF -DENABLE_PADDLE2ONNX=ON -DENABLE_TEXT=OFF -DLIBRARY_NAME=fastdeploy_runtime;
            make -j`nproc`;
            make install;
            cd /workspace/fastdeploy/serving;
            rm -rf build; mkdir build; cd build;
            export https_proxy=${https_proxy_tmp}
            export http_proxy=${http_proxy_tmp}
            cmake .. -DFASTDEPLOY_DIR=/workspace/fastdeploy/build/fastdeploy_install -DTRITON_COMMON_REPO_TAG=r21.10 -DTRITON_CORE_REPO_TAG=r21.10 -DTRITON_BACKEND_REPO_TAG=r21.10;
            make -j`nproc`'

echo "build FD GPU library done"

else

echo "start build FD CPU library"

docker run -i --rm --name ${docker_name} \
           -v`pwd`/..:/workspace/fastdeploy \
           -e "http_proxy=${http_proxy}" \
           -e "https_proxy=${https_proxy}" \
           paddlepaddle/fastdeploy:21.10-cpu-only-buildbase \
           bash -c \
           'export https_proxy_tmp=${https_proxy}
            export http_proxy_tmp=${http_proxy}
            cd /workspace/fastdeploy/python;
            rm -rf .setuptools-cmake-build dist build fastdeploy/libs/third_libs;
            ln -s /usr/bin/python3 /usr/bin/python;
            export WITH_GPU=OFF;
            export ENABLE_ORT_BACKEND=OFF;
            export ENABLE_PADDLE_BACKEND=OFF;
            export ENABLE_OPENVINO_BACKEND=OFF;
            export ENABLE_VISION=ON;
            export ENABLE_TEXT=ON;
            unset http_proxy
            unset https_proxy
            python setup.py build;
            python setup.py bdist_wheel;
            cd /workspace/fastdeploy;
            rm -rf build; mkdir build; cd build;
            cmake .. -DENABLE_TRT_BACKEND=OFF -DCMAKE_INSTALL_PREFIX=${PWD}/fastdeploy_install -DWITH_GPU=OFF -DENABLE_PADDLE_BACKEND=ON -DENABLE_ORT_BACKEND=ON -DENABLE_OPENVINO_BACKEND=ON -DENABLE_VISION=OFF -DBUILD_FASTDEPLOY_PYTHON=OFF -DENABLE_PADDLE2ONNX=ON -DENABLE_TEXT=OFF -DLIBRARY_NAME=fastdeploy_runtime;
            make -j`nproc`;
            make install;
            cd /workspace/fastdeploy/serving;
            rm -rf build; mkdir build; cd build;
            export https_proxy=${https_proxy_tmp}
            export http_proxy=${http_proxy_tmp}
            cmake .. -DTRITON_ENABLE_GPU=OFF -DFASTDEPLOY_DIR=/workspace/fastdeploy/build/fastdeploy_install -DTRITON_COMMON_REPO_TAG=r21.10 -DTRITON_CORE_REPO_TAG=r21.10 -DTRITON_BACKEND_REPO_TAG=r21.10;
            make -j`nproc`'

echo "build FD CPU library done"

fi
cmake

The following was added to CMakeLists.txt to set the C++ standard and force single-threaded compilation:

set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_BUILD_PARALLEL_LEVEL 1)
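One note on the snippet above: CMake documents CMAKE_BUILD_PARALLEL_LEVEL as an environment variable read by cmake --build, so a set() inside CMakeLists.txt may not actually limit parallelism. A minimal sketch of the same intent, with the single-job limit expressed where it normally takes effect (the make -j`nproc` call referenced below is the one in build.sh above):

set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
# Single-job builds are usually forced at build time instead, e.g.
#   CMAKE_BUILD_PARALLEL_LEVEL=1 cmake --build .
#   cmake --build . -j 1
# or by changing make -j`nproc` to make -j1 in build.sh.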

Compile errors with TensorRT-10.11.0.33:

/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc: In member function 'bool fastdeploy::TrtBackend::LoadTrtCache(const string&)':
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:90:36: error: 'using element_type = class nvinfer1::ICudaEngine' {aka 'class nvinfer1::ICudaEngine'} has no member named 'getNbBindings'
   90 |   for (int32_t i = 0; i < engine_->getNbBindings(); ++i) {
      |                                    ^~~~~~~~~~~~~
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:91:19: error: 'using element_type = class nvinfer1::ICudaEngine' {aka 'class nvinfer1::ICudaEngine'} has no member named 'bindingIsInput'
   91 |     if (!engine_->bindingIsInput(i)) {
      |                   ^~~~~~~~~~~~~~
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:94:31: error: 'using element_type = class nvinfer1::ICudaEngine' {aka 'class nvinfer1::ICudaEngine'} has no member named 'getProfileDimensions'    
   94 |     auto min = ToVec(engine_->getProfileDimensions(
      |                               ^~~~~~~~~~~~~~~~~~~~
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:96:31: error: 'using element_type = class nvinfer1::ICudaEngine' {aka 'class nvinfer1::ICudaEngine'} has no member named 'getProfileDimensions'    
   96 |     auto max = ToVec(engine_->getProfileDimensions(
      |                               ^~~~~~~~~~~~~~~~~~~~
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:98:38: error: 'using element_type = class nvinfer1::ICudaEngine' {aka 'class nvinfer1::ICudaEngine'} has no member named 'getBindingName'
   98 |     auto name = std::string(engine_->getBindingName(i));
      |                                      ^~~~~~~~~~~~~~
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc: In member function 'virtual bool fastdeploy::TrtBackend::Infer(std::vector<fastdeploy::FDTensor>&, std::vector<fastdeploy::FDTensor>*, bool)':    
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:347:18: error: 'using element_type = class nvinfer1::IExecutionContext' {aka 'class nvinfer1::IExecutionContext'} has no member named 'enqueueV2'; did you mean 'enqueueV3'?
  347 |   if (!context_->enqueueV2(bindings_.data(), stream_, nullptr)) {
      |                  ^~~~~~~~~
      |                  enqueueV3
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc: In member function 'void fastdeploy::TrtBackend::GetInputOutputInfo()':
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:416:29: error: 'using element_type = class nvinfer1::ICudaEngine' {aka 'class nvinfer1::ICudaEngine'} has no member named 'getNbBindings'
  416 |   auto num_binds = engine_->getNbBindings();
      |                             ^~~~~~~~~~~~~
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:418:45: error: 'using element_type = class nvinfer1::ICudaEngine' {aka 'class nvinfer1::ICudaEngine'} has no member named 'getBindingName'
  418 |     std::string name = std::string(engine_->getBindingName(i));
      |                                             ^~~~~~~~~~~~~~
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:419:33: error: 'using element_type = class nvinfer1::ICudaEngine' {aka 'class nvinfer1::ICudaEngine'} has no member named 'getBindingDimensions'   
  419 |     auto shape = ToVec(engine_->getBindingDimensions(i));
      |                                 ^~~~~~~~~~~~~~~~~~~~
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:420:27: error: 'using element_type = class nvinfer1::ICudaEngine' {aka 'class nvinfer1::ICudaEngine'} has no member named 'getBindingDataType'; did you mean 'getTensorDataType'?
  420 |     auto dtype = engine_->getBindingDataType(i);
      |                           ^~~~~~~~~~~~~~~~~~
      |                           getTensorDataType
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:421:18: error: 'using element_type = class nvinfer1::ICudaEngine' {aka 'class nvinfer1::ICudaEngine'} has no member named 'bindingIsInput'
  421 |     if (engine_->bindingIsInput(i)) {
      |                  ^~~~~~~~~~~~~~
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc: In member function 'void fastdeploy::TrtBackend::SetInputs(const std::vector<fastdeploy::FDTensor>&)':
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:451:15: error: 'using element_type = class nvinfer1::IExecutionContext' {aka 'class nvinfer1::IExecutionContext'} has no member named 'setBindingDimensions'
  451 |     context_->setBindingDimensions(idx, dims);
      |               ^~~~~~~~~~~~~~~~~~~~
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc: In member function 'void fastdeploy::TrtBackend::AllocateOutputsBuffer(std::vector<fastdeploy::FDTensor>*, bool)':
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:518:34: error: 'using element_type = class nvinfer1::IExecutionContext' {aka 'class nvinfer1::IExecutionContext'} has no member named 'getBindingDimensions'
  518 |     auto output_dims = context_->getBindingDimensions(idx);
      |                                  ^~~~~~~~~~~~~~~~~~~~
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc: In member function 'bool fastdeploy::TrtBackend::BuildTrtEngine()':
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:562:40: warning: 'bool nvinfer1::IBuilder::platformHasFastFp16() const' is deprecated [-Wdeprecated-declarations]
  562 |     if (!builder_->platformHasFastFp16()) {
      |                                        ^
In file included from /workspace/fastdeploy/./fastdeploy/runtime/backends/tensorrt/trt_backend.h:24,
                 from /workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:15:
/workspace/fastdeploy/serving/TensorRT-10.11.0.33/include/NvInfer.h:10424:25: note: declared here
10424 |     TRT_DEPRECATED bool platformHasFastFp16() const noexcept
      |                         ^~~~~~~~~~~~~~~~~~~
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:579:15: error: 'class nvinfer1::IBuilder' has no member named 'setMaxBatchSize'; did you mean 'getMaxDLABatchSize'?
  579 |     builder_->setMaxBatchSize(option_.max_batch_size);
      |               ^~~~~~~~~~~~~~~
      |               getMaxDLABatchSize
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:581:11: error: 'class nvinfer1::IBuilderConfig' has no member named 'setMaxWorkspaceSize'
  581 |   config->setMaxWorkspaceSize(option_.max_workspace_size);
      |           ^~~~~~~~~~~~~~~~~~~
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:620:40: warning: 'bool nvinfer1::IBuilder::platformHasFastInt8() const' is deprecated [-Wdeprecated-declarations]
  620 |     if (!builder_->platformHasFastInt8()) {
      |                                        ^
In file included from /workspace/fastdeploy/./fastdeploy/runtime/backends/tensorrt/trt_backend.h:24,
                 from /workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:15:
/workspace/fastdeploy/serving/TensorRT-10.11.0.33/include/NvInfer.h:10434:25: note: declared here
10434 |     TRT_DEPRECATED bool platformHasFastInt8() const noexcept
      |                         ^~~~~~~~~~~~~~~~~~~
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:629:43: warning: 'void nvinfer1::IBuilderConfig::setInt8Calibrator(nvinfer1::IInt8Calibrator*)' is deprecated [-Wdeprecated-declarations]
  629 |       config->setInt8Calibrator(calibrator);
      |                                           ^
In file included from /workspace/fastdeploy/./fastdeploy/runtime/backends/tensorrt/trt_backend.h:24,
                 from /workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:15:
/workspace/fastdeploy/serving/TensorRT-10.11.0.33/include/NvInfer.h:9493:25: note: declared here
 9493 |     TRT_DEPRECATED void setInt8Calibrator(IInt8Calibrator* calibrator) noexcept
      |                         ^~~~~~~~~~~~~~~~~
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc: In member function 'bool fastdeploy::TrtBackend::CreateTrtEngineFromOnnx(const string&)':
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:684:52: warning: 'nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH' is deprecated [-Wdeprecated-declarations]
  684 |           nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
      |                                                    ^~~~~~~~~~~~~~~
In file included from /workspace/fastdeploy/./fastdeploy/runtime/backends/tensorrt/trt_backend.h:24,
                 from /workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:15:
/workspace/fastdeploy/serving/TensorRT-10.11.0.33/include/NvInfer.h:10378:5: note: declared here
10378 |     kEXPLICIT_BATCH TRT_DEPRECATED_ENUM = 0,
      |     ^~~~~~~~~~~~~~~
/workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:684:52: warning: 'nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH' is deprecated [-Wdeprecated-declarations]
  684 |           nvinfer1::NetworkDefinitionCreationFlag::kEXPLICIT_BATCH);
      |                                                    ^~~~~~~~~~~~~~~
In file included from /workspace/fastdeploy/./fastdeploy/runtime/backends/tensorrt/trt_backend.h:24,
                 from /workspace/fastdeploy/fastdeploy/runtime/backends/tensorrt/trt_backend.cc:15:
/workspace/fastdeploy/serving/TensorRT-10.11.0.33/include/NvInfer.h:10378:5: note: declared here
10378 |     kEXPLICIT_BATCH TRT_DEPRECATED_ENUM = 0,
      |     ^~~~~~~~~~~~~~~
make[2]: *** [CMakeFiles/fastdeploy_runtime.dir/build.make:713: CMakeFiles/fastdeploy_runtime.dir/fastdeploy/runtime/backends/tensorrt/trt_backend.cc.o] Error 1
make[2]: *** Waiting for unfinished jobs....
make[1]: *** [CMakeFiles/Makefile2:161: CMakeFiles/fastdeploy_runtime.dir/all] Error 2
make: *** [Makefile:152: all] Error 2
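The errors above are the TensorRT 10 API break: the binding-index calls that trt_backend.cc relies on (getNbBindings, bindingIsInput, getBindingName/Dimensions/DataType, getProfileDimensions, setBindingDimensions, enqueueV2, setMaxBatchSize, setMaxWorkspaceSize) were removed in TensorRT 10 in favor of the name-based I/O tensor API (getNbIOTensors, getIOTensorName, getTensorShape, getTensorDataType, getProfileShape, setInputShape, setTensorAddress, enqueueV3, and IBuilderConfig::setMemoryPoolLimit), so the backend needs porting rather than a build-flag change. Until such a port exists, a configure-time guard along these lines could at least fail fast with a readable message; this is only a sketch, and the file placement (e.g. cmake/tensorrt.cmake) and variable names are assumptions:

# Refuse TensorRT 10+ early instead of producing the wall of compile errors above.
file(READ "${TRT_DIRECTORY}/include/NvInferVersion.h" NVINFER_VERSION_CONTENT)
string(REGEX MATCH "NV_TENSORRT_MAJOR ([0-9]+)" _unused "${NVINFER_VERSION_CONTENT}")
if(CMAKE_MATCH_1 GREATER 8)
  message(FATAL_ERROR
          "TensorRT ${CMAKE_MATCH_1}.x found in ${TRT_DIRECTORY}, but FastDeploy's TRT "
          "backend currently targets the TensorRT 8 bindings API, which was removed in "
          "TensorRT 10. Use TensorRT 8.x, or port trt_backend.cc to the name-based "
          "I/O tensor API first.")
endif()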

impl1874 · Jun 11, 2025