
Cannot install TensorRT-LLM on Windows - No CUDA compiler found

Open sam-india-007 opened this issue 1 year ago • 2 comments

System Info

  • main branch of trtllm
  • Windows 11, bare-metal build from source

Who can help?

@byshiue

Information

  • [X] The official example scripts
  • [ ] My own modified scripts

Tasks

  • [X] An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • [ ] My own task or dataset (give details below)

Reproduction

I followed the steps in the Windows README for a bare-metal installation on Windows.
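For context, the bare-metal build boils down to running the wheel-build script from a Developer PowerShell. Below is a rough sketch of that invocation, reconstructed from the cmake command in the traceback further down; the flag names and the TensorRT path are assumptions, not copied from the README:

    # Run from an x64 Native Tools / Developer PowerShell so MSVC is on PATH
    python .\scripts\build_wheel.py --cuda_architectures "89-real" --trt_root C:\TensorRT-9.2.0.5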

Expected behavior

Build successful, wheel built

Actual behavior

-- The CXX compiler identification is MSVC 19.38.33135.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.38.33130/bin/Hostx86/x86/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- NVTX is disabled
-- Importing batch manager
-- Building PyTorch
-- Building Google tests
-- Building benchmarks
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - NOTFOUND
CMake Error at CMakeLists.txt:118 (message):
  No CUDA compiler found

-- Configuring incomplete, errors occurred!
Traceback (most recent call last):
  File "C:\TensorRT-LLM-Win\scripts\build_wheel.py", line 310, in <module>
    main(**vars(args))
  File "C:\TensorRT-LLM-Win\scripts\build_wheel.py", line 162, in main
    build_run(
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\subprocess.py", line 526, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 'cmake -DCMAKE_BUILD_TYPE="Release" -DBUILD_PYT="ON" -DBUILD_PYBIND="ON" "-DCMAKE_CUDA_ARCHITECTURES=89-real" "-DENABLE_MULTI_DEVICE=0" -DTRT_LIB_DIR=C:/TensorRT-9.2.0.5/lib -DTRT_INCLUDE_DIR=C:/TensorRT-9.2.0.5/include -GNinja -S "C:\TensorRT-LLM-Win\cpp"' returned non-zero exit status 1.
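The key line is "Looking for a CUDA compiler - NOTFOUND": CMake never finds nvcc.exe, so configuration aborts before anything is compiled. A quick diagnostic (not from the issue itself) is to confirm that the same shell that launches build_wheel.py can see the CUDA toolkit at all:

    # Should print the CUDA release (e.g. 12.x) if nvcc is reachable
    nvcc --version

    # List the PATH entries that mention CUDA, if any
    $env:Path -split ';' | Select-String CUDA

If nvcc --version fails here, the problem is the shell environment rather than the TensorRT-LLM build scripts.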

Additional notes

I did try setting CUDACXX and PATH, but no combination seems to work.
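For what it is worth, CMake only honors CUDACXX and PATH if they are set in the very session that launches build_wheel.py; changes made in the System Properties dialog do not reach terminals that were already open. A sketch of setting them in PowerShell, with the CUDA 12.2 install path as an assumption:

    # Assumed default install location for CUDA 12.2; adjust to the real one
    $cuda = "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.2"
    $env:CUDA_PATH = $cuda
    $env:CUDACXX   = "$cuda\bin\nvcc.exe"
    $env:Path      = "$cuda\bin;$env:Path"
    # Re-run build_wheel.py from this same shell afterwards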

sam-india-007 · Feb 06 '24 11:02

Checked with the rel branch as well; same issue.

sam-india-007 · Feb 06 '24 11:02

Run the provided setup script; it is under the windows folder. You can skip Python and MPI if they are already correctly installed. Make sure to run PowerShell as administrator. The script will install CUDA 12.2 and add it to PATH. After installation, close the current terminal and open a new one. See the screenshot below and the sketch at the end of this thread.

(screenshot from the original comment)

MustaphaU · Feb 08 '24 02:02
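To make the last comment concrete, here is a rough sketch of those steps. The script name (setup_env.ps1 under the windows folder) and the way it handles skipping components are assumptions; check the windows folder of the repo for the actual names and parameters:

    # Run from an elevated (Administrator) PowerShell at the repository root
    Set-ExecutionPolicy Bypass -Scope Process
    .\windows\setup_env.ps1    # pass the script's skip options for Python/MPI if those are already installed

    # The script installs CUDA 12.2 and adds it to PATH;
    # close this terminal, open a new one, and verify:
    nvcc --version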