TensorRT-LLM
The installation of tensorrt-llm for version 0.11.0 failed
System Info
- TensorRT-LLM version: 0.11.0
- Python Version: CPython 3.12.3
- Operating System: Linux 6.8.0-1012-aws
- CPU Architecture: x86_64
- Driver Version: 560.35
- CUDA Version: 12.6
Who can help?
No response
Information
- [x] The official example scripts
- [ ] My own modified scripts
Tasks
- [x] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
Reproduction
In a new virtual environment, run `pip install --extra-index-url https://pypi.nvidia.com tensorrt-llm==0.11.0`. This version is expected to be installable, since it is the version pinned by the Llama example: https://github.com/NVIDIA/TensorRT-LLM/blob/05316d3313360012536ace46c781518f5afae75e/examples/llama/requirements.txt
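As a side check (a sketch, assuming pypi.nvidia.com serves a standard PEP 503 "simple" index page for `tensorrt-llm`), the wheel filenames the index actually publishes can be extracted and compared against the local interpreter. The sample HTML below is hypothetical, for illustration only:

```python
import re

# Diagnostic sketch (assumption: pypi.nvidia.com follows the PEP 503 "simple"
# index layout at https://pypi.nvidia.com/tensorrt-llm/). Fetch that page,
# e.g. with curl or urllib, and pass its HTML here to list published wheels.
def wheel_names(index_html: str) -> list[str]:
    """Extract .whl filenames from a PEP 503 simple-index page."""
    return re.findall(r">([^<>]+\.whl)<", index_html)

# Hypothetical snippet of such a page:
sample = '<a href="/whl/x.whl#sha256=abc">tensorrt_llm-0.11.0-cp310-cp310-linux_x86_64.whl</a>'
print(wheel_names(sample))  # a cp310-only wheel cannot install under CPython 3.12
```

If none of the published filenames carry tags matching the local environment, the stub package has nothing to download and fails as shown below.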
Expected behavior
Install succeeds
Actual behavior
Installation fails with the following output:
````
(.venv) ubuntu$ pip install --extra-index-url https://pypi.nvidia.com tensorrt-llm==0.11.0
Looking in indexes: https://pypi.org/simple, https://pypi.nvidia.com
Collecting tensorrt-llm==0.11.0
  Using cached tensorrt_llm-0.11.0.tar.gz (668 bytes)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Preparing metadata (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [41 lines of output]
      Traceback (most recent call last):
        File "/srv/lithos/mlos/.venv/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/srv/lithos/mlos/.venv/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/srv/lithos/mlos/.venv/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 152, in prepare_metadata_for_build_wheel
          whl_basename = backend.build_wheel(metadata_directory, config_settings)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-emt_26sd/overlay/lib/python3.12/site-packages/nvidia_stub/buildapi.py", line 29, in build_wheel
          return download_wheel(pathlib.Path(wheel_directory), config_settings)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-emt_26sd/overlay/lib/python3.12/site-packages/nvidia_stub/wheel.py", line 175, in download_wheel
          report_install_failure(distribution, version, None)
        File "/tmp/pip-build-env-emt_26sd/overlay/lib/python3.12/site-packages/nvidia_stub/error.py", line 63, in report_install_failure
          raise InstallFailedError(
      nvidia_stub.error.InstallFailedError:
      *******************************************************************************
      The installation of tensorrt-llm for version 0.11.0 failed.

      This is a special placeholder package which downloads a real wheel package
      from https://pypi.nvidia.com. If https://pypi.nvidia.com is not reachable, we
      cannot download the real wheel file to install.

      You might try installing this package via
      ```
      $ pip install --extra-index-url https://pypi.nvidia.com tensorrt-llm
      ```

      Here is some debug information about your platform to include in any bug
      report:

      Python Version: CPython 3.12.3
      Operating System: Linux 6.8.0-1012-aws
      CPU Architecture: x86_64
      Driver Version: 560.35
      CUDA Version: 12.6
      *******************************************************************************
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
(.venv) ubuntu$
````
Additional notes
This looks similar to #1362.
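For debugging, it may help to compare this environment's wheel compatibility tags against the wheel filenames published on pypi.nvidia.com; if the index only ships wheels for an older interpreter, a CPython 3.12 environment cannot match them and the stub fails exactly as above. A minimal stdlib-only sketch (tag values in the comments are assumptions based on the platform info in this report):

```python
import sys
import sysconfig

# Wheel filenames encode the interpreter and platform they support; the stub
# package can only download a wheel whose tags match this environment.
interpreter_tag = f"cp{sys.version_info.major}{sys.version_info.minor}"
platform_tag = sysconfig.get_platform().replace("-", "_").replace(".", "_")
print(interpreter_tag)  # cp312 for the CPython 3.12.3 in this report
print(platform_tag)     # e.g. linux_x86_64
```

Compare these two values against the `cpXY` and platform parts of the published `tensorrt_llm-0.11.0-*.whl` filenames.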