
Quantization guidelines broken

dhruvmullick opened this issue 1 year ago

System Info

Python: 3.10.12
Container: nvcr.io/nvidia/tritonserver:24.06-trtllm-python-py3
TensorRT-LLM: 0.10

Who can help?

No response

Information

  • [X] The official example scripts
  • [ ] My own modified scripts

Tasks

  • [ ] An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • [ ] My own task or dataset (give details below)

Reproduction

Follow the steps given in https://github.com/NVIDIA/TensorRT-LLM/tree/main/examples/quantization

On executing: pip install --no-cache-dir --extra-index-url https://pypi.nvidia.com nvidia-modelopt==0.9.3

This fails because nvidia-modelopt==0.9.3 is no longer available on PyPI (see the release history at https://pypi.org/project/nvidia-modelopt/0.13.0/#history).

Moreover, the install_requirements.sh file isn't referenced anywhere in the README.md. Should it be removed?

Expected behavior

The steps given in the README.md should work.

Actual behavior

The pip install fails because the pinned package version no longer exists on the index.

Additional notes

N/A

dhruvmullick avatar Jul 04 '24 17:07 dhruvmullick

@Tracin Could you please take a look? Thanks

QiJune avatar Jul 05 '24 01:07 QiJune

Yeah, ModelOpt is installed via https://github.com/NVIDIA/TensorRT-LLM/blob/main/requirements.txt#L24, so there is no need to install it separately. I will remove that step.
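Since the separate pin is going away, a quick way to confirm that ModelOpt arrived as a transitive dependency of tensorrt_llm is to query the installed package metadata. This is just a minimal sketch; the exact version reported will depend on which TensorRT-LLM release pulled it in:

```python
from importlib.metadata import version, PackageNotFoundError

# nvidia-modelopt is pinned in TensorRT-LLM's requirements.txt,
# so it should already be present after installing tensorrt_llm.
try:
    print("nvidia-modelopt", version("nvidia-modelopt"))
except PackageNotFoundError:
    print("nvidia-modelopt not installed; reinstalling tensorrt_llm should pull it in")
```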

Tracin avatar Jul 05 '24 02:07 Tracin

Just a comment: I had to install setuptools in the image, otherwise I got the following:

 docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 -it --rm nvcr.io/nvidia/tritonserver:24.07-trtllm-python-py3  python3 -c "from modelopt.torch.export import export_tensorrt_llm_checkpoint"


=============================
== Triton Inference Server ==
=============================

NVIDIA Release 24.07 (build 102761898)
Triton Server Version 2.48.0

Copyright (c) 2018-2024, NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES.  All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License.
By pulling and using the container, you accept the terms and conditions of this license:
https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

NOTE: CUDA Forward Compatibility mode ENABLED.
  Using CUDA 12.4 driver version 550.54.15 with kernel driver version 535.183.01.
  See https://docs.nvidia.com/deploy/cuda-compatibility/ for details.

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/modelopt/torch/__init__.py", line 13, in <module>
    from . import opt, quantization, sparsity, utils  # noqa: E402
  File "/usr/local/lib/python3.10/dist-packages/modelopt/torch/opt/__init__.py", line 30, in <module>
    from . import plugins, utils
  File "/usr/local/lib/python3.10/dist-packages/modelopt/torch/opt/utils.py", line 17, in <module>
    from modelopt.torch.utils import unwrap_model
  File "/usr/local/lib/python3.10/dist-packages/modelopt/torch/utils/__init__.py", line 13, in <module>
    from .cpp_extension import *
  File "/usr/local/lib/python3.10/dist-packages/modelopt/torch/utils/cpp_extension.py", line 21, in <module>
    from torch.utils.cpp_extension import load
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/cpp_extension.py", line 9, in <module>
    import setuptools
ModuleNotFoundError: No module named 'setuptools'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/local/lib/python3.10/dist-packages/modelopt/torch/__init__.py", line 15, in <module>
    raise ImportError("Please install optional ``[torch]`` dependencies.") from e
ImportError: Please install optional ``[torch]`` dependencies.
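For what it's worth, the failure above can be detected up front: as the traceback shows, torch.utils.cpp_extension does `import setuptools` at module load, which is what makes the modelopt import blow up. A minimal pre-flight check (a sketch, assuming only the stock container image):

```python
import importlib.util

# torch.utils.cpp_extension imports setuptools at module load time,
# so modelopt.torch fails to import whenever setuptools is absent.
if importlib.util.find_spec("setuptools") is None:
    print("setuptools missing; fix with: pip install setuptools")
else:
    print("setuptools present; the modelopt import should get past this point")
```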

KeitaW avatar Aug 02 '24 00:08 KeitaW

@KeitaW Are you still getting that error? Otherwise, I will close this issue.

Tracin avatar Nov 14 '24 06:11 Tracin

pip install "nvidia-modelopt[torch]" "tensorrt~=10.8.0" --extra-index-url https://pypi.nvidia.com
Looking in indexes: https://pypi.org/simple, https://pypi.nvidia.com
Collecting tensorrt~=10.8.0
  Downloading https://pypi.nvidia.com/tensorrt/tensorrt-10.8.0.43.tar.gz (35 kB)
  Preparing metadata (setup.py) ... done
ERROR: Could not find a version that satisfies the requirement nvidia-modelopt[torch] (from versions: none)
ERROR: No matching distribution found for nvidia-modelopt[torch]

raulgupta avatar Apr 14 '25 01:04 raulgupta