🐛 [Bug] Remove `prepare_inputs` and stop passing inputs during compilation
Bug Description
Currently, we pass `trt_arg_inputs` and `trt_kwarg_inputs` to `compile_module` (https://github.com/pytorch/TensorRT/blob/main/py/torch_tensorrt/dynamo/_compiler.py#L682), but they are never actually used. The `prepare_inputs` call also sometimes fails during graph parsing for the dry run. Since all input information is now read from graph metadata, we should consider removing the user-provided inputs that are passed around internally.
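A minimal sketch of the idea that input specs can be recovered from graph metadata rather than from user-passed inputs. This is illustrative only: `infer_inputs_from_graph` is a hypothetical helper, not the actual Torch-TensorRT API; it assumes a `torch.fx.GraphModule` whose placeholder nodes carry `tensor_meta` (e.g. populated by shape propagation).

```python
import torch
from torch.fx.passes.shape_prop import ShapeProp


def infer_inputs_from_graph(gm: torch.fx.GraphModule):
    """Hypothetical helper: collect (shape, dtype) for each placeholder
    node from graph metadata, so no user inputs need to be passed in."""
    specs = []
    for node in gm.graph.nodes:
        if node.op == "placeholder":
            meta = node.meta.get("tensor_meta")
            if meta is not None:
                specs.append((tuple(meta.shape), meta.dtype))
            else:
                # Fall back to the FakeTensor stored under node.meta["val"]
                val = node.meta.get("val")
                if val is None:
                    raise RuntimeError(f"No metadata for input {node.name}")
                specs.append((tuple(val.shape), val.dtype))
    return specs


class Add(torch.nn.Module):
    def forward(self, x, y):
        return x + y


gm = torch.fx.symbolic_trace(Add())
# Populate tensor_meta on each node by running shape propagation once
ShapeProp(gm).propagate(torch.randn(2, 3), torch.randn(2, 3))
print(infer_inputs_from_graph(gm))
```

With metadata in place, downstream compilation can derive shapes and dtypes for every graph input without threading `trt_arg_inputs`/`trt_kwarg_inputs` through the call stack.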
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Environment
Build information about Torch-TensorRT can be found by turning on debug messages
- Torch-TensorRT Version (e.g. 1.0.0):
- PyTorch Version (e.g. 1.0):
- CPU Architecture:
- OS (e.g., Linux):
- How you installed PyTorch (`conda`, `pip`, `libtorch`, source):
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version:
- CUDA version:
- GPU models and configuration:
- Any other relevant information: