
🐛 [Bug] TensorRT-RTX: need to remove timing cache

Open · lanluo-nvidia opened this issue on Sep 14, 2025 · 0 comments

Bug Description

The timing cache is used at build time to store autotuning results in TRT Enterprise. Since TRT-RTX does not use autotuning, @Hongyu Miao (HONGYUM) removed the timing cache API from TRT-RTX, so Torch-TensorRT's TRT-RTX path needs to drop its timing-cache handling as well.
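For reference, a minimal sketch of the standard TensorRT timing-cache flow that would have to be removed (or skipped) when building against TRT-RTX. This uses the regular TensorRT Python API; the cache path `timing_cache.bin` is only an example, not a Torch-TensorRT default.

```python
import os
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
config = builder.create_builder_config()

cache_path = "timing_cache.bin"  # example path, not a Torch-TensorRT default

# Load a previously serialized cache if one exists, otherwise start empty.
serialized = b""
if os.path.exists(cache_path):
    with open(cache_path, "rb") as f:
        serialized = f.read()

timing_cache = config.create_timing_cache(serialized)
config.set_timing_cache(timing_cache, ignore_mismatch=False)

# ... build the engine with this config, then persist the updated cache ...
with open(cache_path, "wb") as f:
    f.write(config.get_timing_cache().serialize())
```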

The runtime cache, by contrast, is used at inference time to store JIT-compiled kernels/graphs and prevent repeated compilation. It is exposed through a separate API and is not affected by this change. Thanks!
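One possible way to handle the removal on the Torch-TensorRT side is to gate the timing-cache setup on whether the loaded TensorRT build still exposes the API. The sketch below is only an illustration of that idea; the `hasattr` probe and the `maybe_attach_timing_cache` helper are hypothetical, not the actual Torch-TensorRT implementation.

```python
import tensorrt as trt

def maybe_attach_timing_cache(config: "trt.IBuilderConfig", serialized: bytes = b"") -> bool:
    """Attach a timing cache only if this TensorRT build supports it."""
    if not hasattr(config, "create_timing_cache"):
        # TRT-RTX: no autotuning, hence no timing-cache API; nothing to do.
        return False
    cache = config.create_timing_cache(serialized)
    config.set_timing_cache(cache, ignore_mismatch=False)
    return True
```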

To Reproduce

Steps to reproduce the behavior:

Expected behavior

Environment

Build information about Torch-TensorRT can be found by turning on debug messages

  • Torch-TensorRT Version (e.g. 1.0.0):
  • PyTorch Version (e.g. 1.0):
  • CPU Architecture:
  • OS (e.g., Linux):
  • How you installed PyTorch (conda, pip, libtorch, source):
  • Build command you used (if compiling from source):
  • Are you using local sources or building from archives:
  • Python version:
  • CUDA version:
  • GPU models and configuration:
  • Any other relevant information:

Additional context
