🐛 [Bug] Engine cache tests fail with torch.compile backend=tensorrt
Bug Description
The engine cache tests failed:

FAILED models/test_engine_cache.py::TestEngineCache::test_dynamo_compile_with_custom_engine_cache
FAILED models/test_engine_cache.py::TestEngineCache::test_torch_compile_graph_break
FAILED models/test_engine_cache.py::TestEngineCache::test_torch_compile_with_custom_engine_cache
https://gitlab-master.nvidia.com/dl/dgx/pytorch/-/jobs/234818816
https://gitlab-master.nvidia.com/dl/dgx/pytorch/-/jobs/234818814
To Reproduce
The failures can be reproduced on a local Linux workstation.
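A minimal sketch of a local reproduction, assuming the test file path `models/test_engine_cache.py` from the failure output above is relative to the repository's test directory (the `-k` expression simply selects the three failing test names; adjust the working directory to wherever the tests live in your checkout):

```shell
# Run only the three failing engine-cache tests, selected by name.
# Requires a working Torch-TensorRT install with a CUDA-capable GPU.
python -m pytest models/test_engine_cache.py \
    -k "test_dynamo_compile_with_custom_engine_cache or test_torch_compile_graph_break or test_torch_compile_with_custom_engine_cache" \
    -v
```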
Expected behavior
Environment
Build information about Torch-TensorRT can be found by turning on debug messages
- Torch-TensorRT Version (e.g. 1.0.0):
- PyTorch Version (e.g. 1.0):
- CPU Architecture:
- OS (e.g., Linux):
- How you installed PyTorch (conda, pip, libtorch, source):
- Build command you used (if compiling from source):
- Are you using local sources or building from archives:
- Python version:
- CUDA version:
- GPU models and configuration:
- Any other relevant information:
Additional context
https://github.com/pytorch/TensorRT/pull/3915
Hi @lanluo-nvidia, PR #3915 can be closed, because I improved and refactored engine caching in #3932.