Support Loop Translation to TensorRT
In order to fully support RNNs, we want to be able to map recurrent subgraphs to TRT. To do so, we must expand the compiler's ability to recognize these cases and generate a TensorRT Loop.
Relevant TRT Documentation:
- https://docs.nvidia.com/deeplearning/tensorrt/developer-guide/index.html#define-loops
- https://docs.nvidia.com/deeplearning/tensorrt/api/c_api/classnvinfer1_1_1_i_loop.html
Relevant PyTorch Documentation: https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/OVERVIEW.md#loops
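On the PyTorch side, a loop whose trip count is only known at runtime shows up in the TorchScript IR as a `prim::Loop` node (see the OVERVIEW doc linked above) — this is the node the compiler would need to recognize and lower to a TensorRT Loop. A small example:

```python
import torch

@torch.jit.script
def cumulative(x: torch.Tensor, n: int) -> torch.Tensor:
    # The trip count `n` is a runtime value, so TorchScript represents
    # this loop as a prim::Loop node rather than unrolling it.
    acc = x
    for _ in range(n):
        acc = acc + x
    return acc

# Printing the graph shows the prim::Loop node and its body block.
print(cumulative.graph)
```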
Some more issues we can address with this issue:
- Can we expand loop translation when unrolling the loop (see the lowering pass)?
- Can we backtrace through the graph and freeze any frozen operators?
To add on, we want to be able to handle types like int inside the loop block at runtime. This will probably require mapping functions that operate on identified recurrence objects to a custom namespace (operating on tensor instead of int) rather than standard aten, so that a converter is engaged instead of an evaluator, and freezing external dependencies in the TRT graph at conversion time.
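The motivation for mapping int-typed recurrences onto tensors can be shown in plain PyTorch terms (a hypothetical illustration, not a TRTorch API): TensorRT loop recurrences operate on tensors, so an int recurrence has no converter, while the same computation on a 0-d tensor does.

```python
import torch

def counter_as_int(n: int) -> int:
    # aten::add(int, int): only an evaluator can handle this, so the
    # loop must be fully evaluatable at conversion time.
    i = 0
    for _ in range(n):
        i = i + 1
    return i

def counter_as_tensor(n: int) -> torch.Tensor:
    # The same recurrence on a 0-d int32 tensor: aten::add(Tensor, Scalar)
    # has a converter, so it could become a TensorRT loop recurrence.
    i = torch.zeros((), dtype=torch.int32)
    for _ in range(n):
        i = i + 1
    return i
```

Both variants compute the same value; only the second expresses the recurrence in a form a tensor converter can lower.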
I'm currently receiving this message when converting my model with TRTorch: "TRTorch currently can only compile loops that are evaluatable at conversion time." What is the current timeline to add loop translation support for dynamic loop bodies? It's the main functionality missing in order for TRTorch to support my model.
This issue has not seen activity for 90 days. Remove the stale label or comment, or this will be closed in 10 days.
@evanbro We are landing partial compilation in TRTorch, this might help you run your model. Right now there is still no great solution for doing translation of dynamic loops into TensorRT directly.
Can you share more about partial compilation? Is there a PR or issue I can refer to?
You can enable it with the "torch_fallback" option. https://github.com/NVIDIA/TRTorch/pull/406
@inocsin can you please confirm if #617 resolves this issue?
@inocsin confirmed offline. Closing.