TRT Engine Cache Regeneration Issue
Is your feature request related to a problem? Please describe.
The TRT engine cache gets regenerated whenever the model path changes. This is a problem when model file override is used. There have been many similar feature requests:
https://github.com/triton-inference-server/server/issues/4587 https://github.com/triton-inference-server/onnxruntime_backend/pull/126#issuecomment-1237936727
The problem is that ORT internally appears to use the model path as the cache key whenever the path exists:
https://github.com/microsoft/onnxruntime/blob/a433f22f17e59671ff01acf0d270b7e3476a952a/onnxruntime/core/framework/execution_provider.cc#L147-L148
If the path changes but the same model is used, the cache gets regenerated.
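To make the failure mode concrete, here is a minimal repro sketch. It assumes an onnxruntime-gpu build with the TensorRT EP; the model and cache paths are illustrative, not from this issue:

```python
import os
import shutil

import onnxruntime as ort

# Enable the TRT engine cache via real TensorRT EP provider options;
# the cache directory name here is made up for the repro.
trt_options = {
    "trt_engine_cache_enable": True,
    "trt_engine_cache_path": "./trt_cache",
}
providers = [("TensorrtExecutionProvider", trt_options)]

# First load: the TRT engine is built and serialized into ./trt_cache,
# keyed (in part) by the model's path.
ort.InferenceSession("models/1/model.onnx", providers=providers)

# Copy the identical model to a new path, e.g. after a model file
# override bumps the version directory.
os.makedirs("models/2", exist_ok=True)
shutil.copy("models/1/model.onnx", "models/2/model.onnx")

# Second load: the path changed, so the cached engine is not matched
# and a new engine is built, even though the model bytes are identical.
ort.InferenceSession("models/2/model.onnx", providers=providers)
```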
Describe the solution you'd like
There are two possible solutions to this issue:
- Always use the binary stream in ORT as the key to find the TRT cache. This change would not require any changes in the ORT backend (see the first sketch after this list).
- Add an option named "ONNXRUNTIME_LOAD_MODEL_FROM_PATH" to the ONNXRuntime backend. This would give the user an opt-in choice between loading the model from a binary stream or from its path. Users who want the TRT engine cache to be reused correctly would need to set this option to "off". Always loading models from binary does not work, because it breaks models that require external weight files; in binary mode, such models still could not use the TRT cache.
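For context on the first option, the gist is to key the cache on the model content rather than its location. A minimal sketch of content-based keying, assuming a SHA-256 digest of the serialized model would be an acceptable key (the function name is hypothetical, not part of ORT):

```python
import hashlib

def engine_cache_key(model_bytes: bytes) -> str:
    # Hypothetical content-based key: identical model bytes map to the
    # same cache entry no matter where the file lives on disk.
    return hashlib.sha256(model_bytes).hexdigest()

with open("models/1/model.onnx", "rb") as f:
    key_v1 = engine_cache_key(f.read())
with open("models/2/model.onnx", "rb") as f:
    key_v2 = engine_cache_key(f.read())

# Same bytes => same key => the cached engine is found after a path change.
assert key_v1 == key_v2
```

Note that the external-weights caveat from the second option applies here too: a digest over the main .onnx stream alone would not capture changes in external weight files.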
CC @GuanLuo @tanmayv25 @dzier
@pranavsharma @askhade ^^^
@jywu-msft is working on a fix for this.
@jywu-msft @pranavsharma Is this issue resolved by the linked issue #13015? I think we should also add some testing in the qa directory.
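On the testing point, one possible shape for a qa check (paths and cache layout are assumptions) is to wrap the repro from the issue description in an assertion that the second load leaves the cache directory untouched:

```python
import os
import shutil

import onnxruntime as ort

CACHE = "./trt_cache"
providers = [("TensorrtExecutionProvider",
              {"trt_engine_cache_enable": True, "trt_engine_cache_path": CACHE})]

# Build (or reuse) the engine for the original path, then snapshot the cache.
ort.InferenceSession("models/1/model.onnx", providers=providers)
before = set(os.listdir(CACHE))

# Reload the identical model from a different path.
os.makedirs("models/2", exist_ok=True)
shutil.copy("models/1/model.onnx", "models/2/model.onnx")
ort.InferenceSession("models/2/model.onnx", providers=providers)
after = set(os.listdir(CACHE))

# With the fix in place, no new engine should have been serialized.
assert before == after, "TRT engine cache was regenerated after a path change"
```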
I believe this could also be solved by https://github.com/microsoft/onnxruntime/pull/18217