Hemant Jain
@rarzumanyan on second thought your arguments for not contaminating the namespace / enum are valid. You can just make the new changes I suggested for handling the input_values field specially...
@aras7 Here is a proposal for how we intend to support multiple profiles. Please share your feedback: Use the following format to specify the _profile_ argument. **s3://[email protected]:80/bucket-name/path/to/model**...
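A minimal sketch of how such a profile-prefixed model URI could be parsed, assuming the proposal's format places the credential profile before `@` and the endpoint host and port before the bucket path (the function name and example hostnames here are illustrative, not part of the proposal):

```python
from urllib.parse import urlparse

def parse_model_uri(uri):
    """Split an s3://profile@host:port/bucket-name/path/to/model URI
    into its assumed components. Purely illustrative."""
    parts = urlparse(uri)
    if parts.scheme != "s3":
        raise ValueError(f"expected s3 scheme, got {parts.scheme!r}")
    return {
        "profile": parts.username,        # credential profile, if given
        "host": parts.hostname,           # endpoint host
        "port": parts.port,               # endpoint port
        "path": parts.path.lstrip("/"),   # bucket-name/path/to/model
    }
```

For example, `parse_model_uri("s3://dev@minio.local:80/models/resnet50")` would yield the profile `dev`, host `minio.local`, port `80`, and path `models/resnet50`.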
@damonmaria thank you for filing this issue. Can you provide a detailed repro with a minimal example so that we can add additional handling for such cases?
@jbkyang-nvi can you have a look at this?
@naor2013 do you mean you used the TensorRT backend or the TensorRT optimization option for OnnxRuntime backend? Can you share the model and perf analyzer configuration you used?
@naor2013 thank you for sharing that. Can you also share the logs from perf analyzer that show the throughput and detailed breakdown of latency?
@naor2013 thank you for sharing all the information. We will investigate this once we have the bandwidth to do so.
cc @nv-kmcgill53 can you try to build a version of 22.02 on Jetson with this branch instead of r22.02?
@ZhuYuJin we have made a note of your request for adding such a feature.
@ashrafguitoni (and other users requesting this feature) are you looking to modify only the instance group via this functionality? If so, Triton would need to re-work the model control workflow...