Iman Tabrizian
Jetson support for Model Analyzer has not been added yet. We will update this issue once support has been added.
It looks like the error is being returned from the model. It could be sensitivity to the input data. Can you try adding `input-data: zero` to the `perf_analyzer_flags` section? Are you...
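For reference, a minimal sketch of a Model Analyzer config with that flag set; the model name and repository path below are placeholders, not from this issue:

```yaml
# Minimal Model Analyzer config sketch (model name and paths are placeholders).
model_repository: /path/to/model_repository

profile_models:
  - my_model   # hypothetical model name

# Flags forwarded to perf_analyzer for every profiled model.
perf_analyzer_flags:
  input-data: zero
```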
It looks like the server is being shut down. Can you start a `tritonserver` outside Model Analyzer, run `perf_analyzer -m <model_name> --shape input_ids:359 --shape attention_mask:359 --input-data zero --measurement-mode count_windows`, and share...
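Roughly, that would look like the following; the model repository path and model name are placeholders:

```bash
# Shell 1: start Triton directly, outside of Model Analyzer (path is a placeholder).
tritonserver --model-repository=/path/to/model_repository

# Shell 2: run perf_analyzer against the standalone server (substitute your model name).
perf_analyzer -m <model_name> \
  --shape input_ids:359 \
  --shape attention_mask:359 \
  --input-data zero \
  --measurement-mode count_windows
```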
Thanks for providing further details. It looks like the onnxruntime backend is segfaulting. This issue is outside the scope of Model Analyzer. I'll be transferring this issue to the onnxruntime...
cc @pranavsharma @askhade
@aishanibhalla The error is coming from the model itself. Are you able to perform inference on the model outside Triton?
cc @szalpal @jantonguirao
Hello, this package is in the ex3ndr repositories. You can download and compile it, but I can't get the code to communicate. If you managed to make it send a Telegram...
Are the client and server running on the same machine? Are you using the same container for both the client and server, or are you using different containers?
Hi @jhm0104666, thanks for filing a detailed GitHub issue. Expecting Triton's performance when running inference over the network to match local inference is not realistic. The local inference...