Timothy Gerdes
Unfortunately, Model Analyzer does not yet work with ensembles. It is on the roadmap.
Hi @klappec-bsci, Perf Analyzer does work with ensembles. Please open a separate issue in https://github.com/triton-inference-server/server with more details (command you ran, error log, etc).
1) Ensembles are not yet supported in Model Analyzer. We are getting much closer, but it is still a few releases away.
> manifest for nvcr.io/nvidia/tritonserver:22.09-py3-sdk not found: manifest unknown: manifest...
Hi Helen, I'm sorry you are running into problems. I have never observed that error before. This is happening on a regular resnet model without any weird settings? The error...
This won't be in 22.04, but at least an early access version should be available in the next few releases
Hi @HarmitMinhas96, thanks for reporting this. I tried to reproduce, but ran into issues. I had to change the input type from TYPE_STRING to TYPE_FP32. After that, everything worked fine...
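For reference, a minimal sketch of what that input-type change looks like in the model's config.pbtxt. The input name and dims here are hypothetical; only the data_type line reflects the change described above:
```
input [
  {
    name: "INPUT0"        # hypothetical input name
    data_type: TYPE_FP32  # changed from TYPE_STRING for the reproduction attempt
    dims: [ -1 ]          # hypothetical dims
  }
]
```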
@HarmitMinhas96 it sounds like the only remaining issue is around having input types of UINT8 + STRING. This is a bug in perf_analyzer that was recently fixed and will be...
This is in the documentation link to supply custom data:
```
Note that (in the example above) the [4, 4] tensor has been flattened in a row-major format for the...
```
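As a concrete illustration (not taken verbatim from the linked docs), a minimal `--input-data` JSON file for perf_analyzer could look like the sketch below. The input name INPUT0 is hypothetical; the 16 values are a [4, 4] tensor flattened in row-major order:
```
{
  "data": [
    {
      "INPUT0": [ 0,  1,  2,  3,
                  4,  5,  6,  7,
                  8,  9, 10, 11,
                 12, 13, 14, 15]
    }
  ]
}
```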
Any CLI options you want to pass from Model Analyzer to Perf Analyzer just need to go in the perf_analyzer_flags section:
```
model_repository: /workspace/models
profile_models: prodNLU
perf_analyzer_flags:
  shape:
  - input:1,26...
```
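Each key under perf_analyzer_flags is forwarded to Perf Analyzer as the matching CLI flag. As a rough sketch (the actual shape value is truncated above, so a placeholder is used here), the config corresponds to an invocation along the lines of:
```
perf_analyzer -m prodNLU --shape input:<dims>
```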
I have a PR up that tweaks the perf_analyzer_flags section a bit and fixes all broken links.
> Is it possible to monitor metrics of live models hosted on other...