Ryan McCormick
CC @tanmayv25: this idea seems reasonable to me, what do you think? The PR might need a couple of tweaks, and we'll also need a signed CLA before it can go in.
@huangyz0918 is this the full log/output? Can you share the full exact `docker run ...` command and the full `tritonserver --model-repository ...` command you ran to reproduce the issue? CC...
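For reference, a minimal reproduction usually looks something like the sketch below. The image tag, ports, and model repository path here are placeholders, not the reporter's actual setup:

```shell
# Hypothetical reproduction sketch -- substitute your actual image tag
# and model repository path before running.
docker run --rm --gpus=all \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /path/to/model_repository:/models \
  nvcr.io/nvidia/tritonserver:22.04-py3 \
  tritonserver --model-repository=/models --log-verbose=1
```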
@Tabrizian filed DLIS-3765
Hi @flawaetz , I'm unable to reproduce this with our example [densenet_onnx](https://github.com/triton-inference-server/server/tree/main/docs/examples/model_repository/densenet_onnx) model or [simple](https://github.com/triton-inference-server/server/tree/main/docs/examples/model_repository/simple) tf model in 22.04 server/sdk containers. So a few follow-up questions: 1. Can you add...
> I also have this error while using r22.05 with python backend decoupled model (one request, multi response), if I turn on response_cache, error: response output count mismatch, I will...
> Just an update that I've tried to reproduce this issue using perf_analyzer with no success. I've also tried capturing and pickling the queries that generate response output count mismatch...
@Tabrizian @GuanLuo any thoughts on this? Is it possible to set some kind of dependency through any built-in means before warmup? Maybe in BLS/python model initialization, do a one-time wait...
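One pattern that can approximate this today is a blocking wait inside the Python model's `initialize()`: assuming Triton only routes requests (warmup included) to a model once loading finishes, polling the dependency there acts as a crude ordering barrier. A sketch, where the `check_ready` callable and the timeout values are hypothetical placeholders rather than an existing API:

```python
import time


def wait_for_dependency(check_ready, timeout_s=60.0, interval_s=1.0):
    """Poll `check_ready` until it returns True or the timeout elapses.

    Hypothetical helper: `check_ready` could, for example, ping a
    downstream model's health endpoint before this model accepts
    warmup traffic.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check_ready():
            return True
        time.sleep(interval_s)
    return False


class TritonPythonModel:
    def initialize(self, args):
        # One-time blocking wait: Triton should not send this model any
        # requests until initialize() returns, so failing loudly here
        # prevents warmup from running against a missing dependency.
        # (lambda: True is a stand-in for a real readiness check.)
        if not wait_for_dependency(lambda: True, timeout_s=5.0):
            raise RuntimeError("dependency never became ready")
```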
@mc-nv any guidance on this?
Hi @bro-adm, I'll answer the questions in a few separate comments to keep things easier to read. **1. Trace Rate** > In the docs you set the trace-rate to...
**2. Trace/log frequency** > We have encountered a mysterious thing (triton 21.12-py3/21.09-py3), the logs of this trace feature aren't written to the trace json file until the tritonserver is exited...
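On the flushing question: if I recall correctly, traces are buffered in memory and only written out at server exit unless a trace log frequency is configured; in recent releases, setting `--trace-log-frequency` to a nonzero value should make Triton flush every N collected traces to numbered output files. Something like the following, where the rate/frequency values are purely illustrative:

```shell
# Illustrative flags -- tune rate and frequency for your workload.
tritonserver --model-repository=/models \
  --trace-file=/tmp/trace.json \
  --trace-level=TIMESTAMPS \
  --trace-rate=100 \
  --trace-log-frequency=50
# With --trace-log-frequency=50, every 50 collected traces should be
# flushed to successive files (trace.json.0, trace.json.1, ...) instead
# of waiting for server shutdown.
```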