Katherine Yang

100 comments by Katherine Yang

> So on the balance of maintainability vs correctness, I don't think we are compromising too much here. Just my 2 cents. Hi @chajath, since the newer versions still have this...

Looks like there's a merge conflict somehow. Can you run common/tools/format.py on this one?

Hi @AkSino, can you also share your input model config? cc: @tanmayv25, since I could not find how to have multiple classifications in the [model configuration documentation](https://github.com/triton-inference-server/server/blob/main/docs/user_guide/model_configuration.md)
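If this is about the classification extension, here is a rough sketch of how a client can request classification results for more than one output; the model name, tensor names, shapes, and `class_count` value are all hypothetical, not taken from this thread:

```python
# Hedged sketch: asking Triton for top-k classification results on two
# outputs via the classification extension. All names are placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

inp = httpclient.InferInput("INPUT0", [1, 3, 224, 224], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 3, 224, 224).astype(np.float32))

# class_count > 0 asks the server to return the top-k classes for that
# output (as "score:index:label" strings) instead of the raw tensor.
outputs = [
    httpclient.InferRequestedOutput("OUTPUT0", class_count=3),
    httpclient.InferRequestedOutput("OUTPUT1", class_count=3),
]

result = client.infer("my_model", inputs=[inp], outputs=outputs)
print(result.as_numpy("OUTPUT0"))
print(result.as_numpy("OUTPUT1"))
```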

Also, I noticed you are using 21.10, which is almost a year old. Our newest release is 22.08. Can you try that?

@zbh0323 note that the `--load-models=*` feature was added a few months ago ([PR here](https://github.com/triton-inference-server/server/pull/4256/)). If you would like to keep using 21.10, you will need to specify all the models...
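For anyone on a release without the wildcard: if the server is started in explicit model-control mode (e.g. with `--model-control-mode=explicit`), models can also be loaded by name from the client. A minimal sketch, with hypothetical model names:

```python
# Hedged sketch: loading models one by one when the server runs in
# explicit model-control mode. "model_a"/"model_b" are placeholders.
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
for name in ["model_a", "model_b"]:
    client.load_model(name)
    assert client.is_model_ready(name)
```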

Hello @vkotturu, where are you adding the logging? Can you share the model.py and client.py you are using?
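For reference, a minimal sketch of where logging usually goes in a Python-backend model.py; the tensor names `INPUT0`/`OUTPUT0` are hypothetical, and plain `print(..., flush=True)` shows up in the server log:

```python
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # Runs once at model load; args carries the model name and config.
        print(f"initialize: {args['model_name']}", flush=True)

    def execute(self, requests):
        print(f"execute: {len(requests)} request(s)", flush=True)
        responses = []
        for request in requests:
            # Echo INPUT0 back as OUTPUT0; a real model does its work here.
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            out0 = pb_utils.Tensor("OUTPUT0", in0.as_numpy())
            responses.append(pb_utils.InferenceResponse(output_tensors=[out0]))
        return responses
```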

Hi, please read the [quickstart guide](https://github.com/triton-inference-server/server/blob/main/docs/getting_started/quickstart.md) for details on getting started with Triton. I believe there are users who have used YOLO models with Triton. We require the data to be...

See https://developer.nvidia.com/deepstream-sdk to learn more about DeepStream.

Hi, yes, this is possible by doing either of the following: 1. Use the client to post-process the output of the first model and then send the processed output to the input...
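A rough sketch of option 1; the model names, tensor names, shapes, and the normalization step are all hypothetical stand-ins for your actual pipeline:

```python
# Hedged sketch: chain two models from the client, post-processing the
# first model's output before feeding it to the second.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# First model
inp = httpclient.InferInput("INPUT0", [1, 3, 224, 224], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 3, 224, 224).astype(np.float32))
result1 = client.infer("model_a", inputs=[inp])
intermediate = result1.as_numpy("OUTPUT0")

# Client-side post-processing between the two models
processed = intermediate / np.linalg.norm(intermediate)

# Second model consumes the processed output
inp2 = httpclient.InferInput("INPUT0", list(processed.shape), "FP32")
inp2.set_data_from_numpy(processed.astype(np.float32))
result2 = client.infer("model_b", inputs=[inp2])
final = result2.as_numpy("OUTPUT0")
```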

Hello, what problem are you facing when doing this? We haven't tested this feature yet, but there are alternatives: 1. You can infer using multiple...