Jacky
It is hard for me to tell whether the outcome is correct, but I think you can take a look at [`ReadDataFromJson()`](https://github.com/triton-inference-server/server/blob/main/src/http_server.cc#L519) which converts the input `"data" :...
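For context, `ReadDataFromJson()` parses the `"data"` arrays in the HTTP inference request body. A hypothetical request payload in the KServe-v2 style Triton accepts might look like the following (the input name, shape, and values are made up for illustration):

```json
{
  "inputs": [
    {
      "name": "INPUT0",
      "shape": [1, 4],
      "datatype": "FP32",
      "data": [1.0, 2.0, 3.0, 4.0]
    }
  ]
}
```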
Hi @vkadlec, the hash for the eigen-3.4.zip file does not match the expected one. You might want to make sure the downloaded file is the correct one. Alternatively, you can also...
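One quick way to check the downloaded file is to compute its checksum locally and compare it against the hash the build expects. A minimal sketch (the expected-hash string is a placeholder, not the real Eigen 3.4 hash):

```python
import hashlib

def sha256_of(path):
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (placeholder hash; substitute the one reported in the build error):
# assert sha256_of("eigen-3.4.zip") == "<expected sha256 from the build log>"
```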
Yes, `nvcr.io/nvidia/tritonserver:23.10-py3`, for example, is the right container.
Hi @zchenyu, can you try replicating the issue on a pre-built Triton container? We may not be able to provide support for a custom-built Triton server. You...
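To reproduce on a pre-built container, you can pull an official image and point it at your model repository. A sketch using the tag mentioned above (the model-repository path is a placeholder):

```shell
# Pull an official pre-built Triton container
docker pull nvcr.io/nvidia/tritonserver:23.10-py3

# Launch it against a local model repository (placeholder path)
docker run --rm --gpus=all \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /path/to/model_repository:/models \
  nvcr.io/nvidia/tritonserver:23.10-py3 \
  tritonserver --model-repository=/models
```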
Thanks for more information on reproduction. I have filed a ticket for us to investigate further.
Hi @jsoto-gladia, can you provide a minimal reproduction with the client script and model? We would like to see the details of how the issue is encountered, so we can...
Hi, we have enhanced the logic to wait for all ongoing HTTP connections to complete before starting to unload models during shutdown, and new connections will be refused after...
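Conceptually, the shutdown behavior described above is a drain pattern: in-flight requests are allowed to finish while new ones arriving after shutdown begins are refused. A minimal sketch of that pattern (this is an illustration, not Triton's actual implementation):

```python
class ConnectionTracker:
    """Toy drain-on-shutdown pattern: count in-flight requests,
    refuse new ones once shutdown has begun."""

    def __init__(self):
        self.active = 0
        self.draining = False

    def try_accept(self):
        """Accept a new request unless the server is shutting down."""
        if self.draining:
            return False  # refuse new connections during shutdown
        self.active += 1
        return True

    def finish(self):
        """Mark one in-flight request as complete."""
        self.active -= 1

    def begin_shutdown(self):
        """Stop accepting new requests."""
        self.draining = True

    def drained(self):
        """True once every in-flight request has completed."""
        return self.draining and self.active == 0
```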
Thanks for the enhancement suggestion. I have filed a ticket for us to investigate further. DLIS-5052
I wonder if the memory usage would come down if the model is unloaded (e.g., via the model unload API). cc @tanmayv25 to confirm whether the memory usage is expected.
Hi @iliakur, you can set the log format to ISO8601 when launching the server. https://github.com/triton-inference-server/server/blob/main/src/command_line_parser.cc#L598
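Concretely, that means passing the log-format option at launch, which is parsed in the `command_line_parser.cc` code linked above (the model-repository path below is a placeholder):

```shell
tritonserver --model-repository=/models --log-format=ISO8601
```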