Kris Hung
Hi @jesuino, I went through all the steps described in the README ["Serve a Model in 3 Easy Steps" section](https://github.com/triton-inference-server/server#serve-a-model-in-3-easy-steps) and was not able to reproduce this issue. From the...
Hi @rnwang04, what model are you using and how do you send the request? Are you using our own client with the same command `/workspace/install/bin/image_client -m densenet_onnx -c 3 -s...
Thanks for confirming, @rnwang04. I followed the exact same commands you shared but still could not reproduce this issue. Filed a ticket for the team to investigate this further.
Hi @vkotturu, the logger support is scheduled for the 22.09 release, which is coming soon. To use the logger right now, you need to compile the Python backend from source.
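As a hedged sketch of what using the Python backend logger could look like (the `pb_utils.Logger` interface matches the Python backend utilities shipped from 22.09 onward; the model class and messages below are hypothetical, and this code only runs inside Triton's Python backend, not as a standalone script):

```python
# Sketch: logging from a Python backend model via pb_utils.Logger.
# Only available inside Triton's Python backend runtime (22.09+,
# or a backend compiled from source on earlier releases).
import triton_python_backend_utils as pb_utils

class TritonPythonModel:
    def initialize(self, args):
        # The logger is exposed as a class on the utilities module.
        pb_utils.Logger.log_info("Model initialized")

    def execute(self, requests):
        responses = []
        for request in requests:
            # Verbose messages only appear when verbose logging is enabled
            # on the server (e.g. tritonserver --log-verbose=1).
            pb_utils.Logger.log_verbose("Handling a request")
            # ... build and append an InferenceResponse per request ...
        return responses
```

Log levels such as `log_info`, `log_warn`, `log_error`, and `log_verbose` route through Triton's server-side logger, so the messages obey the server's logging settings rather than printing directly to stdout.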
Hi @purvang3, does the same model work (using the exact same model configuration file) if you just use the TF model (without TF-TRT optimization)? Additionally, there is a specific version...
Closing issue due to lack of activity. Please re-open the issue if you would like to follow up with this.
Yes, this should be fixed in our 22.09 release. @ethanyys The reason it's not working is likely related to the format of the configuration. For example, the...
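For reference, a hedged illustration of the expected `config.pbtxt` layout (the field names follow Triton's standard model-configuration schema, but the model name, tensor names, and shapes below are hypothetical and should be replaced with your model's actual values):

```
name: "densenet_onnx"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "data_0"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "fc6_1"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

A malformed field (for example, a misspelled field name or a `dims` entry that doesn't match the model) will cause Triton to reject the model at load time, so the server log is usually the first place to check.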
Hi @alxmamaev, I'm not really an expert here, but I think this is a stub shared library, which can be linked against, with certain interfaces provided...
Hi @frankxyy, does the ensemble model containing a BLS model that you are running work with the original `libtritonserver.so`? Could you also share the version of Triton you're using,...
Hi @frankxyy, since the issue only occurs when using the modified `libtritonserver.so`, I think it would make more sense to investigate the parts with new changes. One thing on the...