CHIEH

7 comments by CHIEH

Could you provide a very simple example? I have no idea how to handle the callback in this part... Thank you!!

TensorRT does not support the UINT8 data type, so this error means your model already uses `uint8` somewhere. Check here: https://docs.nvidia.com/deeplearning/sdk/tensorrt-api/python_api/infer/FoundationalTypes/DataType.html
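For context, a minimal sketch of the usual workaround, assuming the model's inputs arrive as `uint8` (e.g. raw image bytes): cast them to a dtype TensorRT accepts, such as `float32`, on the host before the tensor reaches the engine. The shapes and normalization below are illustrative assumptions, not taken from the original issue.

```python
import numpy as np

# Hypothetical uint8 image batch (e.g. decoded JPEG pixels), NCHW layout.
batch = np.random.randint(0, 256, size=(1, 3, 224, 224), dtype=np.uint8)

# Cast to float32 (and optionally normalize to [0, 1]) before feeding the
# network; TensorRT versions without UINT8 support reject uint8 inputs.
trt_input = batch.astype(np.float32) / 255.0

print(trt_input.dtype)  # float32
```

Equivalently, the cast can be baked into the model itself before export so the serialized graph never contains a `uint8` tensor.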

I have tried the latest GPU driver version (515) with Triton version `nvcr.io/nvidia/tritonserver:22.07-py3`, but it still doesn't seem to work. ![image](https://user-images.githubusercontent.com/32332200/185358133-5561b854-c11b-4e5a-bb1b-4856ebe41344.png) I am not sure whether it will show the configuration in this...
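For reference, a minimal sketch of how such a setup is typically launched and checked; the model-repository path and port mappings below are hypothetical placeholders, not taken from the original report.

```shell
# Launch the Triton 22.07 container with GPU access (replace
# /path/to/model_repository with your own model repository).
docker run --rm --gpus all \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /path/to/model_repository:/models \
  nvcr.io/nvidia/tritonserver:22.07-py3 \
  tritonserver --model-repository=/models

# Confirm which host driver the container will see:
nvidia-smi --query-gpu=driver_version --format=csv,noheader
```

If the driver reported by `nvidia-smi` is older than what the container's CUDA build expects, the server can fail at startup even though the image itself is fine.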

Got it!! Thanks for your reply!! At the end of the day, I switched to [streamlit-elements](https://github.com/okld/streamlit-elements) to meet my requirements. However, streamlit-option-menu can be applied easily and rapidly...

Sure, I will try it. Thank you. It is very weird that after I rebooted the computer, it works well now. :O I will close the issue for now; if next...

Thank you for your information! Here is the log from the Triton v23.08 container. It seems to work fine. ![2024-02-27_14-12](https://github.com/triton-inference-server/server/assets/32332200/aedc6f52-e3d0-408c-986c-7fab4a00450f)

Hi @oandreeva-nv, after I deployed a model, I encountered this error, and then the container died. ![2024-02-29_08-54](https://github.com/triton-inference-server/server/assets/32332200/765f6c4d-5aab-4f0c-8830-68c7121fa48b) Do you have any idea about it? Thank you!