ONNX CUDA session not working in Python backend
Bug Description
The ONNX CUDA session is not working in the Python backend. When attempting to run inference using the ONNX model with CUDAExecutionProvider, the session fails to initialize or execute properly.
Triton Information
Triton version: 22.07
Using Triton container: yes
To Reproduce
Steps to reproduce the behavior: https://github.com/jsoto-gladia/onnx-in-python-backend
When I use CPUExecutionProvider, everything works fine. When I use CUDAExecutionProvider, I get the following error lines (a sketch of the session setup follows the log):
I1018 20:11:24.764594 1 python_be.cc:2248] TRITONBACKEND_ModelInstanceFinalize: delete instance state
I1018 20:11:24.773726 1 python_be.cc:2087] TRITONBACKEND_ModelFinalize: delete model state
E1018 20:11:24.773855 1 model_lifecycle.cc:626] failed to load 'onnx_in_python_backend' version 1: Internal: Stub process 'onnx_in_python_backend_0' is not healthy.
I1018 20:11:24.773900 1 model_lifecycle.cc:755] failed to load 'onnx_in_python_backend'
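For context, the session setup follows the standard Python-backend pattern; a minimal sketch is below. The model path and the tensor names (INPUT0, OUTPUT0, input) are placeholders for illustration, not the exact names from the linked repo.

import numpy as np
import onnxruntime as ort
import triton_python_backend_utils as pb_utils

class TritonPythonModel:
    def initialize(self, args):
        # Placeholder path; the real repro loads its own .onnx file.
        # Swapping CUDAExecutionProvider for CPUExecutionProvider here
        # is the only change between the working and the failing case.
        self.session = ort.InferenceSession(
            "/models/onnx_in_python_backend/1/model.onnx",
            providers=["CUDAExecutionProvider"],
        )

    def execute(self, requests):
        responses = []
        for request in requests:
            # "INPUT0"/"OUTPUT0" are assumed Triton tensor names;
            # "input" is an assumed ONNX graph input name.
            input_tensor = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            outputs = self.session.run(None, {"input": input_tensor.as_numpy()})
            out_tensor = pb_utils.Tensor("OUTPUT0", outputs[0].astype(np.float32))
            responses.append(
                pb_utils.InferenceResponse(output_tensors=[out_tensor])
            )
        return responses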
Expected behavior
The ONNX model should initialize and execute properly using the CUDAExecutionProvider, leveraging GPU acceleration for inference.
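One quick check that may help triage, run from the same Python environment the stub process uses: confirm that the installed onnxruntime build actually exposes CUDA. This is a generic onnxruntime diagnostic, not specific to the repro.

import onnxruntime as ort

# Lists the providers this onnxruntime build can use. If
# "CUDAExecutionProvider" is absent, the CPU-only package is installed
# or the CUDA/cuDNN libraries are not visible to the stub process.
print(ort.get_available_providers())
print(ort.get_device())  # "GPU" for a CUDA-enabled build, "CPU" otherwise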