python_backend
Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python.
Triton server logging functionality can now be invoked from Python models. Here are a few example formats:

```python
logger = pb_utils.Logger
logger.log("Specific Msg!", logger.INFO)
logger.log_info("Info Msg!")
logger.log_warn("Warning Msg!")
logger.log_error("Error Msg!")
...
```
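For context, these logger calls are normally made from inside a model's `TritonPythonModel` class. The sketch below is a minimal illustration, assuming a standard `execute` implementation; the request handling shown (empty output tensors, placeholder messages) is illustrative and not part of the original example.

```python
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    """Minimal model that logs at several levels while handling requests."""

    def execute(self, requests):
        logger = pb_utils.Logger
        responses = []
        for request in requests:
            logger.log_info("Handling one inference request")
            # Build real output tensors here; empty list is a placeholder.
            responses.append(pb_utils.InferenceResponse(output_tensors=[]))
        logger.log("Finished a batch", logger.INFO)
        return responses
```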
core: https://github.com/triton-inference-server/core/pull/109

Example: if a model has the configuration below and the server is built with ENABLE_GPU=OFF:

```
name: "add_sub"
backend: "python"
input [ ... ]
output [ ... ]
# not providing instance group...
```
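For comparison, a model can also pin its instances explicitly instead of relying on the default behavior. The snippet below uses the standard Triton `instance_group` syntax; the instance count is an illustrative value, not taken from the original example.

```
instance_group [
  {
    count: 1
    kind: KIND_CPU
  }
]
```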
When using the Python backend, if shm-default-byte-size or shm-growth-byte-size is set larger than 2 * 1024 * 1024 * 1024 bytes (2 GiB), tritonserver will fail to start. Start command...
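For illustration only (not the original reproduction command), these values are normally passed through the Python backend's `--backend-config` option when launching the server; the repository path and byte sizes below are placeholders.

```
tritonserver --model-repository=/models \
    --backend-config=python,shm-default-byte-size=1073741824 \
    --backend-config=python,shm-growth-byte-size=67108864
```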
This commit introduces a debug mode for the Triton Python backend. When the environment variable `TRITON_DEBUG` is set to "1", the backend will import the `debugpy` module and start listening...
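The attach logic for such a debug mode typically looks like the sketch below. The port number, host binding, and the `wait_for_client` call are illustrative assumptions, not necessarily what the commit itself does.

```python
import os


def _maybe_start_debugger():
    """Start a debugpy listener when TRITON_DEBUG=1 (sketch only)."""
    if os.environ.get("TRITON_DEBUG") == "1":
        import debugpy  # imported lazily so normal runs don't require it

        # 5678 is a placeholder port; the real backend may choose differently.
        debugpy.listen(("0.0.0.0", 5678))
        # Optionally block until a debugger attaches before continuing.
        debugpy.wait_for_client()
```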
Add introspection for debugging shared memory leaks