Triton Rust Crate as In-Process Inference Engine
Is your feature request related to a problem? Please describe. A Rust API for Triton Server, so that Triton can be integrated in-process with a Rust server. Rust is now a widely recommended language for developing high-performance, close-to-the-metal native programs with compile-time safety guarantees.
Describe the solution you'd like Generate Rust bindings with rust-bindgen from tritonserver.h and use the in-process library from our Rust server. I'm working on this in my free time and would be happy to contribute; however, if you have official plans to support a Rust crate, that would make my life easier.
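For illustration, the rust-bindgen approach described above is typically wired up in a `build.rs` script. The sketch below follows the standard pattern from the bindgen user guide; the header path, include directory, and library name (`tritonserver`) are assumptions about a local Triton installation, not confirmed details from this thread.

```rust
// build.rs — a minimal sketch, assuming Triton's C API header and shared
// library are installed under /opt/tritonserver (adjust paths as needed).
use std::env;
use std::path::PathBuf;

fn main() {
    // Tell cargo to link against the in-process Triton server library.
    println!("cargo:rustc-link-search=native=/opt/tritonserver/lib");
    println!("cargo:rustc-link-lib=dylib=tritonserver");

    // Generate raw FFI bindings from tritonserver.h at build time.
    let bindings = bindgen::Builder::default()
        .header("/opt/tritonserver/include/triton/core/tritonserver.h")
        .generate()
        .expect("failed to generate bindings for tritonserver.h");

    // Write the bindings into OUT_DIR so the crate can include them
    // via include!(concat!(env!("OUT_DIR"), "/bindings.rs")).
    let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());
    bindings
        .write_to_file(out_path.join("bindings.rs"))
        .expect("failed to write bindings.rs");
}
```

The generated bindings are raw `unsafe` FFI; a practical crate would layer a safe Rust wrapper over them (RAII handles around `TRITONSERVER_Server` and friends), which is roughly what the contribution discussed here would entail.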
Describe alternatives you've considered Integrating directly with the tensorflow/rust and tch-rs crates and bypassing Triton. This is less desirable, since we would have to reimplement much of the inference-server functionality that Triton provides out of the box.
Additional context None.
CC @ryanolson
Hi @asamadiya, while we don't have an official example at this time, there are some open-source projects that aim to do this. One such example, which provides bindings to the TRITONBACKEND APIs, is https://github.com/xtuc/triton-rs. The author @xtuc may be able to chime in with more thoughts/details.
> I'm working on this in my free time and would be happy to contribute
We are always happy to see contributions! In general, we'd be looking for some testing and maintenance at a minimum, to help keep the contribution up to date as much as possible.