Make TorchServe multi framework
We've been assuming so far that TorchServe can only work with PyTorch eager mode or TorchScript models, but our current handler is general enough to make it possible to support ONNX models.
The idea is a hack one of our partners mentioned that involves:
- Adding `onnx` as a dependency in the Dockerfile or requirements.txt
- Loading the `onnx` model in the `initialize` handler
- Making an inference in the `inference` handler
It may not necessarily be the best way to serve ONNX models, but it lets people avoid having to use a different serving infrastructure for each type of model.
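A minimal sketch of what such a handler could look like, assuming `onnxruntime` as the inference backend; the file name `model.onnx`, the input shape, and the request decoding are all assumptions for illustration:

```python
# onnx_handler.py -- minimal sketch of an ONNX custom handler for TorchServe.
# Assumes onnxruntime is installed and the model archive contains model.onnx.
import os

import numpy as np
import onnxruntime as ort
from ts.torch_handler.base_handler import BaseHandler


class ONNXHandler(BaseHandler):
    def initialize(self, context):
        # TorchServe unpacks the model archive into model_dir
        model_dir = context.system_properties.get("model_dir")
        self.session = ort.InferenceSession(os.path.join(model_dir, "model.onnx"))
        self.input_name = self.session.get_inputs()[0].name
        self.initialized = True

    def preprocess(self, data):
        # Assumes the request body is a raw float32 tensor; a real handler
        # would decode images, tokenize text, etc.
        body = data[0].get("data") or data[0].get("body")
        return np.frombuffer(body, dtype=np.float32).reshape(1, 3, 224, 224)

    def inference(self, data):
        # onnxruntime returns a list of output arrays
        return self.session.run(None, {self.input_name: data})

    def postprocess(self, inference_output):
        # TorchServe expects one response entry per request in the batch
        return [inference_output[0].tolist()]
```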
This is a good level 3-4 bootcamp task - the goal would be to:
- Get a PyTorch model like ResNet-18
- Export it using the ONNX exporter (see the sketch below)
- Run an inference with it in an ONNX handler and submit it as an example in this repo
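A sketch of the export step, assuming torchvision is available (the legacy `pretrained` flag is used here; newer torchvision versions take a `weights` argument), with `resnet18.onnx` as an arbitrary output name:

```python
# Export torchvision's ResNet-18 to ONNX.
import torch
import torchvision

model = torchvision.models.resnet18(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)  # trace input matching ImageNet shape

torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={"input": {0: "batch"}},  # allow a variable batch dimension
)
```

The resulting `resnet18.onnx` can then be packaged with `torch-model-archiver` (passed as `--serialized-file`, together with the custom handler via `--handler`) and served like any other model.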
Hi,
Why was this closed as completed, i.e. is there a doc/example for ONNX?
Hi @ozancaglayan not quite, we're now tracking this item in #1631 - @HamidShojanazeri has a promising proposal there to package configurations using the torch-model-archiver so please feel free to put any feedback on that issue. Thanks!
@msaroufim we are also working on serving yolov7 using either ONNX or TensorRT through TorchServe. Are there any clear best practices for that?
Repo: https://github.com/WongKinYiu/yolov7/tree/main/deploy/triton-inference-server
cc @saurav-cashify @abhinav-cashify
@msaroufim I understand that it is possible to use TorchServe with ONNX and TensorRT. Is it encouraged or discouraged?
Should one expect better support moving forward, or will TorchServe remain focused only on native PyTorch and TorchScript model serving, making a platform like Triton a better choice for deploying different model flavors?
Hi @amit-cashify, we want to encourage more use of ONNX and TensorRT, and I'm personally working on making this as easy to use as possible. It took a while because we had a couple of proposals floating around in #1631, but I think I have a better one. I'll experiment with it and run some benchmarks starting next week and will keep you posted on progress.
Hello @msaroufim
Thanks for your initiative! Would love to see TorchServe serving ONNX "out of the box". Any feedback on these benchmarks?
This was just merged; it will be featured in the next release today.