
Make TorchServe multi-framework

msaroufim opened this issue 4 years ago · 5 comments

We've been assuming so far that TorchServe only works with PyTorch eager mode or TorchScript models, but our current handler is general enough to make it possible to support ONNX models.

The idea is a workaround one of our partners mentioned that involves:

  1. Adding onnx as a dependency in the Dockerfile or requirements.txt
  2. Loading the ONNX model in the handler's initialize method
  3. Running inference in the handler's inference method

It may not be the best way to serve ONNX models, but it lets people avoid standing up a different serving infrastructure for each type of model.
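For concreteness, here's a minimal sketch of such a handler, assuming onnxruntime is installed and that the model archive contains a file named model.onnx (the class name OnnxHandler and the file name are illustrative, not an official TorchServe API):

```python
import os

import numpy as np
import onnxruntime as ort
from ts.torch_handler.base_handler import BaseHandler


class OnnxHandler(BaseHandler):
    """Illustrative handler that runs an ONNX model via onnxruntime."""

    def initialize(self, context):
        # TorchServe extracts the .mar archive into model_dir
        model_dir = context.system_properties.get("model_dir")
        self.session = ort.InferenceSession(os.path.join(model_dir, "model.onnx"))
        self.input_name = self.session.get_inputs()[0].name
        self.initialized = True

    def inference(self, data, *args, **kwargs):
        # Assumes preprocess() produced a CPU batch convertible to a float32
        # ndarray; return the first output so the default postprocess
        # (which calls .tolist()) still works
        batch = np.asarray(data, dtype=np.float32)
        return self.session.run(None, {self.input_name: batch})[0]
```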

This is a good level 3-4 bootcamp task - the goal would be to:

  1. Get a PyTorch model like ResNet-18
  2. Export it using the ONNX exporter (see the sketch after this list)
  3. Run inference with it in an ONNX handler and submit it as an example in this repo
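A minimal export sketch for steps 1-2, assuming torchvision is available (the output file name resnet18.onnx and the fixed 224x224 input size are illustrative choices):

```python
import torch
import torchvision

# Load a pretrained ResNet-18 and switch to eval mode before export
model = torchvision.models.resnet18(pretrained=True).eval()

# ResNet expects NCHW float input; 224x224 is the standard ImageNet size
dummy_input = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy_input,
    "resnet18.onnx",
    input_names=["input"],
    output_names=["output"],
    # Allow a variable batch dimension at serving time
    dynamic_axes={"input": {0: "batch"}, "output": {0: "batch"}},
)
```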

msaroufim avatar Aug 19 '21 21:08 msaroufim

Hi,

Why was this marked as completed, i.e. is there a doc/example for ONNX?

ozancaglayan avatar Aug 11 '22 12:08 ozancaglayan

Hi @ozancaglayan not quite; we're now tracking this item in #1631. @HamidShojanazeri has a promising proposal there to package configurations using the torch-model-archiver, so please feel free to leave any feedback on that issue. Thanks!
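As a rough idea of where this could land, packaging an ONNX file with today's torch-model-archiver already works by treating it as an opaque serialized file; the file names below come from the sketches above and are illustrative, and #1631 may change the final shape:

```bash
# Hypothetical packaging step: resnet18.onnx and onnx_handler.py are the
# artifacts from the sketches above; model_store is the directory TorchServe
# is pointed at via --model-store.
torch-model-archiver \
  --model-name resnet18_onnx \
  --version 1.0 \
  --serialized-file resnet18.onnx \
  --handler onnx_handler.py \
  --export-path model_store
```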

msaroufim avatar Aug 11 '22 21:08 msaroufim

@msaroufim we are also working on serving YOLOv7 using either ONNX or TensorRT through TorchServe. Are there any clear best practices for that?

Repo: https://github.com/WongKinYiu/yolov7/tree/main/deploy/triton-inference-server

cc @saurav-cashify @abhinav-cashify

amit-cashify avatar Aug 25 '22 05:08 amit-cashify

@msaroufim I understand that it is possible to use TorchServe with ONNX and TensorRT. Is it encouraged or discouraged?

Should one expect better support moving forward, or will TorchServe remain focused only on native PyTorch and TorchScript model serving, making a platform like Triton the better choice for deploying other model flavors?

amit-cashify avatar Aug 26 '22 08:08 amit-cashify

Hi @amit-cashify we want to encourage more use of ONNX and TensorRT, and I'm personally working on making this as easy to use as possible. It took a while because we had a couple of proposals floating around in #1631, but I think I have a better one. I'll experiment with it and run some benchmarks starting next week, and I'll keep you posted on progress.

msaroufim avatar Aug 26 '22 18:08 msaroufim

Hello @msaroufim

Thanks for your initiative! Would love to see TorchServe serve ONNX out of the box. Any feedback on those benchmarks?

joaquincabezas avatar Nov 14 '22 13:11 joaquincabezas

This was just merged and will be featured in the next release, shipping today.

msaroufim avatar Nov 14 '22 15:11 msaroufim