onnxruntime_backend

How to run inference with a model (ONNX) converted by MMDeploy?

Open Monalsingh opened this issue 3 years ago • 0 comments

I am trying to use MMPose on the NVIDIA Triton Inference Server, but Triton does not support native PyTorch models; it supports TorchScript, ONNX, and a few other formats. So I have converted the MMPose MobileNetV2 model to ONNX using MMDeploy.
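For reference, this is roughly how I understand the Triton side is supposed to look: the exported ONNX file goes into a model repository served by the onnxruntime backend. The directory name, the file name `end2end.onnx`, and the input/output tensor names and shapes below are only my assumptions about what MMDeploy produces; they would need to be checked against the actual export (e.g. by opening the ONNX file in Netron).

```
model_repository/
└── mmpose_mobilenetv2/          # model name used by the client (my choice)
    ├── config.pbtxt
    └── 1/
        └── end2end.onnx         # file produced by MMDeploy (name assumed)
```

with a `config.pbtxt` along these lines:

```
name: "mmpose_mobilenetv2"
backend: "onnxruntime"
max_batch_size: 0
input [
  {
    name: "input"                # assumed; must match the ONNX graph input name
    data_type: TYPE_FP32
    dims: [ 1, 3, 256, 192 ]     # assumed MMPose top-down input size
  }
]
output [
  {
    name: "output"               # assumed; raw heatmaps from the exported model
    data_type: TYPE_FP32
    dims: [ 1, 17, 64, 48 ]      # assumed 17 COCO keypoints, 64x48 heatmaps
  }
]
```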

My questions are:

  1. How to use the converted (ONNX) model in the MMpose framework?

Triton uses its own API to run inference on the model. Example:

> triton_client.infer(model_name,model_version=model_version,
> inputs=input, outputs=output)

MMDeploy uses its own API to run inference on the model. Example:

> from mmdeploy_python import PoseDetector
> detector = PoseDetector(
> model_path=args.model_path, device_name=args.device_name, device_id=0)
  2. How am I supposed to load and run the model the Triton way, rather than through MMDeploy's PoseDetector? (See the client sketch after this list for what I am attempting.)
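
To make question 2 concrete, below is the kind of client code I am trying to write in place of PoseDetector. This is only a sketch of my understanding of the tritonclient HTTP API; the model name, tensor names, input size, and output layout are the same assumptions as in the config above, the image path is hypothetical, and the keypoint decoding is just a naive argmax (PoseDetector would also map coordinates back to the original image and do sub-pixel refinement, which I would still have to replicate myself).

```python
import numpy as np
import cv2
import tritonclient.http as httpclient

# Names and shapes below are my assumptions about the MMDeploy export,
# not something confirmed by the MMDeploy or Triton docs.
MODEL_NAME = "mmpose_mobilenetv2"
INPUT_NAME, OUTPUT_NAME = "input", "output"
H, W = 256, 192  # typical MMPose top-down input size

client = httpclient.InferenceServerClient(url="localhost:8000")

# Preprocess: resize, normalize with ImageNet stats, HWC -> NCHW float32.
img = cv2.imread("person_crop.jpg")  # hypothetical cropped-person image
img = cv2.resize(img, (W, H))
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32)
img = (img / 255.0 - np.array([0.485, 0.456, 0.406])) / np.array([0.229, 0.224, 0.225])
blob = img.transpose(2, 0, 1)[None].astype(np.float32)  # (1, 3, H, W)

inp = httpclient.InferInput(INPUT_NAME, list(blob.shape), "FP32")
inp.set_data_from_numpy(blob)
out = httpclient.InferRequestedOutput(OUTPUT_NAME)

result = client.infer(model_name=MODEL_NAME, inputs=[inp], outputs=[out])
heatmaps = result.as_numpy(OUTPUT_NAME)  # e.g. (1, 17, 64, 48)

# Crude decoding: per-keypoint argmax, scaled back to the network input size.
num_kpts, hm_h, hm_w = heatmaps.shape[1], heatmaps.shape[2], heatmaps.shape[3]
for k in range(num_kpts):
    idx = heatmaps[0, k].argmax()
    y, x = divmod(idx, hm_w)
    print(k, x * W / hm_w, y * H / hm_h, heatmaps[0, k, y, x])
```

If this is the right direction, my remaining gap is exactly the pre- and post-processing that PoseDetector normally hides.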

I have been stuck on this for a long time: bodypose_triton.

Monalsingh avatar Aug 03 '22 06:08 Monalsingh