onnxruntime_backend
How to run inference with a model (ONNX) converted by MMdeploy?
I am trying to use MMpose on the Nvidia Triton server, but Triton does not support native PyTorch models; it supports TorchScript, ONNX, and a few other formats. So I converted the MMpose MobileNetV2 model to ONNX using MMdeploy.
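For context, the exported ONNX file (MMdeploy writes it as `end2end.onnx` in the work directory, as far as I can tell) can be loaded with plain onnxruntime to check the input/output tensor names and shapes, which is also the information a Triton `config.pbtxt` needs. A minimal sketch, assuming a top-down heatmap model exported at 256x192 (file path and shapes are placeholders):

```python
import numpy as np
import onnxruntime as ort

# Path to the ONNX file produced by MMdeploy (name/location assumed).
sess = ort.InferenceSession("work_dir/end2end.onnx", providers=["CPUExecutionProvider"])

# Print the tensor names and shapes needed for Triton's config.pbtxt.
for t in sess.get_inputs():
    print("input ", t.name, t.shape, t.type)
for t in sess.get_outputs():
    print("output", t.name, t.shape, t.type)

# Dummy forward pass; a real image would need MMpose-style preprocessing
# (person crop, resize to the export resolution, normalization, NCHW layout).
dummy = np.random.rand(1, 3, 256, 192).astype(np.float32)
outputs = sess.run(None, {sess.get_inputs()[0].name: dummy})
print(outputs[0].shape)  # e.g. keypoint heatmaps for a top-down model
```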
My questions are:
- How do I use the converted (ONNX) model in the MMpose framework?
Triton uses its own way to run inference on the model. Example:
> triton_client.infer(model_name, model_version=model_version,
>                     inputs=inputs, outputs=outputs)
MMdeploy uses its own way to run inference on the model. Example:
> from mmdeploy_python import PoseDetector
> detector = PoseDetector(
>     model_path=args.model_path, device_name=args.device_name, device_id=0)
- How am I supposed to load the model the Triton way instead of using the PoseDetector function from MMdeploy? (A rough sketch of what I am imagining is below.)
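What I picture for the Triton client side is roughly the sketch below, with the preprocessing and heatmap decoding done outside of MMpose. The model name, tensor names, and input shape are placeholders; they depend on how the model was exported and on the model repository's `config.pbtxt`:

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to the Triton HTTP endpoint (default port 8000).
client = httpclient.InferenceServerClient(url="localhost:8000")

# Placeholder preprocessing: a single cropped person image resized to the
# network input size used during export (e.g. 256x192), NCHW, float32.
img = np.random.rand(1, 3, 256, 192).astype(np.float32)

# "input" / "output" are placeholder tensor names; the real names come from
# the model's config.pbtxt or from the exported ONNX graph itself.
inputs = [httpclient.InferInput("input", list(img.shape), "FP32")]
inputs[0].set_data_from_numpy(img)
outputs = [httpclient.InferRequestedOutput("output")]

result = client.infer(model_name="mmpose_mobilenetv2", inputs=inputs, outputs=outputs)

# For a top-down MMpose model this would be the keypoint heatmaps, which
# still need MMpose-style post-processing (decoding heatmaps to keypoints).
heatmaps = result.as_numpy("output")
print(heatmaps.shape)
```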
I have been stuck on this for a long time.