
ONNX model exported from Azure CustomVision cannot be used

falinuxKr opened this issue 2 years ago · 4 comments

Hi Dusty,

This ONNX model was exported from Azure CustomVision with FP16. Since its properties say it uses SSD as the post-processing method, I assumed it could be run with jetson-inference. But when I try, it prints very long logs and error messages. Could you advise what is wrong and what to do here?

I've attached the ONNX model, a Netron screenshot, and the logs. Thanks in advance:

jetson-inference_error.txt

model.zip

[image: Netron screenshot of the exported model's output layers]

falinuxKr · Mar 29 '22

Hi @falinuxKr, this model has different outputs than the ssd-mobilenet.onnx that train_ssd.py creates. Namely, it appears that your model has 3 output layers (instead of the 2 output layers that jetson.inference.detectNet is already configured to work with).

You may be able to get it working by adapting the pre/post-processing code in jetson-inference/c/detectNet.cpp, but I'm not sure of the changes required.
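For comparison, here is a minimal sketch of how a two-output export from train_ssd.py is normally loaded, with the layer names mapped explicitly. The model/labels paths are hypothetical, and the input_0/scores/boxes names are what train_ssd.py's ONNX export uses; a CustomVision model with a third output won't map onto this without the detectNet.cpp changes mentioned above:

```python
import jetson.inference

# Minimal sketch, assuming a train_ssd.py-style ONNX export.
# The paths below are hypothetical; input_0/scores/boxes are the layer
# names produced by train_ssd.py. A CustomVision export uses different
# names and an extra output layer, so this alone will not fix the error.
net = jetson.inference.detectNet(argv=[
    "--model=models/custom/ssd-mobilenet.onnx",   # hypothetical path
    "--labels=models/custom/labels.txt",          # hypothetical path
    "--input-blob=input_0",    # input tensor name
    "--output-cvg=scores",     # class confidence output
    "--output-bbox=boxes",     # bounding-box output
    "--threshold=0.5",
])
```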

dusty-nv · Mar 29 '22

Thanks @dusty-nv,

Then how about using it with DeepStream?

falinuxKr · Mar 30 '22

@falinuxKr I don't know if you've fixed this, but check your output layers. I got the same error when trying to use a custom ONNX model for jetson.inference.segNet which had more than one output layer. When I removed the unnecessary output layers, everything worked fine. A rough sketch of one way to do that is below.
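For anyone else hitting this, one way to drop extra graph outputs is with the onnx Python package. The output names to keep here are hypothetical; inspect your own model in Netron first:

```python
import onnx

# Rough sketch of removing unneeded output layers from an ONNX model.
# "scores" and "boxes" are hypothetical names; replace them with the
# outputs your network actually needs.
model = onnx.load("model.onnx")
keep = {"scores", "boxes"}

for output in list(model.graph.output):
    if output.name not in keep:
        model.graph.output.remove(output)

onnx.checker.check_model(model)
onnx.save(model, "model_pruned.onnx")
```

Note that this only removes the graph's output declarations; nodes that fed those outputs remain in the graph. A tool like onnx-graphsurgeon can clean up the now-dangling nodes if needed.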

tiborbs1 · Apr 27 '22

@tiborbs1 Thanks for your idea.

falinuxKr · Apr 28 '22