Unable to use ONNX model exported from Azure CustomVision
Hi Dusty,
This ONNX model was exported from Azure CustomVision with FP16. Since its properties say it uses SSD as the post-processing method, I think the model should run with jetson-inference. But when I try to load it, I get very long logs and error messages. Could you give me some advice on what's wrong and what to do here?
I attach the ONNX model, a Netron screenshot, and the logs. Thanks in advance.
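For reference, this is roughly what I'm trying (a sketch based on the train_ssd.py / SSD-Mobilenet tutorial; the file name and layer names are assumptions and may not match the CustomVision export):

```python
# Attempted loading of the CustomVision ONNX export with detectNet
# (file name and blob names below are placeholders from the pytorch-ssd tutorial)
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet(argv=[
    "--model=model.onnx",       # ONNX exported from Azure CustomVision (hypothetical filename)
    "--labels=labels.txt",      # class labels file
    "--input-blob=input_0",     # assumed input layer name
    "--output-cvg=scores",      # assumed confidence/coverage output layer
    "--output-bbox=boxes",      # assumed bounding-box output layer
], threshold=0.5)

img = jetson.utils.loadImage("test.jpg")   # any test image
detections = net.Detect(img)
print(detections)
```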
Hi @falinuxKr, this model has different outputs than the ssd-mobilenet.onnx that train_ssd.py creates. Namely, it appears that your model has 3 output layers (instead of the 2 output layers that jetson.inference.detectNet is already configured to work with).
You may be able to get it working by adapting the pre/post-processing code in jetson-inference/c/detectNet.cpp, but I'm not sure of the changes required.
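A quick way to confirm the number and names of the output layers (a minimal sketch using the onnx Python package; "model.onnx" is a placeholder filename):

```python
# List the graph inputs/outputs of the exported model
# (requires `pip install onnx`)
import onnx

model = onnx.load("model.onnx")
print("inputs: ", [i.name for i in model.graph.input])
print("outputs:", [o.name for o in model.graph.output])
```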
Thanks @dusty-nv,
Then how about using it with DeepStream?
@falinuxKr I don't know if you've fixed this, but check your output layers. I got the same error when trying to use a custom ONNX with jetson.inference.segNet that had more than one output layer. When I removed the unnecessary output layers, everything worked fine. A sketch of how to do that is below.
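One way to strip the extra outputs without retraining (a sketch using onnx.utils.extract_model; all the file and tensor names here are placeholders and must match what Netron shows for your graph):

```python
# Create a copy of the model that keeps only the listed inputs/outputs
# (requires `pip install onnx`; names below are hypothetical placeholders)
import onnx.utils

onnx.utils.extract_model(
    "model.onnx",                      # original export with extra output layers
    "model_trimmed.onnx",              # new model containing only the outputs below
    input_names=["input_0"],           # existing input(s) to keep
    output_names=["scores", "boxes"],  # only the outputs the loader expects
)
```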
@tiborbs1 Thanks for your idea.