ONNX Issue
Hi, I tried running this model on iOS (using Core ML) and it works fine, but when I exported it to ONNX for Android, the inference time rose to ~180 ms, whereas it runs at around 20-30 ms on iOS.
I tried to look into the ONNX export and found there are some issues.
Has anyone faced the same problem, and how did you fix it?
@mayhemantt I had issues with the InstanceNorm ops when I exported to ONNX. I solved them by running the model through onnx-simplifier, which then allowed me to build an engine with TensorRT.
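For reference, here is a minimal sketch of the onnx-simplifier step, assuming the exported model is saved at `model.onnx` (the file paths are placeholders):

```python
# pip install onnx onnxsim
import onnx
from onnxsim import simplify

# "model.onnx" is a placeholder path for the exported model.
model = onnx.load("model.onnx")

# simplify() folds constants and merges decomposed op patterns
# (e.g. the multi-op subgraphs some exporters emit for InstanceNorm)
# into single ops where possible. The second return value reports
# whether the simplified model passed the numerical equivalence check.
model_simplified, check = simplify(model)
assert check, "Simplified model failed the equivalence check"

onnx.save(model_simplified, "model_simplified.onnx")
```

The same can be done from the command line with `onnxsim model.onnx model_simplified.onnx`.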