tensorflow-onnx
Transpose optimization for hardswish in tensorflow-lite
Describe the bug
When converting a TensorFlow Lite model that contains a hardswish operator (specifically TFL_HARD_SWISH), a transpose wrapper is left around the hardswish operator in the converted graph.
before conversion
after conversion
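The wrapping matters because hardswish is purely elementwise, so the transposes around it are redundant: transposing to NHWC, applying hardswish, and transposing back to NCHW gives exactly the same result as applying hardswish directly. A minimal sketch verifying this with NumPy (the `hard_swish` helper below is an illustration of the TFLite definition x * relu6(x + 3) / 6, not tf2onnx code):

```python
import numpy as np

def hard_swish(x):
    # TFLite hardswish: x * relu6(x + 3) / 6
    return x * np.clip(x + 3.0, 0.0, 6.0) / 6.0

x = np.random.randn(1, 3, 224, 224).astype(np.float32)  # NCHW input

# Transpose(NCHW->NHWC) -> HardSwish -> Transpose(NHWC->NCHW),
# i.e. the pattern the converter currently emits.
nhwc = np.transpose(x, (0, 2, 3, 1))
wrapped = np.transpose(hard_swish(nhwc), (0, 3, 1, 2))

# Elementwise ops commute with transpose, so the wrapper is a no-op.
assert np.allclose(wrapped, hard_swish(x))
```

This is why one would expect tf2onnx's transpose optimizer to push the transposes through HardSwish and cancel them, as it does for other elementwise operators.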
Urgency none
System information
- OS Platform and Distribution: Linux Ubuntu 20.04
- Tensorflow Version: 2.6.0
- Python version: 3.8.10
To Reproduce
Just use the official MobileNetV3 model, or use the model below
v3-small_224_1.0_float.tflite.zip
with command line
python -m tf2onnx.convert --tflite ./v3-small_224_1.0_float.tflite --inputs-as-nchw input --opset 14 --output ./v3.onnx
And the model converted is v3.zip
I can't find a Conv2d op with the same shape in the model graph you provided. Could you also check the model and send the repro conversion code?
Sorry for the confusion; the picture was not from the model I uploaded. The question has now been corrected with the model I converted.
The conversion was run on the command line with python -m tf2onnx.convert --tflite ./v3-small_224_1.0_float.tflite --inputs-as-nchw input --opset 14 --output ./v3.onnx
You can reproduce the described issue with that step.
Thanks, I closed this issue because the question is fixed.
The fixed problem I referred to was about not using the same model. The transpose problem itself still exists. Could you reopen the issue? @hwangdeyu