sqhao
You can set a static shape after the input name inside `[]` when using `tf2onnx`, via the `--inputs` parameter (for example, `--inputs input:0[1,224,224,3]`).
The pattern of `fuse_matmul_add_bias_into_gemm` is:
```
// Z = MatMul(X, Y)
// A = Z + Bias
// After:
// A = Gemm(X, Y, Bias)
```
The shape of...
@erelon The rank of the `MatMul` input tensors must be `2` when using `fuse_matmul_add_bias_into_gemm`; you can find this check in the source.
In the [Gemm definition](https://github.com/onnx/onnx/blob/main/docs/Operators.md#Gemm), the shape of A should be (M, K) or (K, M), and the shape of B should be (K, N) or (N, K).
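To make this concrete, here is a minimal sketch (not from the original replies) that builds a toy rank-2 MatMul + Add graph with `onnx.helper` and runs only the `fuse_matmul_add_bias_into_gemm` pass; the tensor names and shapes are illustrative.
```
import onnxoptimizer
from onnx import TensorProto, helper

# Z = MatMul(X, Y); A = Z + Bias -- both MatMul inputs are rank 2
matmul = helper.make_node("MatMul", ["X", "Y"], ["Z"])
add = helper.make_node("Add", ["Z", "Bias"], ["A"])

graph = helper.make_graph(
    [matmul, add],
    "matmul_add_example",
    inputs=[
        helper.make_tensor_value_info("X", TensorProto.FLOAT, [3, 4]),
        helper.make_tensor_value_info("Y", TensorProto.FLOAT, [4, 5]),
        helper.make_tensor_value_info("Bias", TensorProto.FLOAT, [5]),
    ],
    outputs=[helper.make_tensor_value_info("A", TensorProto.FLOAT, [3, 5])],
)
model = helper.make_model(graph, opset_imports=[helper.make_opsetid("", 13)])

optimized = onnxoptimizer.optimize(model, ["fuse_matmul_add_bias_into_gemm"])
print([n.op_type for n in optimized.graph.node])  # ['Gemm'] if the fusion applied
```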
Yeah, onnxoptimizer doesn't support this feature yet; we will support it soon.
@fengzf-github You can use onnx-optimizer in a Python script.
1. Install onnxoptimizer:
```
git clone --recursive https://github.com/onnx/optimizer onnxoptimizer
cd onnxoptimizer
pip3 install -e .
```
2. Convert your model in...
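The preview cuts off before step 2; a minimal sketch of what that step might look like (file paths are placeholders):
```
import onnx
import onnxoptimizer

model = onnx.load("model.onnx")  # placeholder path

# With no pass list, onnxoptimizer runs its default fuse/elimination passes;
# pass an explicit list (e.g. ["fuse_matmul_add_bias_into_gemm"]) to run only selected ones.
optimized = onnxoptimizer.optimize(model)

onnx.save(optimized, "model_opt.onnx")
```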
@StayYouth1993 The optimizer now supports large models (>2GB). You can use the latest version by building from source.
onnxoptimizer now supports large models (exceeding 2GB). You can install the latest version by building from source.
@michaelroyzen You can get more information from this [PR](https://github.com/onnx/optimizer/pull/73). In the past, large models (>2GB) were loaded with `onnx.load(..., load_external_data=True)` and saved with `onnx.save(..., save_as_external_data=True)`, while these functions are implemented...
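For context, a minimal sketch of the `onnx.load`/`onnx.save` external-data round trip that the comment refers to (file names are placeholders):
```
import onnx
import onnxoptimizer

# load_external_data=True pulls in the weights stored outside the .onnx file
model = onnx.load("big_model.onnx", load_external_data=True)

optimized = onnxoptimizer.optimize(model)

# save_as_external_data=True keeps the protobuf itself under the 2GB limit
# by writing the tensors to a separate file
onnx.save(
    optimized,
    "big_model_opt.onnx",
    save_as_external_data=True,
    all_tensors_to_one_file=True,
    location="big_model_opt.data",
)
```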
great, thanks