optimizer
Actively maintained ONNX Optimizer
Hello, I have a very large model (>2 GB) which I would like to optimize using this library. Unfortunately I cannot shrink my model below 2 GB. Therefore, while using the optimization,...
My 700 MB model grows beyond 2 GB (which protobuf does not support) when running onnx optimization. Is there any way to solve this? @daquexian
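One possible workaround for the 2 GB protobuf limit (a sketch, not something confirmed in these threads): keep the weights outside the protobuf message by saving with ONNX's external-data support, so only the graph structure has to fit in the serialized file. The file names and the pass list are placeholders, and this does not help if the failure happens inside the optimizer itself rather than at save time.

```python
import onnx
import onnxoptimizer

# Load the model; onnx.load also reads external data if the model already uses it.
model = onnx.load("big_model.onnx")  # placeholder path

# Run the desired passes (valid names can be listed with onnxoptimizer.get_available_passes()).
optimized = onnxoptimizer.optimize(model, ["fuse_bn_into_conv"])

# Store the weights as external data so the .onnx protobuf itself
# stays below the 2 GB serialization limit.
onnx.save(
    optimized,
    "big_model_opt.onnx",
    save_as_external_data=True,
    all_tensors_to_one_file=True,
    location="big_model_opt.data",
)
```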
I'm coming across an issue with the `fuse_matmul_add_bias_into_gemm` pass when there is just a MatMul node and no Add or bias node after it. I've written a quick example of what the...
Hi, when using `fuse_matmul_add_bias_into_gemm` I expect the layers to fuse even with a batch dimension. Apparently, this is not supported, and I can't see the reason for it. If...
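For context, a minimal sketch of the pattern this pass targets: a 2-D `MatMul` followed by an `Add` of a bias, which should fuse into a single `Gemm`. Shapes, names, and the expected output are illustrative; with an extra batch dimension the fusion does not fire, which is what the report above describes.

```python
import numpy as np
import onnx
import onnxoptimizer
from onnx import TensorProto, helper, numpy_helper

# MatMul + Add with constant weight and bias: the pattern fuse_matmul_add_bias_into_gemm targets.
X = helper.make_tensor_value_info("X", TensorProto.FLOAT, [4, 8])
Y = helper.make_tensor_value_info("Y", TensorProto.FLOAT, [4, 16])
W = numpy_helper.from_array(np.zeros((8, 16), np.float32), name="W")
B = numpy_helper.from_array(np.zeros(16, np.float32), name="B")

graph = helper.make_graph(
    [
        helper.make_node("MatMul", ["X", "W"], ["Z"]),
        helper.make_node("Add", ["Z", "B"], ["Y"]),
    ],
    "matmul_add",
    [X],
    [Y],
    initializer=[W, B],
)
model = helper.make_model(graph)

optimized = onnxoptimizer.optimize(model, ["fuse_matmul_add_bias_into_gemm"])
print([n.op_type for n in optimized.graph.node])  # expected: ['Gemm'] in the 2-D case
```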
Hi,

```
# python3 -m onnxoptimizer company_model.onnx company_model_opt.onnx
/usr/bin/python3: No module named onnxoptimizer.__main__; 'onnxoptimizer' is a package and cannot be directly executed
```

thx
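If the installed version does not ship a `__main__` entry point, the same optimization can be run through the Python API (a sketch; it assumes the default fuse-and-eliminate pass set is what the command above intended):

```python
import onnx
import onnxoptimizer

model = onnx.load("company_model.onnx")
# With no explicit pass list, onnxoptimizer applies its default passes.
optimized = onnxoptimizer.optimize(model)
onnx.save(optimized, "company_model_opt.onnx")
```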
The optimizer already merges BatchNorm into Conv to simplify the graph, but there is no BatchNorm-into-FC pass. Could you support it in the future?
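For reference, folding BatchNorm into a fully connected layer is an affine rewrite of the weights: y = gamma * (W x + b - mean) / sqrt(var + eps) + beta collapses into a single Gemm. A NumPy sketch of that rewrite (not part of the optimizer; names are illustrative):

```python
import numpy as np

def fold_bn_into_fc(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold per-channel BatchNorm parameters into FC weight W (out, in) and bias b (out,)."""
    scale = gamma / np.sqrt(var + eps)      # per-output-channel scale
    W_folded = W * scale[:, None]           # scale each output row of the weight
    b_folded = (b - mean) * scale + beta    # fold the shift into the bias
    return W_folded, b_folded
```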
fix https://github.com/onnx/optimizer/issues/59
I installed onnx/optimizer from source as described. Then I ran `python onnxoptimizer/test/optimizer_test.py` and got the failure below:

```
[libprotobuf ERROR google/protobuf/descriptor_database.cc:644] File already exists in database: onnx/onnx-ml.proto
[libprotobuf FATAL google/protobuf/descriptor.cc:1371]...
```
Hi, currently the onnx optimizer has several passes for `Conv` op fusion, like `fuse_add_bias_into_conv`. However, their support seems to be limited to `Conv` nodes. It would be nice if `ConvTranspose`...
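The current pass list can be inspected directly, which makes the Conv-only coverage easy to confirm (a sketch assuming `get_available_passes` is exposed as in the README):

```python
import onnxoptimizer

# Print every registered fusion pass; the Conv fusions have no ConvTranspose counterparts.
for name in onnxoptimizer.get_available_passes():
    if "fuse" in name:
        print(name)
```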