keras-onnx
Explicitly set batch size of generated ONNX model
Hello, is there a way to explicitly set the batch size of an ONNX model starting from a tf.keras model?
I have a tf.keras model with a (None, 256, 256, 1) input shape; when converted to ONNX, the input shape becomes (N, 256, 256, 1). Is there a way to set N to some explicit value?
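For reference, the conversion step looks roughly like this (a minimal sketch assuming keras2onnx's convert_keras / save_model API; the model file names are placeholders):

import tensorflow as tf
import keras2onnx

# Load (or build) the tf.keras model; its input shape is (None, 256, 256, 1)
model = tf.keras.models.load_model("my_model.h5")

# Convert to ONNX; the unspecified batch dimension is exported as the symbolic "N"
onnx_model = keras2onnx.convert_keras(model, model.name)
keras2onnx.save_model(onnx_model, "my_model.onnx")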
Is there any reason you want to set the batch size? The Keras model itself has batch size None, so we convert the ONNX model with the unknown batch size N. You can feed inputs with any concrete batch size at runtime.
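For example, a model exported with the symbolic batch dimension N accepts any concrete batch size at inference time; a minimal sketch with onnxruntime (the file name is a placeholder):

import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession("my_model.onnx")
input_name = sess.get_inputs()[0].name  # the (N, 256, 256, 1) input

# Feed a concrete batch of 4; the symbolic N is resolved at runtime
batch = np.random.rand(4, 256, 256, 1).astype(np.float32)
outputs = sess.run(None, {input_name: batch})
print(outputs[0].shape)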
Once I've generated the ONNX model, I load it into the OpenCV dnn module with C++.
OpenCV at this time doesn't support dynamic batching, and some layers complain about the symbolic N dimension. I think it would be easier and faster to force the batch size when exporting the ONNX model (if this is supported, of course).
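For reference, the OpenCV side looks roughly like this (sketched with the Python bindings rather than the C++ API; the file name is a placeholder), which is why a fixed batch dimension in the ONNX file matters:

import cv2
import numpy as np

# Loading fails or warns on some layers if the input shape contains a symbolic "N"
net = cv2.dnn.readNetFromONNX("model_fixed_batch.onnx")

# The blob must match the model's fixed input shape and layout
blob = np.random.rand(1, 256, 256, 1).astype(np.float32)
net.setInput(blob)
out = net.forward()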
It would also be useful for converting models to TensorRT, which has problems with dynamic batch sizes.
You can convert any Keras model to PyTorch and export it to ONNX with a dummy input, then use TensorRT. It is time-consuming if the model has an unknown architecture, because you need to rewrite the architecture in PyTorch, but it will solve your problem. As far as I know, there is no simpler way. Another option is to convert the model with mltool, but the same cycle is needed.
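A minimal sketch of that PyTorch route, assuming the Keras architecture has been reimplemented as a torch.nn.Module and the weights copied over (the tiny network and file name below are placeholders):

import torch
import torch.nn as nn

# Hypothetical stand-in for the reimplemented architecture
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = TinyNet().eval()

# Exporting with a batch-size-1 dummy input and no dynamic_axes argument
# bakes the fixed shape (1, 1, 256, 256) into the ONNX graph
dummy_input = torch.randn(1, 1, 256, 256)
torch.onnx.export(model, dummy_input, "model_fixed_batch.onnx", opset_version=11)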
@codarag You can set the batch size in the ONNX model with the following code.
You would need to install ONNX to run this. You can do it by running
pip install onnx
import onnx

def change_input_dim(model):
    # Use some symbolic name not used for any other dimension
    sym_batch_dim = "N"
    # or an actual value
    actual_batch_dim = 1

    # The following code changes the first dimension of every input to be batch_dim.
    # Modify as appropriate; note that this requires all inputs to
    # have the same batch_dim.
    inputs = model.graph.input
    for input in inputs:
        # Checks omitted. This assumes that all inputs are tensors and have a shape with a first dim.
        # Add checks as needed.
        dim1 = input.type.tensor_type.shape.dim[0]
        # Update the dim to be a symbolic value:
        # dim1.dim_param = sym_batch_dim
        # or update it to be an actual value:
        dim1.dim_value = actual_batch_dim

def apply(transform, infile, outfile):
    model = onnx.load(infile)
    transform(model)
    onnx.save(model, outfile)

apply(change_input_dim, r"old_file.onnx", r"new_file.onnx")
This code has been borrowed from: https://github.com/onnx/onnx/issues/2182#issuecomment-513888258
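As a sanity check, the rewritten file can be reloaded to confirm the batch dimension is now fixed (a small sketch; the file name matches the apply call above):

import onnx

model = onnx.load(r"new_file.onnx")
onnx.checker.check_model(model)

# Print each input's shape; the first dimension should now be the value 1
for inp in model.graph.input:
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)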
@vinesin I think this code doesn't solve the OP's question. To fix the ONNX batch size, you can just use a fixed randn tensor as the dummy input, but what the OP wants is a dynamic batch size.