netron
Tensor shape inference
This does not even require much work for ONNX: you can run the shape_inference method on the ONNX model, which populates model.graph.value_info, and that can be read afterwards.
If you don't want to do that in Netron, let the user do that first and have it stored in the model:
import onnx
from onnx import shape_inference
path = "..."
onnx.save(onnx.shape_inference.infer_shapes(onnx.load(path)), path)
Hoping for shape inference support. 😆
this would be very useful...
It does work for ONNX, but is there any way to add shape inference for MXNet json/model files?
@suntao2012 Netron runs in the browser without any Python dependencies.
It would be excellent if netron could show the shape of the activations between two layers, similar to the way it shows the shape of the input layer:
This would be incredibly useful when developing models, and it seems quite viable to implement for the Keras backend, since all the tensor shape information should be available.
Very helpful, thank you @FlameFire.
Left is without shape inference. Right is with shape inference.

One important thing to note: currently, for the above to work, you must use opset version < 9. The above was generated with opset version 8. I checked opset 7 also; both worked fine.
At present, for opset >= 9, shapes will not be included and will not show, as pointed out here:
Ah, you mean for opset 9 or better. That change basically removed constants from inputs, since inputs are not constants. In older onnx versions constants had to be part of the inputs; in opset 9 that changed. Possibly an onnx issue. https://github.com/onnx/tensorflow-onnx/issues/674#issuecomment-523965804
@dsplabs thank you for the comment, and it's very helpful!
My understanding is that we can't set the opset version when exporting PyTorch to ONNX, right?
@lookup1980 yes we can, by setting the opset_version argument, e.g.:
torch.onnx.export(model, model_inputs, onnx_file, opset_version=8)
Works for me in PyTorch version 1.4.
If you need to convert an existing ONNX file, you can do so using onnx.version_converter.convert_version(...). I usually also throw in onnx.utils.polish_model(...), which (among other things) does shape inference via onnx.shape_inference.infer_shapes(...), e.g.:
import onnx
import onnx.utils
import onnx.version_converter
model_file = 'model.onnx'
onnx_model = onnx.load(model_file)
onnx_model = onnx.version_converter.convert_version(onnx_model, target_version=8)
onnx_model = onnx.utils.polish_model(onnx_model)
onnx.save(onnx_model, model_file)
Does not work for me...
model:
model = keras.applications.mobilenet.MobileNet(
    # input_shape=(32, 32, 1),
    input_shape=(224, 224, 3),
    weights=None,
    include_top=False,
)
model.compile(
    optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=['accuracy'],
)
Code:
keras_model = model
onnx_model = onnxmltools.convert_keras(keras_model)
onnx_model = onnx.shape_inference.infer_shapes(onnx_model)
p = dtemp / 'temp.onnx'
# onnxmltools.utils.save_model(onnx_model, str(p))
onnx.save(onnx_model, str(p))
Result:
Thanks for any suggestions!
@Flamefire Running this kills the Jupyter kernel. I was trying to load and save an Inception model in this case.
Nice feature addition. Some of the segmentation models need opset 11. Is there a way we can get this working for opset 11?
Is there any estimate on when this feature could be added? I see it was requested a long time ago, and it would be extremely useful! I see there is a workaround above for ONNX, by creating a new version of the model with the layer sizes stored in it. Is there a similar workaround for Keras .h5 files?
@kobygold Keras files do not store inferred shapes. If you want to work on implementing this for Keras, acuity.js and darknet.js already have some support for reference.
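For reference, although the .h5 file itself does not store inferred shapes, they are available on the in-memory Keras model once it is loaded and built. A minimal sketch, assuming TensorFlow's bundled Keras (the tiny model and layer names here are arbitrary, standing in for a model loaded from disk):

```python
import tensorflow as tf

# A tiny model purely for illustration; a model loaded with
# tf.keras.models.load_model('model.h5') works the same way.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, name="conv"),
    tf.keras.layers.Flatten(name="flatten"),
    tf.keras.layers.Dense(10, name="dense"),
])

# Once built, every layer knows the shape of its output activation,
# which is the information Netron would need to display between layers.
for layer in model.layers:
    print(layer.name, tuple(layer.output.shape))
```

So a viewer with access to the deserialized model (rather than only the raw .h5 bytes) can recover the activation shapes without any extra annotation in the file.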