
Tensor shape inference

Open lutzroeder opened this issue 7 years ago • 14 comments

lutzroeder avatar Feb 04 '18 08:02 lutzroeder

This does not even require much work for ONNX: you can run the shape_inference pass on the ONNX model, which populates model.graph.value_info so the shapes can be read afterwards.

If you don't want to do that in Netron, let the user do that first and have it stored in the model:

import onnx
from onnx import shape_inference
path = "..."
onnx.save(onnx.shape_inference.infer_shapes(onnx.load(path)), path)
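
For reference, a minimal sketch (file path illustrative) of reading the populated value_info entries back out:

import onnx

inferred = onnx.shape_inference.infer_shapes(onnx.load("model.onnx"))
for vi in inferred.graph.value_info:
    # dims are either fixed (dim_value) or symbolic (dim_param)
    dims = [d.dim_value if d.HasField("dim_value") else d.dim_param
            for d in vi.type.tensor_type.shape.dim]
    print(vi.name, dims)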

Flamefire avatar Jun 08 '18 14:06 Flamefire

Hoping for shape inference support. 😆

ysh329 avatar Dec 03 '18 11:12 ysh329

This would be very useful...

lovettchris avatar May 16 '19 18:05 lovettchris

(Quoting @Flamefire's shape-inference snippet from above.)

It works for ONNX, but is there any way to add shape inference for MXNet JSON models?

suntao2012 avatar Jun 06 '19 03:06 suntao2012

@suntao2012 Netron runs in the browser without any Python dependencies.

lutzroeder avatar Jun 06 '19 03:06 lutzroeder

It would be excellent if Netron were able to show the shape of the activations between two layers, similar to the way it shows the shape of the input layer: [screenshot]

This would be incredibly useful when developing models, and it seems quite viable to implement for the Keras backend, since all the tensor shape information should be available.
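
For instance, a minimal sketch (tf.keras 2.x assumed; the model choice is illustrative) showing that Keras already exposes per-layer output shapes:

import tensorflow as tf

# any Keras model will do; MobileNet is only an example
model = tf.keras.applications.MobileNet(weights=None)
for layer in model.layers:
    print(layer.name, layer.output_shape)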

lgeiger avatar Nov 14 '19 10:11 lgeiger

(Quoting @Flamefire's shape-inference snippet from above.)

Very helpful, thank you @Flamefire.

Left is without shape inference. Right is with shape inference.

[screenshot]

One important thing to note: currently, for the above to work, you must use opset version < 9. The screenshot above was generated with opset version 8; I checked opset 7 as well, and both worked fine.

At present, for opset >= 9, shapes will not be included and will not show, as pointed out here:

Ah, you mean for opset 9 or higher. That change basically removed constants from the inputs, since inputs are not constants. In older ONNX versions constants had to be part of the inputs; in opset 9 that changed. Possibly an ONNX issue. https://github.com/onnx/tensorflow-onnx/issues/674#issuecomment-523965804
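
If you are unsure which opset an existing model targets, a small sketch (file path illustrative) for checking:

import onnx

model = onnx.load("model.onnx")
for opset in model.opset_import:
    # an empty domain string means the default ai.onnx domain
    print(opset.domain or "ai.onnx", opset.version)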

dsplabs avatar Mar 05 '20 04:03 dsplabs

@dsplabs thank you for the comment, and it's very helpful!

My understanding is that we can't set the opset version when exporting PyTorch models to ONNX, right?

lookup1980 avatar Mar 07 '20 03:03 lookup1980

@lookup1980 yes we can, by setting the opset_version argument, e.g.:

torch.onnx.export(model, model_inputs, onnx_file, opset_version=8)

Works for me in PyTorch version 1.4.
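
For context, a slightly fuller sketch around that call (the model and input are illustrative):

import torch
import torchvision

model = torchvision.models.resnet18()       # illustrative model
model_inputs = torch.randn(1, 3, 224, 224)  # dummy input to trace
torch.onnx.export(model, model_inputs, "model.onnx", opset_version=8)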

If you need to convert an existing ONNX file, you can do so using onnx.version_converter.convert_version(...). I usually also throw in onnx.utils.polish_model(...), which (among other things) does shape inference via onnx.shape_inference.infer_shapes(...), e.g.:

import onnx
import onnx.utils
import onnx.version_converter

model_file = 'model.onnx'
onnx_model = onnx.load(model_file)
# downgrade to opset 8 so shape inference can populate value_info
onnx_model = onnx.version_converter.convert_version(onnx_model, target_version=8)
# polish_model checks the model and runs shape inference, among other things
onnx_model = onnx.utils.polish_model(onnx_model)
onnx.save(onnx_model, model_file)

dsplabs avatar Mar 11 '20 04:03 dsplabs

Does not work for me...

Model:

model = keras.applications.mobilenet.MobileNet(
    # input_shape=(32, 32, 1),
    input_shape=(224, 224, 3),
    weights=None,
    include_top=False,
)

model.compile(optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])


Code:

import onnx
import onnxmltools

keras_model = model  # the Keras model defined above

onnx_model = onnxmltools.convert_keras(keras_model)
onnx_model = onnx.shape_inference.infer_shapes(onnx_model)

p = dtemp / 'temp.onnx'  # dtemp is a pathlib.Path to a temp directory
# onnxmltools.utils.save_model(onnx_model, str(p))
onnx.save(onnx_model, str(p))

Result:

[screenshot: intermediate shapes still not shown]

Thanks for any suggestions!
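
Given the opset < 9 caveat noted earlier in the thread, one variant worth trying (reusing keras_model and p from the snippet above; target_opset is a real onnxmltools argument, though whether it resolves this case is untested):

import onnx
import onnxmltools

# pin the export opset below 9, then run shape inference
onnx_model = onnxmltools.convert_keras(keras_model, target_opset=8)
onnx_model = onnx.shape_inference.infer_shapes(onnx_model)
onnx.save(onnx_model, str(p))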

fzyzcjy avatar Mar 19 '20 07:03 fzyzcjy

(Quoting @Flamefire's shape-inference snippet from above.)

@Flamefire Running this kills the Jupyter kernel. I was trying to load and save an Inception model in this case.

sizhky avatar Mar 29 '20 06:03 sizhky

Nice feature addition. Some of the segmentation models need opset 11. Is there a way we can get this working for opset 11?

soyebn avatar May 01 '20 16:05 soyebn

Is there any estimate of when this feature could be added? It was requested a long time ago, and it would be extremely useful! I see there is a workaround above for ONNX that creates a new version of the model with layer sizes. Is there a similar workaround for Keras .h5 files?

kobygold avatar Feb 08 '23 12:02 kobygold

@kobygold Keras files do not store inferred shapes. If you want to work on implementing this for Keras, acuity.js and darknet.js already have some support for reference.
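
Outside Netron, a minimal workaround sketch (path illustrative, tf.keras 2.x assumed): Keras itself can report per-layer output shapes once the model is loaded, even though the .h5 file does not store them for Netron to read.

import tensorflow as tf

# load_model restores the architecture, so Keras recomputes every shape
model = tf.keras.models.load_model("model.h5")
model.summary()  # prints each layer's output shape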

lutzroeder avatar Feb 09 '23 03:02 lutzroeder