
Loop, If and Scan support

Open · dshirron opened this issue 7 years ago · 13 comments

Is there a roadmap for supporting loop-type operators? For example: ONNX `Loop`, Caffe2 `ONNXWhile`.

dshirron · Oct 25 '18 14:10

Can you share a sample file for each and describe what support you are looking for?

lutzroeder · Oct 25 '18 20:10

Currently Netron shows `ONNXWhile` as an op without an option to explore the inner network that is passed as a parameter to the `ONNXWhile` op. It would be helpful to be able to double-click the op and see the inner network. The code below defines a simple Caffe2 network that uses `ONNXWhile` and converts it to ONNX (the conversion part currently doesn't work, since the Caffe2 ONNX exporter doesn't support this op yet).

```python
from caffe2.python import workspace, model_helper, core
from caffe2.proto import caffe2_pb2
import numpy as np

# Import the caffe2 mobile exporter
from caffe2.python.predictor import mobile_exporter

from caffe2.python.onnx import frontend
import onnx

# Create the initial input data
workspace.ResetWorkspace()
max_trip_count = np.full(1, 20).astype(np.int64)
condition = np.full(1, True).astype(np.bool)
first_init = np.full((1), 1).astype(np.float32)
second_init = np.full((1), 1).astype(np.float32)
workspace.FeedBlob("max_trip_count", max_trip_count)
workspace.FeedBlob("condition", condition)
workspace.FeedBlob("first_init", first_init)
workspace.FeedBlob("second_init", second_init)

# Create the body net
body_net = caffe2_pb2.NetDef()
# Two loop-carried dependencies: first and second
body_net.external_input.extend(['i', 'cond', 'first', 'second'])
body_net.external_output.extend(['cond_new', 'second', 'third', 'third', 'cond', 'cond'])
add_op = core.CreateOperator('Add', ['first', 'second'], ['third'])
print_cond = core.CreateOperator('Print', ['cond'], [])
print3 = core.CreateOperator('Print', ['third'], [])
limit_const = core.CreateOperator(
    'ConstantFill', [], ['limit_const'],
    shape=[1], dtype=caffe2_pb2.TensorProto.FLOAT, value=1000.0,
)
cond = core.CreateOperator('LT', ['third', 'limit_const'], ['cond_new'], broadcast=1)
body_net.op.extend([add_op, print_cond, print3, limit_const, cond])

while_op = core.CreateOperator(
    'ONNXWhile',
    ['max_trip_count', 'condition', 'first_init', 'second_init'],
    ['first_b', 'second_a', 'kabiba', 'kusa', 'asd'],
    body=body_net, has_cond=True, has_trip_count=True, save_scopes=0,
)

main_net = caffe2_pb2.NetDef()
main_net.op.extend([while_op])
main_net.external_input.extend(['max_trip_count', 'condition', 'first_init', 'second_init'])
main_net.external_output.extend(['first_b', 'second_a', 'kabiba', 'kusa', 'asd'])

workspace_global_options = ['--caffe2_log_level=1']
workspace_global_options += ['--caffe2_print_blob_sizes_at_exit=0']
workspace.GlobalInit(['caffe2'] + workspace_global_options)

init_net, predict_net = mobile_exporter.Export(workspace, main_net, main_net.external_input)
# Save the init_net and predict_net to files that can later be used to run them on mobile
print("Saving caffe2 predict and init pb files...")
with open('init_net.pb', "wb") as fopen:
    fopen.write(init_net.SerializeToString())
with open('predict_net.pb', "wb") as fopen:
    fopen.write(predict_net.SerializeToString())
# with open('init_net.pbtxt', "w") as fopen:
#     fopen.write(str(init_net))
# with open('predict_net.pbtxt', "w") as fopen:
#     fopen.write(str(predict_net))

# Convert to ONNX
onnx_value_info = {
    'first_init': (onnx.TensorProto.FLOAT, first_init.shape),
    'second_init': (onnx.TensorProto.FLOAT, second_init.shape),
}
onnx_model = frontend.caffe2_net_to_onnx_model(predict_net, init_net, onnx_value_info)

# Run the network
workspace.RunNetOnce(main_net)
print(workspace.FetchBlob('kabiba'))
print(workspace.FetchBlob('kusa'))
print(workspace.FetchBlob('asd'))
```

This sample is based on Caffe2's test code for the `ONNXWhile` op, with the ONNX conversion code added.
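
As a stopgap until Netron supports this, the `body` subnet could be pulled back out of the saved `predict_net.pb` and written to its own file, which Netron can open like any other Caffe2 net. A minimal sketch; the NetDef-valued field `n` on `Argument` is my reading of how Caffe2 stores subnets for control-flow ops:

```python
from caffe2.proto import caffe2_pb2

# Load the exported predict net
predict_net = caffe2_pb2.NetDef()
with open('predict_net.pb', 'rb') as f:
    predict_net.ParseFromString(f.read())

# Find the ONNXWhile op and dump its 'body' subnet to a separate file
for op in predict_net.op:
    if op.type != 'ONNXWhile':
        continue
    for arg in op.arg:
        if arg.name == 'body':  # subnet assumed to live in the Argument's NetDef field 'n'
            with open('while_body.pb', 'wb') as f:
                f.write(arg.n.SerializeToString())
```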

dshirron · Oct 28 '18 08:10

onnxwhile.zip

dshirron · Oct 28 '18 09:10

Since Caffe2's export to ONNX doesn't support this op yet (I opened an issue in the pytorch/caffe2 repo), I don't have an ONNX file.

dshirron · Oct 28 '18 10:10

@lutzroeder is it now possible to review the contents of such while layers in Netron? Where can I find an example of such a file? I tried the attached file and could not open it.

demid5111 · Dec 01 '18 08:12

Hi, are you planning to also expand subgraphs in the future? See an example here (the If node at the end of the visible graph): https://github.com/caffe2/models/tree/master/mask_rcnn_2go/model/fp32

kfir-st · Apr 28 '19 15:04

issue_168.zip

lutzroeder · May 31 '19 05:05

+1. It would be great to be able to visualize the sub-graphs in the ONNX models. Looking forward to this feature.

purshottamv · Aug 12 '19 23:08

+1. The models I'm currently working with start with If operators, and Netron can't visualize them at all. Is there any current workaround for this?
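
One possible workaround (not a Netron feature) is to split out the branch subgraphs with the `onnx` Python API and open each one separately. A minimal sketch, assuming a placeholder path `model.onnx`; the extracted graphs may reference outer-scope values, so the saved files may not pass the ONNX checker, but the node structure is still viewable:

```python
import onnx

# Placeholder path: substitute the actual model file
model = onnx.load('model.onnx')

for i, node in enumerate(model.graph.node):
    if node.op_type != 'If':
        continue
    for attr in node.attribute:
        if attr.name in ('then_branch', 'else_branch'):
            # Each branch is a GraphProto; wrap it in a model and save it
            subgraph = onnx.helper.get_attribute_value(attr)
            onnx.save(onnx.helper.make_model(subgraph), f'if_{i}_{attr.name}.onnx')
```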

IgnacioJPickering · Jul 04 '20 17:07

Here is an example of DLRM using Loop operators. The symbol for the Loop operator can be seen, and its attributes show the body graph value, but none of the operators (Gather, Slice, etc.) inside the loop can be viewed (with Netron 4.5.5).

dlrm_s_pytorch.onnx.zip

[screenshot: the Loop operator as rendered in Netron]

The following shows the operators inside the top loop:

  %42 : Float(3, 32, strides=[32, 1], requires_grad=1, device=cpu) = onnx::Loop(%40, %181) # .../lib/python3.7/site-packages/torch/nn/functional.py:1993:0
    block0(%43 : Long(device=cpu), %cond.1 : bool):
      %45 : Tensor = onnx::Gather[axis=0](%32, %43)
      %46 : Tensor = onnx::Gather[axis=0](%37, %43)
      %47 : Tensor = onnx::Unsqueeze[axes=[0]](%45)
      %48 : Tensor = onnx::Unsqueeze[axes=[0]](%46)
      %49 : Tensor = onnx::Slice(%indices_0, %47, %48, %27)
      %50 : Tensor = onnx::Gather[axis=0](%emb_l.0.weight, %49)
      %51 : Tensor = onnx::ReduceSum[axes=[0], keepdims=0](%50)
      %52 : bool = onnx::Cast[to=9](%26)
      -> (%52, %51)
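
Until Netron can expand these, the body graph attribute can at least be inspected programmatically with the `onnx` Python API. A minimal sketch, assuming the model file from the attached zip:

```python
import onnx

model = onnx.load('dlrm_s_pytorch.onnx')

for node in model.graph.node:
    if node.op_type != 'Loop':
        continue
    for attr in node.attribute:
        if attr.name == 'body':  # the loop body is stored as a GraphProto attribute
            body = onnx.helper.get_attribute_value(attr)
            # Prints Gather, Unsqueeze, Slice, ReduceSum, Cast, ... as in the trace above
            print(node.name, [op.op_type for op in body.node])
```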

mneilly-et · Oct 13 '20 07:10

Any update on this feature implementation?

harishch4 · Sep 06 '21 05:09