
Problem with "torch.jit.trace()"

Open longzeyilang opened this issue 4 years ago • 6 comments

Example: trace, _ = jit.get_trace_graph(model_clone, dummy_input, _force_outplace=True)

For classification, the dummy_input is easy: it is just an image-shaped tensor. But in object detection, how should the value of input_example be modified for a model such as Faster R-CNN? The input includes not only the image, but also object information such as x, y, w, h, and so on.
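For the classification case mentioned above, a minimal sketch of what the dummy input looks like (the tiny stand-in model below is illustrative, not a real classifier):

```python
import torch
import torch.nn as nn

# A stand-in image classifier; any classification model is traced the same way
model = nn.Sequential(
    nn.Conv2d(3, 8, 3),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
).eval()

# Classification: dummy_input is just a batch of image-shaped tensors
dummy_input = torch.randn(1, 3, 224, 224)
traced = torch.jit.trace(model, dummy_input)
```

The traced module can then be called exactly like the original model.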

longzeyilang avatar Apr 12 '20 10:04 longzeyilang

Hi @longzeyilang ,

Please add more information: which Distiller API are you trying to use?

Cheers, Neta

nzmora avatar Apr 12 '20 13:04 nzmora

StructureRemover in thinning.py

longzeyilang avatar Apr 12 '20 22:04 longzeyilang

In summary_graph.py, for trace, _ = jit.get_trace_graph(model_clone, dummy_input, _force_outplace=True), how should the dummy_input be modified for object detection such as Faster R-CNN?

longzeyilang avatar Apr 13 '20 07:04 longzeyilang

Hi @longzeyilang,

The 'jit.get_trace_graph' API accepts a list for the dummy_input argument ("positional arguments to pass to the function/module to be traced"), so passing multiple inputs is not a problem. However, there are a few caveats:

summary_graph.py uses the PyTorch JIT tracer and ONNX export functionality to produce an ONNX IR representation of the graph. This representation is then queried to learn the details of the computation graph. We use ONNX graphs because we found that they represent the major computation blocks, and we don't care about many of the operation details that are present in a PyTorch JIT trace (e.g. padding). The limitation of using the ONNX IR is that not all PyTorch models can be exported to ONNX. See for example this PyTorch PR to export Mask RCNN to ONNX. Another issue that can come up when using summary_graph.py on arbitrary graphs is that Distiller currently only supports memory and compute accounting for a small number of compute operations (e.g. convs, linear), so the code may break.
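To illustrate the multi-input point: in current PyTorch, torch.jit.trace accepts a tuple of positional example inputs, so a detection-style model taking an image plus extra tensors can be traced like this. The two-input module below is a hypothetical stand-in, not Faster R-CNN:

```python
import torch
import torch.nn as nn

# Hypothetical module with two inputs, standing in for a detection model
class TwoInput(nn.Module):
    def forward(self, image, boxes):
        # boxes might carry x, y, w, h per object in a real model
        return image.mean() + boxes.sum()

model = TwoInput()
dummy_image = torch.randn(1, 3, 224, 224)
dummy_boxes = torch.randn(1, 5, 4)  # 5 boxes of (x, y, w, h), illustrative

# Multiple positional inputs are passed as a tuple of example inputs
traced = torch.jit.trace(model, (dummy_image, dummy_boxes))
```

Note that tracing records one concrete execution path, so data-dependent control flow (common in detection models) will not be captured faithfully.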

Cheers, Neta

nzmora avatar Apr 13 '20 11:04 nzmora


But in the latest version of torch, that is torch 1.5.0, I can't fold BN. "torch.onnx._optimize_trace()" and "jit._get_trace_graph()" in summary_graph.py don't work correctly.

dongzhen123 avatar May 06 '20 08:05 dongzhen123

Hi, did you have any luck with thinning Faster RCNN? I am struggling with the same error.

cygerts avatar Nov 19 '20 10:11 cygerts