LGTM!
Should I implement this as Split-18 at the same time as https://github.com/onnx/onnx/pull/4389, or is it better to implement this as Split-19?
We can add a custom handler for `ListUnpack`, like the one for `ListConstruct`, here: https://github.com/pfnet/pytorch-pfn-extras/blob/a27e3d4030dcdeeedef11ad2ca4cd022c47b45c2/pytorch_pfn_extras/onnx/pfto_exporter/export.py#L458-L463 (see the sketch below).
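For illustration, a rough sketch of what such a handler could do, assuming the exporter tracks which `torch._C.Value`s each list was built from (the `handle_list_ops` helper and the `list_elements` map are hypothetical, not the actual PFTO code):

```python
from typing import Dict, List

import torch


def handle_list_ops(node: torch._C.Node,
                    list_elements: Dict[int, List[torch._C.Value]]) -> bool:
    """Rewire list ops to their element Values instead of emitting ONNX ops."""
    if node.kind() == "prim::ListConstruct":
        # Remember which Values make up the constructed list,
        # keyed by the unique id of the list output Value.
        list_elements[node.output().unique()] = list(node.inputs())
        return True  # handled; emit no ONNX node
    if node.kind() == "prim::ListUnpack":
        elems = list_elements.get(node.input().unique())
        outs = list(node.outputs())
        if elems is not None and len(elems) == len(outs):
            # Replace each unpacked output with the element it came from.
            # A real handler would also destroy the now-dead prim::List* nodes.
            for out, elem in zip(outs, elems):
                out.replaceAllUsesWith(elem)
            return True
    return False  # fall back to the normal symbolic conversion
```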
PFTO seems to enable `onnx_peephole` by default, so the `eraseListConstruct` and `eraseListUnpack` peephole optimizations also run in PFTO. However, the `onnx_peephole` pass currently runs after `run_symbolic_function` (in `generate_onnx_node`).
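To make the ordering concrete, a rough sketch of the flow (only `torch._C._jit_pass_onnx_peephole` is a real pass, and its signature varies across torch versions; `run_symbolic_function` here is just a stand-in for the per-node conversion done in `generate_onnx_node`):

```python
import torch


def run_symbolic_function(node: torch._C.Node) -> None:
    """Placeholder for PFTO's per-node symbolic conversion."""


def convert(graph: torch._C.Graph, opset_version: int) -> None:
    # 1. Symbolic conversion runs first, node by node...
    for node in graph.nodes():
        run_symbolic_function(node)
    # 2. ...and only then the peephole pass, which is where
    #    eraseListConstruct / eraseListUnpack live. prim::List* nodes
    #    therefore reach the symbolic pass before the peephole pass
    #    has a chance to erase them.
    torch._C._jit_pass_onnx_peephole(graph, opset_version, False)
```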
`torch.onnx.export` seems to unroll list inputs automatically. This is done in `torch.jit._get_trace_graph`: https://github.com/pytorch/pytorch/blob/05ce013338b3882136eea394c37c57e29e43df1a/torch/jit/_trace.py#L95 This API is internal, and `torch.jit.trace` is recommended for public use. However,...
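A minimal repro of that unrolling (illustrative only; `torch.jit._get_trace_graph` is internal, so its return value may differ between versions):

```python
from typing import List

import torch


class TakesList(torch.nn.Module):
    def forward(self, xs: List[torch.Tensor]) -> torch.Tensor:
        return xs[0] + xs[1]


inputs = ([torch.randn(2), torch.randn(2)],)
graph, _ = torch.jit._get_trace_graph(TakesList(), inputs)
# The list has been flattened away: the traced graph takes two
# tensor inputs, not a single list-typed input.
print([str(i.type()) for i in graph.inputs()])
```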
It seems `torch.onnx.export` exports sequence inputs when the model is scripted (since script modules don't know the number of tensors in a list). So sequence inputs are essential for scripted models.
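A quick sketch to see this (depending on the torch version, exporting a `ScriptModule` may need extra arguments such as `example_outputs`):

```python
import io
from typing import List

import onnx
import torch


class TakesList(torch.nn.Module):
    def forward(self, xs: List[torch.Tensor]) -> torch.Tensor:
        return xs[0] + xs[1]


scripted = torch.jit.script(TakesList())
xs = [torch.randn(2), torch.randn(2)]
buf = io.BytesIO()
# Opset >= 11 is needed for ONNX sequence types.
torch.onnx.export(scripted, (xs,), buf, input_names=["xs"], opset_version=11)
model = onnx.load_from_string(buf.getvalue())
# The single graph input is sequence-typed, not tensor-typed.
print(model.graph.input[0].type.WhichOneof("value"))  # -> "sequence_type"
```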
TODO: add a test
Update: I summarized the problems with extending the current `ppe.onnx.export` to models with list inputs at https://github.com/pfnet/pytorch-pfn-extras/issues/572. In short, `torch.onnx.export` automatically unrolls list inputs in its internal trace API (`torch.jit._get_trace_graph`). However,...
With 6c732649a7ffda442d9c3beaf12427f07e2f0375, ONNX export with `use_pfto=False` works well (because `ppe.onnx.export` then delegates to `torch.onnx.export` and list inputs are automatically unrolled, so it makes sense to unroll inputs to generate...
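For reference, a sketch of the call that works after that commit (reusing the list-input module from the sketches above; `use_pfto` is the flag discussed in this thread):

```python
import io
from typing import List

import torch
import pytorch_pfn_extras.onnx as ppe_onnx


class TakesList(torch.nn.Module):
    def forward(self, xs: List[torch.Tensor]) -> torch.Tensor:
        return xs[0] + xs[1]


xs = [torch.randn(2), torch.randn(2)]
buf = io.BytesIO()
# With use_pfto=False, ppe.onnx.export delegates to torch.onnx.export,
# which unrolls the list into individual tensor inputs.
ppe_onnx.export(TakesList(), (xs,), buf, use_pfto=False)
```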