Fix torch ListConstruct errors when values depend on flexible input shapes
Recommendation for merge:
No.
There are still some unit tests that need to be added.
What it fixes
This PR fixes #1921, along with other bugs that stem from the same root cause.
The root cause is discussed in #1926: when converting a torch ListConstruct, if any value in the list depends on a dynamic shape rooted in the net input, the converted result is a bare Python list, which then fails to parse in the subsequent op.
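A minimal sketch of the failure mode (a hypothetical module, not taken from the PR): tracing a module whose pad amounts come from the input's shape makes TorchScript emit a ListConstruct fed by `aten::size` values, which is exactly the dynamic case the converter mishandled.

```python
import torch
import torch.nn.functional as F

class DynamicPad(torch.nn.Module):
    # The pad amount depends on the input's last dimension, so the
    # traced graph builds a prim::ListConstruct from aten::size values
    # instead of compile-time constants.
    def forward(self, x):
        amount = x.shape[-1] // 2
        return F.pad(x, (amount, amount))

traced = torch.jit.trace(DynamicPad(), torch.rand(1, 3, 8))
# Converting `traced` with a flexible input shape is where the
# ListConstruct parsing previously produced a bare Python list.
```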
Design of this fix
This fix happens inside ListConstruct parsing. It runs a DFS from the current node through each parent layer, all the way up to the net root, and works on this assumption:
If any value depends on a dynamic shape rooted in the net input, then there is a gather op that gathers that dynamic shape from the net input's dims.
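The ancestry check described above can be sketched as a memoized DFS over a toy graph representation (the node names, op kinds, and dict layout are hypothetical stand-ins for the converter's context, not the PR's actual code):

```python
# Toy graph: name -> (op_kind, list of input names). Names absent from
# the dict are net inputs, mirroring the check against
# context.torch_graph.nodes in the fix.
def depends_on_input_shape(graph, name, memo=None):
    """DFS from `name` toward the net root; True if some ancestor is a
    gather whose source is a net input (i.e. not a graph node)."""
    if memo is None:
        memo = {}
    if name in memo:
        return memo[name]
    if name not in graph:  # reached a net input itself
        memo[name] = False
        return False
    op, inputs = graph[name]
    if op == "gather" and any(src not in graph for src in inputs):
        memo[name] = True  # gathers a dim directly from a net input
        return True
    memo[name] = any(depends_on_input_shape(graph, s, memo) for s in inputs)
    return memo[name]

# Example: pad amounts derived from a dim of net input "x".
graph = {
    "dim":    ("gather", ["x"]),          # "x" is not in graph -> net input
    "half":   ("floordiv", ["dim"]),
    "pads":   ("listconstruct", ["half", "half"]),
    "const3": ("constant", []),
    "pads2":  ("listconstruct", ["const3"]),
}
```

The `memo` dict is what makes repeated queries cheap on large graphs, since each node is visited at most once across all calls that share the memo.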
What it does and does not
- This fix does not affect the original branch for the case where all values are compile-time scalar constants
- This fix keeps the original branch for all unexpected cases
- This fix adds another branch that only targets a gather op whose source is not a name in context.torch_graph.nodes, i.e. the source is one of the net inputs
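The three-way branching above can be sketched as follows (the function, the `(name, is_constant)` value layout, and the returned tags are all hypothetical; only the dispatch conditions mirror the fix, and the real fix uses the DFS from the design section rather than this direct gather check):

```python
def parse_listconstruct(values, graph):
    """Toy dispatch for a ListConstruct's parsed values.

    values: list of (name, is_constant) pairs.
    graph:  name -> (op_kind, input names); net inputs are absent.
    """
    # Original branch: every element is a compile-time scalar constant.
    if all(is_const for _, is_const in values):
        return ("const_list", [name for name, _ in values])

    def gathers_from_input(name):
        # A gather whose source is not a graph node reads a net input dim.
        if name not in graph:
            return False
        op, inputs = graph[name]
        return op == "gather" and any(src not in graph for src in inputs)

    # New branch: some element gathers a dim from a net input.
    if any(gathers_from_input(name) for name, _ in values):
        return ("dynamic_shape_list", [name for name, _ in values])

    # Original branch kept for all unexpected cases: a bare Python list.
    return ("bare_list", [name for name, _ in values])
```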
Memoization must be added to the DFS to accelerate it; otherwise it is far too slow for relatively large networks.
Edit: the improvement mentioned above has been added.
Hi @xorange, I left a comment in #1926. Would you prefer to go for a general fix in this PR, or a specific fix for your pad case?