[DISCUSS] Some thoughts on the shape pattern matching
I moved the discussion from the TVM forum to here because relax has not been upstreamed yet.
In the relax wiki,

```python
value = match_shape(lhs, pattern)
```

does a pattern matching that assigns the shape of `lhs` to `pattern`.
And when you define something like

```python
lv0: R.Tensor[(n, m)] = R.match_shape(x, (n, m))
```

`n` and `m` are defined on the right side, and the `(n, m)` in the type annotation is used for assertion. However, in most programming languages, variables are defined on the left of `=`. I wonder: can we make the pattern matching happen on the left side of `=`?
In the following example:

```python
x: R.Tensor[(n, m)] = y + 1
```

we assign the value of `y + 1` to `x`, and assign the shape of `y` to `(n, m)`.
When the same symbolic variables appear twice, we do an assertion check:

```python
x: R.Tensor[(n, m)] = y + 1  # declare x, n and m
z: R.Tensor[(n, m)] = y - 1  # check whether y - 1's shape equals (n, m)
```
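The define-or-assert behavior described above can be mimicked with a toy helper in plain Python (a hypothetical sketch, not the Relax API): pattern symbols not yet bound in an environment are defined, while already-bound symbols are checked for equality.

```python
def match_shape(shape, pattern, env):
    """Toy shape matcher (illustration only, not Relax).

    env maps symbolic dimension names (e.g. "n", "m") to ints.
    Unbound names are defined; bound names must agree.
    """
    assert len(shape) == len(pattern), "rank mismatch"
    for dim, sym in zip(shape, pattern):
        if isinstance(sym, int):   # concrete dimension in the pattern
            assert dim == sym, f"expected {sym}, got {dim}"
        elif sym in env:           # already bound: assert equality
            assert env[sym] == dim, f"{sym}={env[sym]} != {dim}"
        else:                      # fresh symbol: define it
            env[sym] = dim
    return env

env = match_shape((3, 4), ("n", "m"), {})  # defines n=3, m=4
match_shape((3, 4), ("n", "m"), env)       # re-match: asserts, passes
```

The first call plays the role of the `x` binding (declaring `n` and `m`), and the second plays the role of the `z` binding (the assertion check).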
Also, this could simplify dynamic shape matching:

```python
# data dependent case
lv5: R.Tensor[_, "f32"] = R.unique(lv4)
# re-match shape
lv6: R.Tensor[(m,), "f32"] = R.match_shape(lv5, (m,))
```

can be simplified to:

```python
lv6: R.Tensor[(m,), "f32"] = R.unique(lv4)
```
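For intuition, the data-dependent case can be mimicked in plain NumPy (an illustration only; `np.unique` stands in for `R.unique`): the output extent is unknown until runtime, so the symbolic dimension `m` can only be bound after the call.

```python
import numpy as np

env = {}  # symbolic-dim environment, e.g. {"m": 3}
lv4 = np.array([1.0, 2.0, 2.0, 3.0], dtype="float32")
lv5 = np.unique(lv4)     # result shape is data dependent
env["m"] = lv5.shape[0]  # "re-match": bind m to the runtime extent
```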
I agree that this looks cleaner, and I think we considered this approach during the design. One thing to think about is how we would match the dimensions without binding the left-hand side to a tensor. For example, currently we can write

```python
# my_shape: Shape
R.match_shape(my_shape, (n, m))
```

One option is to overload the binding syntax with something like

```python
(n, m) = my_shape
```

although this seems a bit strange if `n` and `m` are already bound before.
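For comparison, ordinary tuple unpacking in Python always rebinds names and never asserts against existing bindings, which is part of why reusing `=` for matching already-bound symbols reads oddly:

```python
# Plain Python unpacking: `=` always rebinds, it never checks.
n, m = (3, 4)
n, m = (5, 6)  # silently rebinds; no error even though n, m were bound
```

A match-style `(n, m) = my_shape` would instead have to assert equality for already-bound symbols, giving `=` two different meanings depending on context.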
Great discussion; from a syntax(-sugar) point of view these make sense. We can de-sugar these pieces to `match_shape`.
On the other hand, from the IR point of view we will need to differentiate between a match (which defines variables or creates an assertion) and the `x.shape` field (an invariant due to inference). So we still likely need an explicit `match_shape` construct.
See more updates in match cast #293.