Redundant memref.alloca formation.
What happened?
Output IR: https://gist.github.com/pashu123/0f41d93c12826be20756d40878d3b6ec with memref.alloca
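For readers who don't want to open the gist, the redundant pattern is roughly the following (a hypothetical sketch, not the actual gist contents): bufferization materializes a stack buffer and copies through it instead of writing to the destination subview directly.

```mlir
// Hypothetical illustration of the redundancy; see the gist for the real IR.
%alloca = memref.alloca(%d0, %d1) : memref<?x?xf32>
// ... result is computed into %alloca ...
memref.copy %alloca, %subview : memref<?x?xf32> to memref<?x?xf32, strided<[?, 1], offset: ?>>
// Ideally the computation would target %subview directly, with no alloca.
```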
Steps to reproduce your issue
Input IR: https://gist.github.com/pashu123/9809ecbe759acbed960ec741eb5cce1a
Manually changed IR: https://gist.github.com/pashu123/4098730c139685afdb88ecd4e5023e36
Command: iree-opt --pass-pipeline="builtin.module(func.func(iree-codegen-iree-comprehensive-bufferize), cse )" ~/test.mlir
What component(s) does this issue relate to?
No response
Version information
No response
Additional context
No response
Did you try canonicalize,cse,canonicalize after bufferization? I remember there were issues with eliminating the allocation; the workaround is to run these three passes after bufferization.
https://github.com/iree-org/iree/blob/0f15c8df0e8f61ecb5e5755a5df00a535648a5f9/compiler/src/iree/compiler/Codegen/Common/IREEComprehensiveBufferizePass.cpp#L249-L254
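Concretely, the workaround would look something like the following (an untested sketch based on the command above; the exact placement of the extra passes inside the `func.func` nest is an assumption):

```
iree-opt --pass-pipeline="builtin.module(func.func(iree-codegen-iree-comprehensive-bufferize, canonicalize, cse, canonicalize))" ~/test.mlir
```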
It didn't work in this case.
Why do we have a pack op that has dynamic inner tile sizes? I'm worried that you are solving an issue that is outside of our work's scope.
tensor.pack %extracted_slice_2 inner_dims_pos = [0, 1] inner_tiles = [%39, %39]
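For context, a full form of such a pack op looks roughly like this (a hypothetical sketch; `%39` is a runtime tile size, and the tensor types and padding value are made up for illustration):

```mlir
%packed = tensor.pack %extracted_slice_2 padding_value(%cst : f32)
    inner_dims_pos = [0, 1] inner_tiles = [%39, %39]
    into %init : tensor<?x?xf32> -> tensor<?x?x?x?xf32>
```

Because the inner tile sizes are SSA values rather than static constants, the sizes of the packed buffer are unknown at compile time, which is what makes this case harder for bufferization.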
I am just trying to make these test cases work: https://github.com/iree-org/iree/blob/672ae82a5630439f97f405c09376ba1070b86f9e/tests/e2e/tensor_ops/pack_dynamic_inner_tiles.mlir#L21 😄
I don't see any memref.alloca ops in the dump: https://gist.github.com/hanhanW/bcb62c3e182463cd2cea18bea00cfd63
Closing the issue.