Nirvedh Meshram
> @nirvedhmeshram We can add a few more. Do you have some specific shapes you are interested in? @kuhar actually on close inspection I do see some shapes that go...
> Let's fold this into the subsequent PR? Hard to see what this flag is for. Yeah, that is fine by me; Quinn suggested not to do too much at...
> Ah, interesting, could you add this shape to iree-kernel-benchmark? We can tag it as 'corner_case'. @jakub, that shape is already generated by the benchmark, so I was saying no...
Not needed anymore.
@pdhirajkumarprasad I am not able to get the error mentioned in the issue; for me this is failing in `convert-torch-onnx-to-torch`, the code for which lives in torch-mlir. [Here](https://gist.github.com/nirvedhmeshram/190ac2283af469121694351bd4e883a0) is...
This is an issue in an unpack + elementwise dispatch: the elementwise op gets tiled but the unpack does not. [Here](https://gist.github.com/nirvedhmeshram/78976e802ce86e22d3a27933b89d7c4d) is the IR dump for the dispatch. @hanhanW any idea...
Good point, which made me notice: isn't this unpack wrong? ``` %unpack = tensor.unpack %0 outer_dims_perm = [0, 1] inner_dims_pos = [0, 1] inner_tiles = [8, 4] into %1 :...
This issue seems to be a friend of [#18603](https://github.com/iree-org/iree/issues/18631); we have ``` %115 = linalg.generic {indexing_maps = [#map1, #map1, #map1, #map1], iterator_types = []} ins(%112, %113, %114 : tensor, tensor,...
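The `iterator_types = []` in the snippet above indicates a rank-0 (scalar-in-tensor) `linalg.generic`: with no loop dimensions, the body runs exactly once on the scalar elements. A minimal sketch of that pattern follows; the element type, operand count, and SSA names are hypothetical, since the tensor types were elided in the comment:

```mlir
// Rank-0 generic: every indexing map is the empty map, and there are
// no iterator types because there are no dimensions to iterate over.
#map = affine_map<() -> ()>
func.func @scalar_generic(%a: tensor<f32>, %b: tensor<f32>,
                          %init: tensor<f32>) -> tensor<f32> {
  %r = linalg.generic
      {indexing_maps = [#map, #map, #map], iterator_types = []}
      ins(%a, %b : tensor<f32>, tensor<f32>)
      outs(%init : tensor<f32>) {
  ^bb0(%x: f32, %y: f32, %out: f32):
    // The body executes once; the op is effectively scalar arithmetic
    // wrapped in rank-0 tensors.
    %s = arith.addf %x, %y : f32
    linalg.yield %s : f32
  } -> tensor<f32>
  return %r : tensor<f32>
}
```

This is why the later comment about having `where.self` "return scalar arithmetic when possible" applies here: such ops can be folded to plain scalar ops rather than kept as dispatches.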
> This unpack is valid because there is extract_slice semantic in unpack ops. You can think that it is an inverse operation of pack op. The pack op has padding...
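The pack/unpack inverse relationship described above can be sketched as follows; the 5x7 source shape and padding value are hypothetical, chosen only so the inner tiles `[8, 4]` from the snippet above force padding:

```mlir
// Packing a 5x7 tensor with inner tiles [8, 4] pads it up to 8x8,
// giving outer dims ceil(5/8) x ceil(7/4) = 1x2 and a 1x2x8x4 result.
%packed = tensor.pack %src padding_value(%pad : f32)
    outer_dims_perm = [0, 1] inner_dims_pos = [0, 1] inner_tiles = [8, 4]
    into %pack_dest : tensor<5x7xf32> -> tensor<1x2x8x4xf32>

// The inverse unpack discards the padded region: it carries implicit
// tensor.extract_slice semantics, recovering the original 5x7 extent
// even though 1x2x8x4 holds 64 elements.
%unpacked = tensor.unpack %packed
    outer_dims_perm = [0, 1] inner_dims_pos = [0, 1] inner_tiles = [8, 4]
    into %unpack_dest : tensor<1x2x8x4xf32> -> tensor<5x7xf32>
```

So an unpack whose result shape is smaller than the packed tile footprint is still valid; the "missing" elements are the dropped padding.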
> [@nirvedhmeshram](https://github.com/nirvedhmeshram) I'll focus on getting the where.self op to return scalar arithmetic when possible. Sounds good, I will check if we want to support unpack with extract slice fusion...