MaheshRavishankar

Results: 155 comments by MaheshRavishankar

Could I get a day or so to review this? (I am behind on my reviews.)

The use of higher-D tensors (or reasoning about operations at a higher dimensional iteration space) really helped for fusion purposes, especially fusing with reshapes. I see this case as being...
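The truncated point above is about fusing `linalg` ops with reshapes by rewriting the op onto the expanded, higher-rank iteration space instead of trying to fuse "through" the reshape. Below is a minimal, illustrative sketch of that idea, assuming recent upstream `linalg`/`tensor` ops (older MLIR releases spell the reshape as `linalg.tensor_reshape` and do not take `output_shape`); it is not taken from the original thread.

```
// Before: a 1-D element-wise generic whose result is then expanded to 2-D.
#map1d = affine_map<(d0) -> (d0)>
func.func @before(%arg0: tensor<12xf32>) -> tensor<3x4xf32> {
  %init = tensor.empty() : tensor<12xf32>
  %0 = linalg.generic
         {indexing_maps = [#map1d, #map1d], iterator_types = ["parallel"]}
         ins(%arg0 : tensor<12xf32>) outs(%init : tensor<12xf32>) {
  ^bb0(%in: f32, %out: f32):
    %1 = arith.addf %in, %in : f32
    linalg.yield %1 : f32
  } -> tensor<12xf32>
  %2 = tensor.expand_shape %0 [[0, 1]] output_shape [3, 4]
         : tensor<12xf32> into tensor<3x4xf32>
  return %2 : tensor<3x4xf32>
}

// After "fusion by expansion": the generic is re-expressed on the
// higher-dimensional iteration space, so the reshape moves to the function
// boundary and can often fold away against producers/consumers there.
#map2d = affine_map<(d0, d1) -> (d0, d1)>
func.func @after(%arg0: tensor<12xf32>) -> tensor<3x4xf32> {
  %in2d = tensor.expand_shape %arg0 [[0, 1]] output_shape [3, 4]
            : tensor<12xf32> into tensor<3x4xf32>
  %init = tensor.empty() : tensor<3x4xf32>
  %0 = linalg.generic
         {indexing_maps = [#map2d, #map2d],
          iterator_types = ["parallel", "parallel"]}
         ins(%in2d : tensor<3x4xf32>) outs(%init : tensor<3x4xf32>) {
  ^bb0(%in: f32, %out: f32):
    %1 = arith.addf %in, %in : f32
    linalg.yield %1 : f32
  } -> tensor<3x4xf32>
  return %0 : tensor<3x4xf32>
}
```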

That is expected behavior. It is up to the caller to ensure that the tile sizes are set such that the reduction dimensions are not tiled. We could enhance the...
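For concreteness, a sketch of what "do not tile the reduction dimensions" means: in the reduction below, `d1` is the reduction dimension, so a tile-size vector like `[8, 0]` (where 0 means "leave untiled") is safe, while tiling `d1` would need extra handling. The transform-dialect driver at the end is an assumption about a recent upstream MLIR; at the time of the comment the same choice was expressed through the C++ tiling options.

```
// d0 is parallel, d1 is the reduction dimension of this row-wise sum.
#map_in  = affine_map<(d0, d1) -> (d0, d1)>
#map_out = affine_map<(d0, d1) -> (d0)>
func.func @row_sum(%arg0: tensor<128x256xf32>,
                   %init: tensor<128xf32>) -> tensor<128xf32> {
  %0 = linalg.generic
         {indexing_maps = [#map_in, #map_out],
          iterator_types = ["parallel", "reduction"]}
         ins(%arg0 : tensor<128x256xf32>) outs(%init : tensor<128xf32>) {
  ^bb0(%in: f32, %acc: f32):
    %1 = arith.addf %in, %acc : f32
    linalg.yield %1 : f32
  } -> tensor<128xf32>
  return %0 : tensor<128xf32>
}

// Tile sizes [8, 0]: tile the parallel dimension by 8 and leave the
// reduction dimension untiled, which is the caller's responsibility here.
module attributes {transform.with_named_sequence} {
  transform.named_sequence @__transform_main(
      %root: !transform.any_op {transform.readonly}) {
    %g = transform.structured.match ops{["linalg.generic"]} in %root
           : (!transform.any_op) -> !transform.any_op
    %tiled, %loop = transform.structured.tile_using_for %g tile_sizes [8, 0]
           : (!transform.any_op) -> (!transform.any_op, !transform.any_op)
    transform.yield
  }
}
```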

@hanhanW I think what @benvanik suggested is what I was trying to describe. Sorry if that didn't come across clearly (thanks @benvanik for the nice summary!). So this would be...

I don't know. I am not working on it. I don't remember if I already fixed it... We can drop it down to P2 since it's old.

> Not all reductions are scalars though. The zero-rank is just the degenerate case, but take for example (from the test-suite): > > ``` > func @reduce_valid(%arg0: tensor, %arg1 :...

> Is this actually a core issue about how we want to model mhlo.reduce going forward, or is this more about the mechanics of lowering to linalg? It feels like...

> > Is this actually a core issue about how we want to model mhlo.reduce going forward, or is this more about the mechanics of lowering to linalg? It feels...

Agreed, except the op name should be "ShiftRightLogical". The auto-generated serialization/deserialization relies on this being the case. See here: https://github.com/tensorflow/mlir/blob/b3f8dd39a739d1652524188628a07901eccb8676/include/mlir/Dialect/SPIRV/SPIRVBase.td#L1253
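For reference, the naming convention in question: the dialect op mnemonic is the SPIR-V instruction name with the `Op` prefix dropped, and the auto-generated serializer/deserializer keys on that match. A small usage example (the dialect prefix is `spirv.` in today's upstream MLIR; it was `spv.` at the time of the linked file):

```
// The mnemonic "ShiftRightLogical" mirrors the SPIR-V OpShiftRightLogical
// instruction, which is what the generated (de)serialization expects.
func.func @shift_right_logical(%value: i32, %shift: i16) -> i32 {
  %0 = spirv.ShiftRightLogical %value, %shift : i32, i16
  return %0 : i32
}
```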

Sorry for chiming in late here, but I think a good place to put this would be in the conversion framework into SPIR-V. Essentially, make sure all the type conversion...
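Since the rest of the suggestion is truncated, here is only a rough illustration of what type conversion in the SPIR-V conversion framework looks like at the IR level: types without a direct SPIR-V counterpart (such as `index`) are rewritten by the SPIR-V type converter as part of dialect conversion. The op names are from the current upstream `spirv` dialect; the chosen integer width, the surrounding `spirv.module`, and ABI attributes all depend on the target environment, so this is a conceptual sketch rather than exact pass output.

```
// Input: uses `index`, which has no direct SPIR-V counterpart.
func.func @scale(%i: index) -> index {
  %c2 = arith.constant 2 : index
  %0 = arith.muli %i, %c2 : index
  return %0 : index
}

// Conceptual output after conversion: the type converter rewrites `index`
// to the target integer width (i32 here) and patterns emit spirv ops.
spirv.func @scale(%i: i32) -> i32 "None" {
  %c2 = spirv.Constant 2 : i32
  %0 = spirv.IMul %i, %c2 : i32
  spirv.ReturnValue %0 : i32
}
```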