MaheshRavishankar

Results 155 comments of MaheshRavishankar

I am not sure we need this. For the record, these passes existed and they resulted in some weird dependence between this pass and the pass that forms dispatch regions. Stepping...

Can't this happen at the Vector dialect level? Is there a reason to do this in Linalg itself?

To me, just looking at the form of the loop nest generated, it seems like a single loop nest. Having two generic ops seems like having two loop nests adjacent...
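A rough way to picture that distinction, as a minimal sketch with a made-up elementwise producer/consumer pair (the names, sizes, and operations are purely illustrative, not from the thread): two generic ops correspond to two adjacent loop nests over the same iteration space, while a single generic with the combined body is one loop nest.

```cpp
#include <array>

constexpr int N = 128; // illustrative size

// Two generic ops: two adjacent loop nests over the same iteration space.
void twoGenerics(const std::array<float, N> &a, std::array<float, N> &tmp,
                 std::array<float, N> &out) {
  for (int i = 0; i < N; ++i) // first "generic"
    tmp[i] = a[i] * 2.0f;
  for (int i = 0; i < N; ++i) // second "generic"
    out[i] = tmp[i] + 1.0f;
}

// One generic op with the combined body: a single loop nest.
void oneGeneric(const std::array<float, N> &a, std::array<float, N> &out) {
  for (int i = 0; i < N; ++i)
    out[i] = a[i] * 2.0f + 1.0f;
}
```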

> > I'm not sure how this is related to this case. There is no `linalg.matmul` in what Diego is trying to solve. Sorry, I had diverged in my mind from...

> Finding the neutral element is needed no matter what I believe. In the IR written by Diego it is there as well. Not sure. It depends on how vectorization...
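For context on the term "neutral element": it is the identity of the reduction's combining operation, i.e. the value that can pad or initialize an accumulator without changing the reduced result. A minimal sketch (the combiner enum and helper below are my own illustration, not IREE/MLIR code):

```cpp
#include <limits>

// Identity (neutral) element for a few common reduction combiners.
// Padding a vector with this value does not change the reduced result.
enum class Combiner { Add, Mul, Max, Min };

float neutralElement(Combiner c) {
  switch (c) {
  case Combiner::Add: return 0.0f;
  case Combiner::Mul: return 1.0f;
  case Combiner::Max: return -std::numeric_limits<float>::infinity();
  case Combiner::Min: return std::numeric_limits<float>::infinity();
  }
  return 0.0f; // unreachable
}
```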

Confirmed that the issue is that PR #9552 changes the workload computation to `ceilDiv(ub - lb, step)` instead of just `ub`. Both SPIR-V and CUDA do the same thing in...
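For reference, a standalone sketch (not the actual IREE code) of the difference between the two workload computations: for a loop `for (i = lb; i < ub; i += step)`, the trip count is `ceilDiv(ub - lb, step)`, which coincides with `ub` only when `lb == 0` and `step == 1`.

```cpp
#include <cassert>
#include <cstdint>

// Ceiling division for positive b.
int64_t ceilDiv(int64_t a, int64_t b) { return (a + b - 1) / b; }

// Trip count of `for (i = lb; i < ub; i += step)`.
int64_t workload(int64_t lb, int64_t ub, int64_t step) {
  return ceilDiv(ub - lb, step);
}

int main() {
  // Matches the old `ub`-based workload only in the zero-based, unit-stride case.
  assert(workload(0, 100, 1) == 100);
  // Otherwise `ub` over-counts.
  assert(workload(0, 100, 4) == 25);
  assert(workload(8, 100, 4) == 23);
  return 0;
}
```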

That pass is to be removed this quarter. I'll check if that helps fix this issue. Thanks for the triage.

The way to approach this would be to try to write the grouped convolution code as a perfectly nested loop nest. From there it should be trivial to write the Linalg op...
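A sketch of what such a perfectly nested loop nest could look like for a 1-D grouped convolution; the layout, names, and assumptions (no padding, stride 1, zero-initialized output) are mine for illustration and are not taken from the thread. The group dimension `g` ends up acting like an extra parallel, batch-like dimension, consistent with the "batch of regular convolutions" framing below.

```cpp
// Grouped 1-D convolution as a perfectly nested loop nest.
// Assumed layouts: input  [N][G][CinPerG][W]
//                  filter [G][CoutPerG][CinPerG][KW]
//                  output [N][G][CoutPerG][OW], zero-initialized by the caller.
void groupedConv1D(const float *in, const float *filt, float *out,
                   int N, int G, int CinPerG, int CoutPerG, int W, int KW) {
  const int OW = W - KW + 1; // no padding, stride 1
  for (int n = 0; n < N; ++n)
    for (int g = 0; g < G; ++g)
      for (int co = 0; co < CoutPerG; ++co)
        for (int ow = 0; ow < OW; ++ow)
          for (int ci = 0; ci < CinPerG; ++ci)
            for (int kw = 0; kw < KW; ++kw)
              out[((n * G + g) * CoutPerG + co) * OW + ow] +=
                  in[((n * G + g) * CinPerG + ci) * W + (ow + kw)] *
                  filt[((g * CoutPerG + co) * CinPerG + ci) * KW + kw];
}
```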

> I think it is literally just a batch of regular convolutions. There is a wrinkle: the high-level op from frontends typically has the channel dimension be a multiple...