Ramiro Leal-Cavazos
> I assume we will also add the python wrapper to point to the custom ops from python ?

Yeah, the Torch-MLIR Python API will have a way to specify...
I think the code that generates the indexing maps for the `linalg.generic` op is not handling zero-rank tensors correctly.
I think this line should be using the result rank variable `rank` rather than the operand rank `type.getRank()`: https://github.com/llvm/llvm-project/blob/86bf43d2ab1334af1ca7cb10d407b5afe19fc65f/mlir/lib/Conversion/TosaToLinalg/TosaToLinalg.cpp#L617
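To make the zero-rank case concrete, here is a PyTorch-level sketch of the broadcast semantics involved (not the TOSA-to-Linalg code itself): a rank-0 operand contributes no dimensions of its own, so any per-operand indexing has to be phrased in terms of the result's rank.

```python
import torch

# A zero-rank (scalar) tensor: its shape is () and its rank/ndim is 0.
scalar = torch.tensor(2.0)
assert scalar.dim() == 0 and scalar.shape == ()

# Broadcasting it against a rank-2 operand yields a rank-2 result, so an
# indexing map for the scalar operand must be built from the result's
# rank (here 2), not from the operand's own rank (0).
matrix = torch.ones(3, 4)
result = scalar * matrix
assert result.shape == (3, 4)
```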
> Finish rebase and uplift the llvm to #1502. But got a new build-test out-of-tree bug which is not from my patch.
>
> ```
> pip installing Pytorch..
> ...
> ```
> It was happening in CI (because of a bogus cache entry) so I deleted that cache entry and restarted the build, which then passed. All is fine now, but...
> It sounds like this is the action item for this issue then? Any thoughts on Ramiro's alternative approach of rewriting the ops to just index directly?

Adding the canonicalization...
As far as I know, you can only access `strip_overloads` through functorch.
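For anyone looking for it, a minimal sketch of what that looks like, assuming the installed functorch exports `strip_overloads` from `functorch.compile` and `make_fx` at the top level (the exact module paths may differ between versions):

```python
import torch
from functorch import make_fx                   # assumed top-level export
from functorch.compile import strip_overloads   # assumed export; path may vary by version

def f(x):
    return torch.add(x, x)

# make_fx traces to an FX graph whose call targets are ATen op overloads
# (e.g. aten.add.Tensor); strip_overloads rewrites those targets to their
# overload packets (aten.add) in place.
gm = make_fx(f)(torch.ones(3))
strip_overloads(gm)
print(gm.graph)
```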
@dan-garvey I think maybe you forgot to stage some of the changes. All I see now in this PR is the deletion of the `torchscript_e2e_heavydep_tests` directory.
Not sure if this was fixed already, but I think the problem is due to the fact that in Python a single-element tuple needs to have a comma after...
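For illustration, since the comment above is truncated, this is the single-element-tuple behaviour being described (the variable names are made up for the example):

```python
# Without a trailing comma, the parentheses are just grouping and the
# expression is an int, not a tuple.
not_a_tuple = (1)
assert isinstance(not_a_tuple, int)

# The trailing comma is what makes it a single-element tuple.
a_tuple = (1,)
assert isinstance(a_tuple, tuple) and len(a_tuple) == 1
```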
Hi @Shukla-Gaurav, thanks for the nice explanation! This seems like a good approach to me.